Multi-Attribute Decision-Making Methods as a Part of Mathematical Optimization

: Optimization problems are relevant to various areas of human activity. In different cases, the problems are solved by applying appropriate optimization methods. The range of optimization problems has given rise to a number of different methods and algorithms for reaching solutions. One class of problems belongs to the decision-making area, in which an optimal option is selected from several options under comparison. Multi-Attribute Decision-Making (MADM) methods are widely applied for reaching the optimal solution, selecting a single option or ranking choices from the most to the least appropriate. This paper is aimed at presenting MADM methods as a component of mathematics-based optimization. The theoretical part of the paper presents the evaluation criteria of the methods as objective functions. To illustrate the idea, some of the methods most frequently used in practice were chosen: Simple Additive Weighting (SAW), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the Complex Proportional Assessment Method (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA) and the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). These methods use a finite number of explicitly given alternatives. The research literature does not propose the best or most appropriate MADM method for dealing with a specific task; thus, several techniques are frequently applied in parallel to make the right decision. Each method differs in its data processing, and therefore the results of MADM methods are obtained on different scales. The practical part of this paper demonstrates how to combine the results of several applied methods into a single value. This paper proposes a new approach to evaluation that involves merging the results of all applied MADM methods into a single value, taking into account the suitability of the methods for the task to be solved.
The approach is based on the fact that the more stable a method is to a minor data change, the greater the importance (weight) it has for the merged result. This paper proposes an algorithm for determining the stability of MADM methods by applying statistical simulation using a sequence of random numbers from a given distribution. The paper also shows different approaches to normalizing the results of MADM methods. For handling negative values and making the scales of the results of the methods equal, Weitendorf's linear normalization as well as classical and author-proposed transformation techniques are illustrated in this paper.


Introduction
In a specific activity, a person consciously and intuitively seeks to find the best solutions to emerging problems or tasks. The action of making the best or most effective use of a situation or resource is called optimization. The Simple Additive Weighting (SAW), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Complex Proportional Assessment Method (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA) and Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods applied in this paper have been described in different research papers as Multiple Criteria Decision Making (MCDM) [1][2][3][4][5][6], Multiple Attribute Decision Making (MADM) [7][8][9], Multiple Criteria Decision Analysis (MCDA) [10] or Multi-Attribute Decision Analysis (MADA) [11], and Multi-Criteria Analysis (MCA) [12,13]. Since this work is focused on decision-making and the number of alternatives is explicitly given and finite, the name MADM will be used to define the above-listed methods.
MADM methods are aimed at identifying the most satisfactory of several comparative alternatives or at ranking options according to their relevance in terms of the evaluated objective [14]. The methods are used for selecting the most satisfactory alternative/solution provided that there is no such alternative for which all criteria values are the best.
To solve an optimization problem with classical optimization methods, the function of its objective is fixed, establishing the set of objects to be optimized or the allowable area to be determined. The minimum or maximum values of the function are sought depending on the purpose of the problem being solved. The theoretical part of this work presents MADM methods as a component of mathematical optimization methods, and evaluation criteria for SAW, TOPSIS, COPRAS, MOORA and PROMETHEE methods appear as objective functions, which is a new form of presenting and interpreting methods. To illustrate the idea of this publication, some of the most widely applied MADM methods have been selected. The presented methodology can be transferred to other methods as well.
The judging matrix and the vector of criterion weights are the components of most MADM methods. The judging matrix covers statistical data or the values of expert evaluation according to the criteria defining the objective [14]. Since the criteria have different impacts on the outcome of the problem to be solved, the significance (weights) of the criteria is determined [15]. Criterion weights can be set directly or by employing certain weighting methods. The main idea of most of the used MADM methods is merging criterion values and their weights into a single evaluation characteristic (i.e., the summarized criterion of the method). The data of MADM methods are static, and their values do not vary in the problem-solving process.
Most of the assignments solved by people include problems that lack sufficient numerical data or in which the investigated objects are impossible to measure. In such cases, the judging matrix is supplemented by data obtained from expert evaluation. Particular focus is placed on selecting experts in a particular field, considering their professional competence, work experience, scientific degree, research activity and ability to address specific issues in the given field. MADM methods operate on numerical values, although the criteria themselves can be both quantitative and qualitative. The qualitative meanings of criteria, in some cases, facilitate expert evaluation, which can be individual, when the expert expresses an opinion independently of other experts, or shared and accepted within a group of professionals.
The research literature does not propose the best or most appropriate MADM method for dealing with a specific problem. This question is relevant, and thus many research papers focus on determining the stability of a method, on the basis that any mathematical model or method can be applied in practice only if it remains stable with respect to the applied parameters [16]. A mathematical model is considered stable if minor variations in the model parameters produce only small changes in the results. Multiple MADM methods are applied in most complex decision-making tasks to ensure the accuracy of the final result. When several MADM methods are used for evaluation, it becomes unclear which method's results are reliable. This paper proposes a new approach that helps the expert make the right decision. The core of the suggested approach is to apply several MADM methods and to determine the suitability/impact of the employed MADM methods on the problem solved (i.e., to clarify the stability of each method). The final result consists of the estimates of several methods, taking into account the weight of the effect of each method. The paper verifies the stability of multi-criteria methods by slightly changing the data in the initial decision matrix (i.e., the expert evaluations and the weight vector) and fixing the recurrence frequency of the best alternative of the initial data. Previous papers of the author showed that the higher the number of simulations, the more accurate the evaluation of the stability of a multi-criteria method (i.e., the range of the varying result decreases). A sufficient number of recurrences is reached when the result of evaluating MADM stability remains almost unchanged; 10^5 repetitions can be treated as an adequate number of estimations [17].
The practical part of the paper combines the results of several MADM methods into a single outcome and shows a few ways to normalize results obtained using MADM methods of different scales.

Literature Review
Analytical mathematical optimization problems were solved as early as the 17th century. One of the first techniques, investigating the problem of finding minima and maxima, was described by P. Fermat in the 17th century. Newton developed the method of fluxions. The technique was rediscovered and published in the paper "New Method for the Greatest and the Least" by G. W. Leibniz in 1684. Further efforts by Euler and Lagrange led to solutions of extremal tasks. In 1824, Fourier created the first algorithm for solving linear arithmetic constraints [18]. This algorithm enabled further advances in the field, such as the main duality theorem, the Farkas lemma, the Motzkin transposition theorem and others [19]. The traditionally employed models of optimization include linear programming, sequential quadratic programming, nonlinear programming, and dynamic programming [20]. In 1939, the first formulation of the linear programming problem and a method for solving it were proposed by Leonid Kantorovich. In 1947, Dantzig created the simplex method, which was effectively used to solve linear programming problems [21]. Derivative-based stochastic optimization began with the seminal paper by Robbins and Monro (1951) [22]. Richard Bellman developed the dynamic programming method in the 1950s [23].
Decision-making methods based on optimality were introduced by Pareto in 1896 and applied to a wide range of problems. The Multi-Objective Evolutionary Algorithm (MOEA) [24] is used to find the optimal Pareto solutions for specific problems [25]. Keeney and Raiffa [26] and Fishburn [27] introduced the Multi-Attribute Value Theory (MAVT), the Multi-Attribute Value Analysis (MAVA) and Multi-Attribute Utility Theory (MAUT) methods. Data envelopment analysis (DEA), introduced by Charnes et al., is a linear programming method for measuring the efficiency of multiple decision-making units by analysing the problems of multiple inputs and outputs [28].
Multiple criteria decision-making methods evolved from operations research theory by solving problems such as the development of computational and mathematical tools to support the subjective assessment of performance criteria by decision-makers [29]. MADM, as a discipline, has a relatively short history of approximately 30 years. Its role has increased significantly in different application areas along with the development of new methods and improved old methods in particular.
A work by Hwang and Yoon presented a plethora of methods for solving MADM problems [7], including methods for the cardinal preference of attributes: the Linear Assignment method [30], the Simple Additive Weighting (SAW) method [31], the Hierarchical Additive Weighting method, the ELECTRE method, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [7]. The most familiar and commonly used is the SAW method, which reflects the main idea of multi-criteria methods: merging criterion values and their weights into a single value [32].
Peng and Wang proposed the concept of hesitant uncertain linguistic Z-numbers (HULZNs) and presented a Multi-Criteria Group Decision-Making (MCGDM) method by integrating power operators with the Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) model [5]. Peng and Wang merged Multi-Objective Optimization by Ratio Analysis plus the Full Multiplicative Form (MULTIMOORA) and power aggregation operators in order to create a comprehensive decision model for MCGDM problems with Z-numbers [33]. The outranking ELECTRE [34] and PROMETHEE [35] methods were described in the publication on multiple criteria decision analysis by Belton and Stewart in 2001 [10]. Opricovic and Tzeng conducted a comparative analysis of the VIKOR and TOPSIS methods in 2004 [5,36].
Criterion weights are one of the components of MADM methods and therefore have a strong impact on the final result [15]. For defining criterion weights, subjective evaluation, in which experts examine the significance of the criteria, is the most frequently applied technique, although objective and generalized estimates are known [49]. Weights can be set directly or using weighting methods such as the Analytic Hierarchy Process (AHP) [50,51], the Fuzzy Analytic Hierarchy Process (FAHP) [52,53], SWARA [54], Criterion Impact LOSS (CILOS) [55], Integrated Determination of Objective Criteria Weights (IDOCRIW) [14,56], etc. Recalculation of criterion weights under the Bayes theorem is proposed in the paper [56]. Regardless of the method, the evaluation principle remains the same: the most important criterion receives the highest weight. It was agreed that the sum of all weights should be equal to 1 [1]. Any measurement scale may be used for the evaluations.
Based on a study by Sabaei et al., the most common decision management methods used in Scopus database publications are AHP, ELECTRE, and PROMETHEE [57]. The early 1990s witnessed a shift of focus toward methods that consider indifference and ensure the transparency of analysis processes [58]. The concept of sensitivity analysis in decision theory concerns the effective use and implementation of quantitative decision models, the purpose of which is to assess the stability of an optimal solution under changes in parameters, the impact of the lack of controllability of specific parameters and the need for the precise estimation of parameter values [63]. The first significant works on sensitivity analysis in the field of decision-making were done by Evans [63], who extended the concepts of sensitivity analysis in linear programming into a formal approach applicable to classical decision-theoretic problems [64] and presented two simple computational procedures for the sensitivity analysis of additive multi-attribute value models with respect to variations in attribute weights. Insua [65] developed a conceptual framework for sensitivity analysis in discrete multi-criteria decision-making, which allowed simultaneous variations in judgmental data and applied to many paradigms of decision analysis. Janssen [66] discussed the sensitivity of the rankings of alternatives to the overall uncertainty in scores, and priorities were analyzed using the Monte Carlo approach. Butler [67] presented a simulation approach allowing simultaneous changes in the weights and generating results that can easily be analyzed statistically to provide insights into multi-criteria model recommendations.
Wolters and Mareschal [68] proposed three novel types of sensitivity analysis focused on and elaborated for the PROMETHEE methods. Masuda [69] studied the sensitivity problems of the AHP method. In his work, he concentrated on how changes in the entire columns of the decision-making matrix might affect the values of the composite priorities of alternatives. Triantaphyllou [70] presented a methodology for performing a sensitivity analysis of the weights of decision criteria and identifying the performance values of the alternatives expressed in terms of decision criteria. The estimation of the effect/impact of uncertainty in the SAW method was performed by Podvezko [71], who determined the points of varying ranges of criterion weights of the investigated process, evaluated compatibility level and stability of expert opinions and assessed the effect of uncertainty on ranking comparable objects employing the imitation method. The impact of varying weights on the final result in the SAW method was studied by Zavadskas [72] and Memariani [73]. The influence of the elements of the decision matrix on the final ranking result was analyzed by Alinezhad [74]. The effect of the importance of criterion weights on the results of the TOPSIS method was studied by Yu [75] and Alinezhada [76]. Misra focused on a comparison of AHP, Decision-Making Trial and Evaluation Laboratory (DEMATEL), COPRAS, and TOPSIS methods [77]. Podvezko [32] compared SAW, TOPSIS and COPRAS methods. Moghassem [78] increased and decreased all criterion weights by 5%, 10%, 15%, and 20% in analyzing the sensitivity of TOPSIS and VIKOR. Hsu conducted the sensitivity analysis of TOPSIS by increasing and decreasing the top three weights by 10% [79].

MADM Methods as a Component of Mathematics-Based Optimization Techniques
To formulate the optimization problem, the paper presents a set of optimized elements and the measure of goodness of its elements (quality estimates).
The optimization problem takes the form

f(x) → opt, x ∈ D,

where f(x) : D → Y is the objective function or criterion; D is the set, or permissible area, of the optimized objects; and opt is the minimum or maximum value of the function f(x).
The literature provides a number of different classifications of optimization problems. Typically, specific decision-making methods are created for each category of problems according to the characteristics of that particular class. Weights do not vary in SAW, TOPSIS, COPRAS, MOORA and PROMETHEE methods. Weights are determined using subjective or objective weighting methods. The number of comparable alternatives is finite in these methods.
MADM methods can be presented as a mathematical optimization problem as follows, where ν is the number of the MADM method. The merit of alternatives i = 1, . . . , n is evaluated according to criteria j = 1, . . . , m, and the values form the decision matrix R = (r_ij). The influence of the criteria on the evaluation result differs, and therefore the vector ω = (ω_j), j = 1, . . . , m, of criterion weights is determined, defining the importance of the criteria.
where the values of r_ij are normalized. When the values of criteria are multi-dimensional, they are transformed. The values of the maximized criteria are calculated according to the formula

r̃_ij = r_ij / max_i r_ij,

so that the highest value of r̃_ij is equal to 1. The values of the minimized criteria are correspondingly calculated according to the formula

r̃_ij = min_i r_ij / r_ij,

so that the lowest value of r_ij is likewise mapped to 1. For standard criteria, the principle of simple linear scalarization is applied.
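As a concrete illustration, the SAW aggregation with the normalization above can be sketched in Python; the decision matrix and weights are made-up illustrative values, not data from the paper:

```python
import numpy as np

def saw_scores(r, w, maximize):
    """Simple Additive Weighting: merge normalized criterion values and
    weights into a single estimate per alternative (higher is better)."""
    r = np.asarray(r, dtype=float)
    # Maximized criteria: r_ij / max_i r_ij (the best value becomes 1).
    # Minimized criteria: min_i r_ij / r_ij (the best, lowest value becomes 1).
    r_norm = np.where(maximize, r / r.max(axis=0), r.min(axis=0) / r)
    return r_norm @ np.asarray(w, dtype=float)

# Three alternatives, two maximized criteria (illustrative data).
scores = saw_scores([[9.0, 8.0], [8.0, 9.0], [9.5, 7.5]],
                    [0.6, 0.4], [True, True])
best = int(np.argmax(scores))
```

The weighted sum rewards an alternative that is strong on the heavily weighted criterion, which is exactly the objective function SAW maximizes.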

TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution)
The method refers to vector data normalization:

r̃_ij = r_ij / sqrt(Σ_{i=1}^{n} r_ij²),

where r̃_ij is the normalized value of the jth criterion for the ith alternative. The vectors of the best values R+ and the worst values R− of the criteria (the ideal and anti-ideal alternatives) are calculated as

R+ = {max_i r̃_ij | j ∈ J1; min_i r̃_ij | j ∈ J2}, R− = {min_i r̃_ij | j ∈ J1; max_i r̃_ij | j ∈ J2},

where J1 is the set of indices of the maximized criteria, J2 is the set of indices of the minimized criteria, and r−_j (r+_j) is the worst (best) value of the jth criterion. The basic principle of the method is to find the alternative at the shortest overall distance from the best values of the criteria and at the maximum distance from the worst values. The method does not require the rearrangement of the minimized (maximized) criteria into maximized (minimized) ones.
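A minimal Python sketch of these TOPSIS steps (vector normalization, ideal and anti-ideal vectors, closeness to the ideal) might look as follows; the data are illustrative:

```python
import numpy as np

def topsis(r, w, maximize):
    """TOPSIS closeness coefficients (higher is better).

    r : (n, m) decision matrix, w : (m,) weights summing to 1,
    maximize : (m,) boolean mask of the maximized criteria."""
    r = np.asarray(r, dtype=float)
    # Vector normalization, then weighting: v_ij = w_j * r_ij / ||r_j||.
    v = np.asarray(w, dtype=float) * r / np.linalg.norm(r, axis=0)
    best = np.where(maximize, v.max(axis=0), v.min(axis=0))    # ideal R+
    worst = np.where(maximize, v.min(axis=0), v.max(axis=0))   # anti-ideal R-
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Three alternatives, two maximized criteria (illustrative data).
c = topsis([[7.0, 9.0], [8.0, 7.0], [9.0, 6.0]], [0.5, 0.5], [True, True])
```

Note that minimized criteria need no prior rearrangement: the mask simply swaps which column extreme counts as ideal.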

PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation)
where i = 1, 2, . . . , n; Σ_{j=1}^{m} ω_j = 1; d_j(A_i, A_g) = r_ij − r_gj is the difference of the values r_ij and r_gj of the alternatives A_i and A_g for the jth criterion R_j; and p_h(d) = p_h(d_j(A_i, A_g)) is the value of the hth priority function for the selected jth criterion.
The PROMETHEE method uses the basic ideas of other methods like combining the values of weights and normalized criteria into a single estimate (SAW method) and the pairwise comparison of criteria (AHP method). Instead of the normalized criteria values, the value of the priority function p h (d), 0 ≤ p h (d) ≤ 1 is used, and all possible pairs of alternatives for each of the criteria are compared with each other. A higher value of p h (d) corresponds to a better alternative; if the difference d is lower than the established critical value q, then p h (d) = 0. If d is greater than the maximum limit s for the values of criteria, then p h (d) = 1.
In practice, six (h = 1, . . . , 6) priority functions p_h(d) are applied [3,80]:

The priority function of the usual criterion (Figure 1a) is p(d) = 0 for d ≤ 0 and p(d) = 1 for d > 0.

The priority function of the U-shape criterion (Figure 1b) is p(d) = 0 for d ≤ q and p(d) = 1 for d > q.

The priority function of the V-shape criterion (linear priority, Figure 1c) is p(d) = 0 for d ≤ 0, p(d) = d/s for 0 < d ≤ s, and p(d) = 1 for d > s.

The priority function of the level criterion (Figure 1d) is p(d) = 0 for d ≤ q, p(d) = 1/2 for q < d ≤ s, and p(d) = 1 for d > s.

The priority function of the V-shape with indifference criterion (Figure 1e) is p(d) = 0 for d ≤ q, p(d) = (d − q)/(s − q) for q < d ≤ s, and p(d) = 1 for d > s.

The priority function of the Gaussian criterion (Figure 1f) is p(d) = 0 for d ≤ 0 and p(d) = 1 − exp(−d²/(2σ²)) for d > 0.

As mentioned above, PROMETHEE, similarly to the other multi-criteria decision methods, applies the idea of the SAW method, but instead of the normalized criterion values r̃_ij it uses the values of the specifically selected priority functions p_h(d), where the argument d is the difference between criterion values.
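For instance, the V-shape with indifference criterion, which is also used in the experimental part below, can be written directly from its definition:

```python
def v_shape_with_indifference(d, q, s):
    """PROMETHEE priority function: V-shape with indifference criterion.

    d is the difference between two alternatives on one criterion,
    q the indifference threshold and s the strict-preference threshold,
    following the notation of the text (0 <= p(d) <= 1)."""
    if d <= q:
        return 0.0            # differences below q are ignored
    if d >= s:
        return 1.0            # differences above s give full preference
    return (d - q) / (s - q)  # linear growth in between
```

A larger difference d thus yields a larger priority value, exactly as the SAW-style aggregation of PROMETHEE requires.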

COPRAS (Complex Proportional Assessment)
where ω+_j (ω−_j) are the weights of the maximized (minimized) criteria, and r̃−_ij (r̃+_ij) are the normalized values of the minimized (maximized) criteria for each ith alternative. The values of the estimates of the alternatives are normalized according to Equation (4). The COPRAS method separately assesses the effect of the minimized and the maximized criteria on the result of the evaluation [38,81].
MOORA (Multi-Objective Optimization by Ratio Analysis)

For the values of r_ij, vector normalization according to Equation (8) is applied. The initial version of the MOORA method did not take the importance of the criteria, expressed as weights, into account. The calculation principle of the method is to subtract the sum of the normalized values of the minimized criteria (from g + 1 to m) from the sum of the normalized values of the maximized criteria (from 1 to g). Brauers later developed the MOORA method by introducing criterion weights [39]. This improved MOORA method is applied in the calculations.
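The improved (weighted) MOORA principle described above amounts to a signed weighted sum over vector-normalized criteria; a sketch with made-up data:

```python
import numpy as np

def moora_scores(r, w, maximize):
    """Improved MOORA: the weighted sum of the vector-normalized
    maximized criteria minus the weighted sum of the minimized ones."""
    r = np.asarray(r, dtype=float)
    # Vector normalization per criterion column, then weighting.
    v = np.asarray(w, dtype=float) * r / np.linalg.norm(r, axis=0)
    sign = np.where(maximize, 1.0, -1.0)   # minimized criteria subtract
    return (v * sign).sum(axis=1)

# Two alternatives; first criterion maximized, second minimized.
scores = moora_scores([[5.0, 2.0], [4.0, 1.0]], [0.5, 0.5], [True, False])
best = int(np.argmax(scores))
```

The second alternative wins here because its lower value on the minimized criterion outweighs its smaller value on the maximized one.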
The presented methods have been selected as some of those most frequently applied in practice. Similarly, other familiar methods, such as VIKOR, ELECTRE and others, can be presented as objective functions for MADM evaluation.

Experimental Application of the Methodology Merging MADM Methods
The application of several MADM methods may produce evaluation results on different scales and even different rankings, in which case it is not clear what decision should be made. Each method has its own theoretical basis and logic, and therefore the results differ.
This chapter describes the methodology for merging the results of MADM methods and presents its practical application. The methodology proposes making calculations using several MADM methods and then merging their results into a single value according to the importance of each method for the problem solved. The SAW, COPRAS, TOPSIS, PROMETHEE and MOORA methods are used in the calculations.
To sum up the results of different methods into a single value, the result data must be normalized beforehand. Linear, classical, vector, logarithmic and other normalization techniques are known. Unlike those of the other methods, the results obtained by applying PROMETHEE include both positive and negative numbers. To bring the results of the PROMETHEE method and the other MADM techniques to a uniform scale, the PROMETHEE result data must first be converted into positive values.

Methodology for Merging the Results of MADM Methods
The weight representing the importance of a MADM method is defined as Ω_ς. The stability result of an individual method is defined as S_ς and is expressed as a percentage.
The weights of the methods are normalized in the following way:

Ω_ς = S_ς / Σ_ς S_ς. (19)

The best alternative is established as

A* = max_i Σ_ς Ω_ς µ_i,ς, (20)

where µ_i,ς is the normalized result of the ςth MADM method for the ith alternative.
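A sketch of this merging step in Python: the method weights Ω_ς are obtained by normalizing the stability values S_ς, and the merged estimate of each alternative is the weighted sum over the methods' normalized results. The stability values are those reported later in the experimental part, while the per-method results µ are made-up illustrative numbers:

```python
import numpy as np

def merge_madm(mu, s):
    """Merge the normalized results of several MADM methods into a
    single value per alternative.

    mu : (k, n) array; row ς holds the normalized results µ_i,ς of
         the ς-th method for the n alternatives.
    s  : (k,) stability values S_ς (percent); the method weights are
         Ω_ς = S_ς / sum(S_ς)."""
    omega = np.asarray(s, dtype=float)
    omega = omega / omega.sum()                   # weights summing to 1
    return omega @ np.asarray(mu, dtype=float), omega

# Stability values from the experimental part (SAW, TOPSIS, MOORA,
# PROMETHEE); the per-method results mu are illustrative only.
mu = [[0.2, 0.5, 0.3],
      [0.1, 0.6, 0.3],
      [0.3, 0.4, 0.3],
      [0.2, 0.5, 0.3]]
merged, omega = merge_madm(mu, [30.7, 30.9, 29.3, 26.8])
```

With these stability values, the weights come out close to the Ω values reported in the experimental section (about 0.261, 0.263, 0.249 and 0.228).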
For handling negative values and making the scales of the results of the methods equal, Weitendorf's [82] linear normalization, which rearranges data into the range [0, 1], is suitable:

x_tr = (x − x_min) / (x_max − x_min), (21)

where x_tr ∈ [0, 1] is the normalized result of the method, x is the initially obtained result of the method, x_min is the lowest value of the results of the methods, and x_max is the highest value of the results of the methods.
Another way of making the MADM results comparable is to employ classical normalization [83]:

x_tr = x / Σ x, (22)

where the sum is taken over the results of all alternatives. In this case, the results of the PROMETHEE method are transformed into positive numbers beforehand. The transformed value of the evaluation result takes the form F_i, i = 1, . . . , n. The results F_i obtained by applying the PROMETHEE method are sorted in ascending order, and the lowest transformed result is set equal to F_1 = 1. The other transformed values are calculated as follows:

F̃_i = 1 + (F_i − F_1), (23)

where F_1 is the lowest initial result of the method.
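The rescaling steps discussed here (Weitendorf's linear normalization, classical normalization, and the shift that makes the lowest PROMETHEE result equal to 1) can be sketched as plain functions; the sample flow values are illustrative, not results from the paper:

```python
def weitendorf(values):
    """Weitendorf's linear normalization: map results into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def classical(values):
    """Classical normalization: divide each result by the total sum."""
    s = sum(values)
    return [v / s for v in values]

def shift_to_one(values):
    """Transform PROMETHEE results so that the lowest value becomes 1,
    making all results positive before classical normalization."""
    lo = min(values)
    return [v - lo + 1.0 for v in values]

flows = [0.25, -0.75, 0.0]    # PROMETHEE-style net flows (illustrative)
lin = weitendorf(flows)       # -> [1.0, 0.0, 0.75]
pos = shift_to_one(flows)     # -> [2.0, 1.0, 1.75]
```

Note the drawback mentioned later in the paper: linear normalization always maps the worst result to exactly 0, whereas the shift-then-classical route keeps every alternative's estimate strictly positive.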

Algorithm for Defining MADM Stability
Any mathematical model or method can be applied in practice provided it is stable with respect to the applied parameters. The stability of MADM methods is verified by employing the statistical simulation method using a sequence of random numbers from a given distribution.
The algorithm for evaluating the stability of a MADM method is presented in Figure 2. MADM method ν determines the best alternative i for the initial data and fixes the number of this alternative, Iopt. Verifying the stability of multi-criteria methods involves slightly changing the data in the initial judging matrix (i.e., the expert evaluations r_ij and the weights w_j). The calculation is repeated with the newly received values, new r_ij and new w_j, using the MADM method, thus determining the number of the best alternative, newIopt. The counter sk captures the number of times newIopt coincides with the initial Iopt. As mentioned in the introduction, Y = 10^5 was selected as a sufficient number of cycles to evaluate the stability of the method to the nearest 0.1.

The stability coefficient, which fixes the frequency of the recurrence of the best initial alternative when the preliminary data are changed, is then calculated. The higher the stability coefficient, the more important the method is for the result of the problem.
When no information on the distribution of the parameters of MADM methods is available, the uniform distribution is used for generating random values x_ς from the range [X_min, X_max]:

x_ς = X_min + q_ς (X_max − X_min),

where q_ς ∈ [0, 1] is a uniformly distributed random number.
The random values of the alternative estimates and criterion weights are generated by slightly changing the initial data r_ij and w_j by 10%, with q_ς ∈ [0, 1]. The variation limits [min r_ij, max r_ij] of the alternative estimates r_ij are determined as

min r_ij = 0.9 r_ij, max r_ij = 1.1 r_ij.

Accordingly, the variation limits [min w_j, max w_j] of the criterion weights w_j are equal to

min w_j = 0.9 w_j, max w_j = 1.1 w_j.

By applying the algorithm for verifying the stability of a MADM method (Figure 2), the stability of all the multi-criteria decision-making methods described in this paper is checked. The higher the frequency of the recurrence of the best alternative, the more stable the method. The proposed method considers the uncertainty of the expert evaluation data and therefore decreases the subjectivity of the conducted evaluation. An evaluation carried out by applying multiple MADM methods allows selecting the result of the most stable method or merging the results of several methods into a single value.
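The stability check of Figure 2 can be sketched as a Monte Carlo loop. Here `best_by_saw` is a hypothetical stand-in for any MADM method that returns the index of the best alternative, and the data are illustrative (a production run would use Y = 10^5 trials):

```python
import random

def best_by_saw(r, w):
    """Stand-in MADM method: index of the best alternative by SAW."""
    scores = [sum(wj * vj for wj, vj in zip(w, row)) for row in r]
    return scores.index(max(scores))

def stability(method, r, w, trials=1000, delta=0.10):
    """Estimate the stability coefficient (in %) of a MADM method:
    the recurrence frequency of the initially best alternative when
    every estimate r_ij and weight w_j is perturbed uniformly
    within +/- delta (10% in the text)."""
    i_opt = method(r, w)                     # best alternative, initial data
    hits = 0
    for _ in range(trials):
        new_r = [[v * (1.0 + random.uniform(-delta, delta)) for v in row]
                 for row in r]
        new_w = [v * (1.0 + random.uniform(-delta, delta)) for v in w]
        total = sum(new_w)
        new_w = [v / total for v in new_w]   # keep weights summing to 1
        if method(new_r, new_w) == i_opt:    # does the winner recur?
            hits += 1
    return 100.0 * hits / trials

# A clearly dominant alternative stays on top under +/-10% noise.
s = stability(best_by_saw, [[9.0, 9.0], [1.0, 1.0]], [0.5, 0.5], trials=200)
```

With near-tied alternatives, as in the experimental part below, the recurrence frequency drops well below 100%, which is precisely what the low stability values there reflect.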

Experimental Application of Merging the Results of MADM Methods
To illustrate the application of the method described in the paper, an example in which the estimates of the alternatives differ only slightly from each other has been chosen. The experts assessed the quality of the course units taught according to six criteria [17]. The descriptions of the criteria, as well as the estimates of the weights and course units, are given in Table 1. The means of the alternative estimates (i.e., course units) are in the range [9.03, 9.34]. For the initial data (Table 1), the calculation has been conducted by applying the SAW (Equation (3)), TOPSIS (Equation (7)), MOORA (Equation (18)), COPRAS (Equation (17)) and PROMETHEE (Equation (10)) methods. Since all criteria are maximized in the problem solved (Table 1), the calculations of the SAW and COPRAS methods coincide [34]; thus, only the SAW method will be mentioned below. The calculations of the PROMETHEE method used the priority function of the V-shape with indifference criterion (Equation (15)).

According to the algorithm described above, the stability of the methods has been determined: SAW 30.7%, TOPSIS 30.9%, MOORA 29.3% and PROMETHEE 26.8%. The stability of all the methods is low due to the similarity of the initial data: even small variations in the initial data change the ranking of the best alternative. Having applied Equation (19), the weights of the methods are calculated: Ω_SAW = 0.2608, Ω_TOPSIS = 0.2625, Ω_MOORA = 0.249 and Ω_PROMETHEE = 0.2277 (Figure 4). The weights of the methods differ only slightly, and the most stable is the TOPSIS method.

In order to merge the results of all the methods, their estimates need to be unified. Thus, the MADM results are normalized to the range [0, 1] (Table 2). Weitendorf's [82] linear normalization is suitable for results of different scales as well as for the negative values of the PROMETHEE method. Equation (20) is applied in summing up the estimates of the normalized methods considering their weights. The numerical results are presented in Figure 5.
A comparison of the obtained results (Figure 5) with the data provided in Table 1 shows changes in the findings. The weights of the criteria had a significant impact on the result. Compared to the ranked results of the individual methods, the merged MADM result matched the result determined by applying the TOPSIS method; the latter method had a higher weight (i.e., importance) in the problem solved.
Table 2 shows that Weitendorf's linear normalization (Equation (21)) has a disadvantage (i.e., zero estimates of alternatives). The weight of the method does not affect the worst-rated alternative, as its result is normalized to the zero value.
When the estimates of the two worst-rated alternatives produced by the methods differ only slightly from each other, a different normalization may change the result. However, no similar problems are encountered in finding the best alternative.
Another calculation method (i.e., a technique for making the values comparable) involves classical normalization (Equation (22)) and pre-arranging the results of the PROMETHEE method using Equation (23). The transformed positive results of the PROMETHEE method are 1.5379, 1, 1.4141, 1.0943, and 1.5795. Table 3 shows the re-estimation of the methods using classical normalization [84]. The results of the MADM methods merged using Equation (20) are shown in Figure 6. The numerical results of the initial data (Table 1) and the merged results following classical (Figure 6) and linear (Figure 5) normalization are shown in Figure 7. Before comparing the obtained information, the results were normalized so that the sum of all estimates of the alternatives equals one. The chart shows that the means of the estimates of the initial data differ only slightly from each other. The merged results demonstrate that linear normalization leads to significant variations in the outcomes, which is clearly expressed in the evaluation of the fourth alternative. Differences in the results obtained following classical normalization are not significantly expressed in the chart.
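The classical normalization step can be sketched as below, assuming Equation (22) divides each estimate by the column sum so that every method's estimates sum to one. The input reuses the transformed positive PROMETHEE results quoted above; the transformation itself (Equation (23)) is not reproduced here.

```python
# Sketch of classical normalization (Equation (22)): each score is
# divided by the total, so the normalized scores sum to 1.
def classical(scores):
    """Divide each score by the total of all scores."""
    total = sum(scores)
    return [s / total for s in scores]

# Transformed positive PROMETHEE results, as quoted in the text.
promethee_transformed = [1.5379, 1.0, 1.4141, 1.0943, 1.5795]
promethee_normalized = classical(promethee_transformed)
```

Unlike Weitendorf's linear normalization, no estimate is mapped to zero, so the method's weight still influences the worst-rated alternative.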
The results expressed in ranks are shown in Figures 8 and 9. These charts indicate the mean ranks of the initial data, the ranks of the merged MADM results (following linear and classical normalization) and the means of the ranks obtained by the individual MADM methods. The best alternative is ranked 1, whereas the worst-rated alternative is ranked 5.
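The score-to-rank conversion used in these comparisons can be sketched as follows; the merged scores below are illustrative, not taken from Figures 8 and 9.

```python
# Sketch: converting evaluation scores into ranks, matching the
# convention above (best alternative -> rank 1, worst -> rank 5).
def to_ranks(scores):
    """Rank alternatives by score; the highest score gets rank 1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

merged_scores = [0.31, 0.18, 0.24, 0.12, 0.15]  # hypothetical merged estimates
print(to_ranks(merged_scores))  # → [1, 3, 2, 5, 4]
```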
The results of the initial data differ from the merged results in the evaluation of the first and second alternatives. The merged results coincided following linear and classical normalization.
Table 3 shows that the sum of the estimates for each alternative is equal to 1, which facilitates comparing them. A comparison of the ranked results provided in Figures 5 and 6 demonstrates that the linear and classical normalization of the MADM results determined all the alternatives equally. The mean values of the results of the MADM methods mainly coincided with the merged results. Since the values of the weights Ω of the MADM methods are similar to each other (Figure 4), they did not have a significant effect on the final result. The average MADM ranks of the first and third alternatives may lead to different interpretations, as their estimates are equal to 1.5 and 2, whereas the combined results unequivocally identify Alt. 1 as the best alternative. Comparing the mean values of the initial estimates with the findings obtained using the MADM methods, the ranking results changed due to the effect of the criterion weights.
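The ambiguity of mean ranks can be illustrated with a small sketch. The rank matrix below is hypothetical (it is not the paper's data) and is chosen so that two alternatives share the same mean rank, which is precisely the situation in which averaging ranks cannot identify a single best alternative.

```python
# Sketch: mean ranks across methods can be ambiguous.
# Rows are methods (illustrative), columns are five alternatives.
ranks_by_method = [
    [1, 3, 2, 5, 4],   # e.g., SAW
    [2, 3, 1, 4, 5],   # e.g., TOPSIS
    [1, 4, 2, 5, 3],   # e.g., MOORA
    [2, 3, 1, 5, 4],   # e.g., PROMETHEE
]
n_methods = len(ranks_by_method)
mean_ranks = [sum(row[i] for row in ranks_by_method) / n_methods
              for i in range(5)]
print(mean_ranks)  # → [1.5, 3.25, 1.5, 4.75, 4.0]
```

Here the first and third alternatives both average 1.5, so the mean rank alone cannot separate them; the weighted merging of normalized estimates avoids this tie.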

Discussion and Conclusions
The paper has considered MADM methods as an integral part of mathematical optimization theory. To illustrate the idea, some of the most widely applied methods, SAW, TOPSIS, MOORA, PROMETHEE and COPRAS, were chosen, and their evaluation criteria have been presented as objective functions, although the methodology of this paper is not limited to these methods. Other MADM methods, such as VIKOR, ELECTRE and Evaluation Based on Distance from Average Solution (EDAS), can be similarly introduced as objective functions. The author's forthcoming papers will explore more extensively the constraints on the variables of the above-listed and new MADM methods and will concentrate on the properties of the objective functions and their limitations.
The MADM methods introduced in this paper are employed for selecting the best alternative evaluated according to the established criteria. The purpose of classical optimization is analogous to that of the MADM methods presented in the paper: finding an optimal solution from several or many possible options. The use of MADM makes sense for comparing sets of alternatives in which no alternative dominates with respect to all evaluation criteria. The data used in the presented MADM methods are not changed while searching for the optimal solution among all available ones. The decision matrix and the vector of criterion weights are static data, and the number of optional alternatives is finite.
Merging the results of the MADM methods in accordance with their importance demonstrated the potential of this approach for evaluation. There is a large number of MADM methods, and the literature therefore does not provide unambiguous recommendations for choosing the most appropriate one; consequently, multiple MADM methods are frequently applied in practice. This paper presented a methodology for merging the results of MADM methods, based on summing up the normalized MADM results into a single value while considering the stability of the methods.
The findings have demonstrated that weights have a significant influence on the result. In order to analyze the influence of the weights of the criteria and of the methods on the obtained result, a problem example was presented in the practical part of the paper in which the average estimates of the alternatives differed little from each other. The criterion weights were found to significantly alter the primary outcomes. The established stability of the applied methods did not differ significantly: ΩSAW = 0.2608, ΩTOPSIS = 0.2625, ΩMOORA = 0.249, ΩPROMETHEE = 0.2277. Nevertheless, the influence of the weights of the methods on the result is noticeable. The ranked result obtained by the TOPSIS method coincided with the ranked composite result, since the TOPSIS method had a greater weight than the rest of the techniques. The average MADM ranks of the first and third alternatives may lead to different interpretations due to their estimates being equal to 1.5 and 2, whereas the combined results have unequivocally identified the best alternative.
Weitendorf's linear normalization is appropriate for rearranging results on different scales as well as for the negative values of the PROMETHEE method. However, linear normalization has a disadvantage: the estimate of the worst alternative is converted into zero, and thus the weight of the method has no effect on the combined result for the worst alternative. The results obtained by applying classical normalization are convenient to compare because the sum of all the results is equal to one. In the case of classical normalization, the negative results of the methods require an additional data transformation; the author of this paper proposes a method of transforming negative numbers. In the task considered, the choice of normalization method had no influence on the final combined result.
The article provides a method for verifying the stability of an MADM method, which ensures the validity of the evaluated result. The technique for validating the stability of MADM methods has a wide range of practical applications in different decision-making problems where the evaluation is performed by employing several MADM methods. The proposed method considers the uncertainty of the data of expert evaluation and therefore decreases the level of subjectivity of the conducted evaluation. Further papers will focus more intensely on analyzing the sensitivity of fuzzy AHP methods by fluctuating the data and on investigating several algorithms of FAHP methods.