Imprecise Shannon’s Entropy and Multi Attribute Decision Making

Abstract: Finding the appropriate weight for each criterion is one of the main points in Multi Attribute Decision Making (MADM) problems. Shannon’s entropy method is one of the various methods for finding weights discussed in the literature. However, in many real life problems, the data for the decision making processes cannot be measured precisely and there may be some other types of data, for instance, interval data and fuzzy data. The goal of this paper is the extension of the Shannon entropy method for imprecise data, especially the interval and fuzzy data cases.

Keywords: multi attribute decision making; entropy; imprecise data

MSC Codes: 90B50, 90C29, 90C70


1. Introduction
Multiple attribute decision making (MADM) refers to making preference decisions (e.g., evaluation, prioritization, and selection) over the available alternatives that are characterized by multiple, usually conflicting, attributes. The structure of the alternative performance matrix is depicted in Table 1, where x_ij is the rating of alternative i with respect to criterion j and w_j is the weight of criterion j (in this paper, we consider the case where the rating of alternative i with respect to criterion j is non-negative). Since each criterion has a different meaning, it cannot be assumed that they all have equal weights, and as a result, finding the appropriate weight for each criterion is one of the main points in MADM. Various methods for finding weights can be found in the literature, and most of them can be categorized into two groups: subjective and objective weights. Subjective weights are determined only according to the preferences of the decision makers. The AHP method [1], the weighted least squares method [2] and the Delphi method [3] belong in this category. Objective methods determine weights by solving mathematical models without any consideration of the decision maker's preferences; examples include the entropy method, multiple objective programming [4,5] and principal element analysis [5]. Since in most real problems the decision maker's expertise and judgment should be taken into account, subjective weighting may be preferable, but when obtaining such reliable subjective weights is difficult, the use of objective weights is useful. One of the objective weighting measures proposed by researchers is the Shannon entropy concept [6]. The entropy concept has been used in various scientific fields. Shannon's entropy has an important role in information theory and is used to refer to a general measure of uncertainty. In transportation models, entropy acts as a measure of the dispersal of trips between origins and destinations [7]. In physics, the word entropy has important physical implications as the amount of "disorder" in a system [7]. The entropy associated with an event is also a measure of the degree of randomness in the event. Entropy has further been considered as a measure of fuzziness [8]. In MADM, the greater the entropy value corresponding to a particular attribute, the smaller that attribute's weight and the less discriminating power the attribute has in the decision making process.
In many real life problems, the data of the decision making processes cannot be measured precisely, and there may be some other types of data, for instance interval data and fuzzy data. In other words, the decision maker would prefer to express his/her point of view in these forms rather than as a real number because of the uncertainty and the lack of certain data, especially when data are known to lie within bounded intervals, or when facing missing data, judgment data, etc. In MADM it is quite probable that we confront such a case, so finding a suitable weight is an important problem. It is logical that when the data are imprecise, the weights should be imprecise too. In this paper we present a method for solving MADM problems consisting of interval data by the entropy method. In this method, the weight obtained for each criterion will be an interval number. We apply the Sengupta approach mentioned in [9] to compare the interval scores we have found.
This paper is organized as follows. In Section 2 the MADM problem with interval data is presented, and the entropy method is extended to interval data. In the same section we also show that if all of the alternatives have deterministic data, then the interval entropy weight reduces to the usual entropy weight. In Section 3, by using α-level sets, we obtain interval weights for the fuzzy MADM problem at different levels of confidence. We also use the data of an empirical example for further explanation and to show the validity of the proposed method. The final section is the conclusion.

2. Method
As noted before, Shannon's entropy is a well-known method for obtaining weights in an MADM problem, especially when obtaining a suitable weight based on the preferences and experiments of the decision maker is not possible. The original procedure of Shannon's entropy can be expressed in the following series of steps.

S1: Normalize the decision matrix. The normalized values are computed as p_ij = x_ij / Σ_{i=1}^m x_ij, j = 1, ..., n. The raw data are normalized to eliminate anomalies caused by different measurement units and scales. This process transforms the different scales and units among the various criteria into common measurable units to allow comparisons across criteria.

S2: Compute the entropy h_j of each attribute as h_j = -h_0 Σ_{i=1}^m p_ij ln p_ij, j = 1, ..., n, where h_0 = 1/ln m is the entropy constant and p_ij ln p_ij is defined as 0 if p_ij = 0.

S3: Set d_j = 1 - h_j, j = 1, ..., n, as the degree of diversification.

S4: Set w_j = d_j / Σ_{k=1}^n d_k, j = 1, ..., n, as the degree of importance of attribute j.

Now suppose that determining the exact value of the elements of the decision matrix is difficult and, as a result, their values are considered as intervals. The structure of the alternative performance matrix in the interval data case is expressed as shown in Table 2, where [x_ij^l, x_ij^u] is the rating of alternative i with respect to criterion j, and [w_j^l, w_j^u] is the weight of criterion j.

Table 2. Structure of the alternative performance matrix when the data are intervals.
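The crisp steps S1–S4 can be summarized in a short computation (a minimal sketch in Python; the function name and the example matrix are illustrative, not part of the original method description):

```python
import math

def entropy_weights(X):
    """Shannon entropy weights for a non-negative m x n decision matrix X.

    S1: p_ij = x_ij / sum_i x_ij
    S2: h_j  = -h0 * sum_i p_ij * ln(p_ij), with h0 = 1 / ln(m)
    S3: d_j  = 1 - h_j  (degree of diversification)
    S4: w_j  = d_j / sum_k d_k  (degree of importance)
    """
    m, n = len(X), len(X[0])
    h0 = 1.0 / math.log(m)  # entropy constant
    d = []
    for j in range(n):
        col_sum = sum(X[i][j] for i in range(m))
        p = [X[i][j] / col_sum for i in range(m)]
        # convention: p * ln(p) = 0 when p = 0
        h_j = -h0 * sum(pi * math.log(pi) for pi in p if pi > 0)
        d.append(1.0 - h_j)
    total = sum(d)
    return [dj / total for dj in d]
```

A criterion on which all alternatives score equally has entropy 1 and therefore weight 0, i.e., it carries no discriminating power.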
When there is interval data, considering the fact that the value of each alternative with respect to each criterion can change within a range and thus exhibit different behaviors, it is logically better that the weights change in different situations as well (note that here the DM knows that the exact/real value of a criterion lies within its data interval and that each point of the interval is equally likely to be the exact value; in other words, a uniform distribution over the interval data is assumed). Therefore, we extend Shannon's entropy to such interval data.

S'1: The normalized values p_ij^l and p_ij^u are calculated as
p_ij^l = x_ij^l / Σ_{i=1}^m x_ij^u,  p_ij^u = x_ij^u / Σ_{i=1}^m x_ij^l,  j = 1, ..., n.

S'2: The lower bound h_j^l and the upper bound h_j^u of the interval entropy can be obtained by
h_j^l = min{ -h_0 Σ_{i=1}^m p_ij^l ln p_ij^l, -h_0 Σ_{i=1}^m p_ij^u ln p_ij^u },
h_j^u = max{ -h_0 Σ_{i=1}^m p_ij^l ln p_ij^l, -h_0 Σ_{i=1}^m p_ij^u ln p_ij^u },
where h_0 = 1/ln m and p_ij ln p_ij is defined as 0 if p_ij = 0.

S'4: Set w_j^l = d_j^l / Σ_{k=1}^n d_k^u and w_j^u = d_j^u / Σ_{k=1}^n d_k^l as the lower and the upper bound of the interval weight of attribute j (the basic entropy weight). This means that if all of the alternatives have deterministic data, the interval entropy weight reduces to the usual entropy weight; as a result, the entropy weight in the case of interval data, as given by the proposed method, is well defined. However, if at least one of the entries is an interval, all weights will be in interval form, even for the criteria with crisp data. The reason is that the final entropy weight depends on the degree of diversification (d_j) of all criteria, based upon the fourth step (S'4) of the entropy method.
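A minimal Python sketch of the interval steps (assuming the bound constructions S'1 and S'2 together with d_j^l = 1 - h_j^u, d_j^u = 1 - h_j^l and the interval weights w_j^l = d_j^l / Σ_k d_k^u, w_j^u = d_j^u / Σ_k d_k^l; the names are illustrative):

```python
import math

def interval_entropy_weights(L, U):
    """Interval entropy weights for interval data [L[i][j], U[i][j]].

    S'1: p_ij^l = x_ij^l / sum_i x_ij^u,  p_ij^u = x_ij^u / sum_i x_ij^l
    S'2: h_j^l, h_j^u = min, max of the two normalized-column entropies
    S'3: d_j^l = 1 - h_j^u,  d_j^u = 1 - h_j^l
    S'4: w_j^l = d_j^l / sum_k d_k^u,  w_j^u = d_j^u / sum_k d_k^l
    """
    m, n = len(L), len(L[0])
    h0 = 1.0 / math.log(m)

    def ent(p):  # entropy of one column; p * ln(p) = 0 when p = 0
        return -h0 * sum(pi * math.log(pi) for pi in p if pi > 0)

    d_lo, d_hi = [], []
    for j in range(n):
        sum_lo = sum(L[i][j] for i in range(m))
        sum_hi = sum(U[i][j] for i in range(m))
        p_lo = [L[i][j] / sum_hi for i in range(m)]
        p_hi = [U[i][j] / sum_lo for i in range(m)]
        h_lo, h_hi = min(ent(p_lo), ent(p_hi)), max(ent(p_lo), ent(p_hi))
        d_lo.append(1.0 - h_hi)
        d_hi.append(1.0 - h_lo)
    s_lo, s_hi = sum(d_lo), sum(d_hi)
    return [(d_lo[j] / s_hi, d_hi[j] / s_lo) for j in range(n)]
```

When every interval degenerates to a point (L equals U), both bounds coincide and the usual entropy weights are recovered.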

So, if a criterion is in interval form, its degree of diversification will be obtained in interval form too. Therefore, the weight of a crisp criterion will alter as the degree of diversification of an interval criterion varies within its interval degree of diversification.

Theorem. The inequality w_j^l ≤ w_j^u holds for every criterion j.

Comparing interval weights
After determining the weights in interval form by the proposed method, we must rank them. In other words, given two interval numbers, we want to know which one is "greater" or "smaller". Various methods for ranking interval data can be found in the literature, each based on a certain theory (for example, see [9][10][11][12][13][14]). In this paper we use Sengupta's approach [9]. An interval number can be represented by its first and last points, or equivalently by its mid-point and half-width; Sengupta's approach compares two intervals based upon the latter representation. Sengupta and Pal introduced the acceptability function to compare two interval numbers D and E as follows:
A(D, E) = (m(E) - m(D)) / (w(D) + w(E)),
where m(D) and m(E) are the mid-points of the interval numbers D and E, and w(D) and w(E) are their half-widths. A(D, E) may be interpreted as the grade of acceptability of the "first interval to be inferior to the second interval". This procedure states that, between two interval numbers with the same mid-point, the less uncertain interval is the better choice for both maximization and minimization purposes.
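The acceptability function can be sketched as follows (a minimal sketch; intervals are passed as (lower, upper) pairs, and the degenerate case of two zero-width intervals, where the denominator vanishes, is not handled):

```python
def acceptability(D, E):
    """Sengupta-Pal acceptability A(D, E) = (m(E) - m(D)) / (w(D) + w(E)).

    m(.) is the mid-point and w(.) the half-width of an interval;
    a positive value indicates that D is inferior to (ranked below) E.
    """
    mid_d, mid_e = (D[0] + D[1]) / 2.0, (E[0] + E[1]) / 2.0
    half_d, half_e = (D[1] - D[0]) / 2.0, (E[1] - E[0]) / 2.0
    return (mid_e - mid_d) / (half_d + half_e)
```

Pairwise values of this function are what the numerical example below uses to place each criterion in the final ranking.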

A numerical example
In this section, the steps of the proposed method are illustrated with a simple example. Suppose that there is an MADM problem with six alternatives and four criteria. The data are presented in Table 3. As can be seen, the first criterion is in crisp form whereas the other criteria are intervals. We want to obtain a weight for each criterion by using the proposed approach. In Table 4 the normalized data are presented. In the first row of Table 5, the interval entropy of each criterion obtained in the second step (S'2) can be seen. The closer the entropy of a criterion is to 1, the less important the criterion. In the second and third rows of Table 5, the degree of diversification and the weight of each criterion are given. As can be seen, the weight corresponding to the first criterion, which is in crisp form, is an interval. As mentioned before, since the other criteria have interval diversification, as their diversification changes within its interval, the weight of the first criterion changes within an interval too. Finally, we applied Sengupta's approach to rank the criteria. The mid-points and half-widths of the interval weights, which are used to obtain the final rank of each criterion via the acceptability function, can be seen in rows 4 and 5 of Table 5. To determine the rank of criterion C1, for example, we use the acceptability function for the interval pairs (C1, C2), (C1, C3) and (C1, C4). The obtained values are -0.9461898, -0.9729971 and 0.35378407, respectively. We see that the rank of C1 is only better than the rank of C4; therefore, C1 is placed at rank 3. The other criteria can be ranked in the same way. For more complex problems, a small program (for example, in Excel) can determine the rank of each criterion. In the last row of Table 5, the rank of each criterion is shown.

3. Fuzzy Shannon's entropy based on α-level sets
In real decision making problems, much of the data happens to be of fuzzy type. The structure of the alternative performance matrix in the fuzzy data case is expressed as shown in Table 6, where x̃_ij is the rating of alternative i with respect to criterion j, and w̃_j is the weight of criterion j.

Table 6. Structure of the alternative performance matrix in the case of fuzzy data.

In the case where all fuzzy data are expressed as triangular and trapezoidal fuzzy numbers, several approaches have been proposed for dealing with them. In this paper, fuzzy data are transformed into interval data by using α-level sets.
Definition (α-level sets). The α-level set of a fuzzy variable x̃_ij is defined by the set of elements that belong to x̃_ij with membership of at least α, i.e., (x_ij)_α = { x_ij ∈ R | μ_{x̃_ij}(x_ij) ≥ α }. The α-level set can also be expressed in the following interval form:
(x_ij)_α = [ min{ x_ij ∈ R | μ_{x̃_ij}(x_ij) ≥ α }, max{ x_ij ∈ R | μ_{x̃_ij}(x_ij) ≥ α } ] = [x_ij^l, x_ij^u]_α.
By setting different levels of confidence, namely 1 - α, the fuzzy data are accordingly transformed into α-level sets, which are all intervals. Now, by using the method proposed in the previous section, we can obtain an interval weight for each α-level set. We denote the entropy weight of the j'th fuzzy criterion at level α by [w_j^l, w_j^u]_α. Then, by using any interval ranking method, we can rank all fuzzy criteria at every α-level. In what follows, we find the weights for the criteria of a real MADM problem.
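For a triangular fuzzy number (a, b, c), the α-level interval follows directly from this definition (a minimal sketch; the function name is illustrative):

```python
def alpha_cut_triangular(a, b, c, alpha):
    """α-level interval [x^l, x^u] of a triangular fuzzy number (a, b, c).

    The membership function rises linearly from a to b and falls linearly
    from b to c, so mu(x) >= alpha holds exactly on the interval below.
    """
    return (a + alpha * (b - a), c - alpha * (c - b))
```

At α = 1 the interval collapses to the center b; at α = 0 it is the full support [a, c].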

Empirical example
Consider Table 7, in which there are seven alternatives and 16 criteria. The data are taken from [15]. The data are triangular fuzzy numbers in the form (a, b, c), where the first, second and third components are the left, center and right sides of the related numbers. We used the proposed method for five α-levels: 0.1, 0.3, 0.5, 0.7 and 0.9. The obtained weights and the corresponding rank of each criterion for the different α-level sets are presented in Table 8. As can be seen in Table 8, the rankings under different α-levels might be quite different. In this situation, the overall ranking cannot be easily observed. In order to generate an overall ranking, choosing a trade-off between precision and confidence is suggested. A higher α means a more precise interval, and a lower α means a higher confidence in the result. A risk-averse assessor or DM might choose a high α because of a strong dislike of uncertainty (fuzziness), while a risk-taking assessor or DM might prefer a low α because of a risk-seeking attitude. In addition, weighted averaging of the interval weights, using α as the weight, is suggested. After obtaining the weighted average, ranks can be obtained by different approaches, for example Sengupta's approach.
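One plausible reading of the suggested α-weighted averaging, sketched in Python (the aggregation rule and all names here are an assumption for illustration, not a construction spelled out in the text):

```python
def alpha_weighted_interval(weights_by_alpha):
    """Aggregate interval weights {alpha: (w_lo, w_hi)} of one criterion
    into a single interval, weighting each α-level by its α value."""
    total = sum(weights_by_alpha)  # sum of the α keys
    lo = sum(a * w[0] for a, w in weights_by_alpha.items()) / total
    hi = sum(a * w[1] for a, w in weights_by_alpha.items()) / total
    return (lo, hi)
```

The resulting intervals, one per criterion, can then be ranked with, for example, the acceptability function.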

S'3: Set the lower and the upper bounds of the interval degree of diversification, d_j^l and d_j^u, as follows:
d_j^l = 1 - h_j^u,  d_j^u = 1 - h_j^l,  j = 1, ..., n.
By using the definition of h_j^l and h_j^u in the second step of the proposed approach, it is clear how the weight of the j'th criterion is obtained from the interval entropy method. Notice that if all of the alternatives have deterministic data, then h_j^l = h_j^u and d_j^l = d_j^u, so the interval weight collapses to the usual entropy weight.

Table 1. Structure of the alternative performance matrix.

Table 3. The data of alternatives.

Table 4. The normalized rates.

Table 5. Entropy, degree of diversification, weight and rank.

Table 7. The data of alternatives (empirical example).

Table 8. The weight and rank for the 16 criteria under different α-level settings.