Article

Modifying Hellwig’s Method for Multi-Criteria Decision-Making with Mahalanobis Distance for Addressing Asymmetrical Relationships

Ewa Roszkowska
Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland
Symmetry 2024, 16(1), 77; https://doi.org/10.3390/sym16010077
Submission received: 16 December 2023 / Revised: 4 January 2024 / Accepted: 4 January 2024 / Published: 6 January 2024
(This article belongs to the Special Issue Symmetric and Asymmetric Data in Solution Models, Part II)

Abstract

Hellwig’s method is a multi-criteria decision-making technique designed to facilitate the ranking of alternatives based on their proximity to the ideal solution. Typically, this approach calculates distances using the Euclidean norm, implicitly assuming that the considered criteria are independent. However, in real-world situations, the assumption of criteria independence is rarely met. This paper proposes an extension of Hellwig’s method that incorporates the Mahalanobis distance. Substituting the Euclidean distance with the Mahalanobis distance has proven effective in handling correlations among criteria, especially in the context of asymmetrical relationships between criteria. We then investigate the impact of the Euclidean and Mahalanobis distance measures on several variants of the Hellwig procedure, analyzing examples based on illustrative data with 10 alternatives and 4 criteria. Additionally, we examine the influence of three normalization formulas on Hellwig’s aggregation procedure. The results indicate that both the distance measure and the normalization formula have some impact on the final rankings. The evaluation and ranking of alternatives using the Euclidean distance measure are influenced by the normalization formula, albeit to a limited extent. In contrast, the Mahalanobis distance-based Hellwig method remains unaffected by the choice of normalization formula. The study concludes that the ranking of alternatives depends strongly on the distance measure employed, whether Euclidean or Mahalanobis. The Mahalanobis distance-based Hellwig method is deemed a valuable tool for decision-makers in real-life situations. It enables the evaluation of alternatives while considering interactions between criteria, providing a more comprehensive perspective for decision-making.

1. Introduction

Multi-criteria decision-making (MCDM) methods are a collection of techniques designed to address complex problems that involve the evaluation and ranking of alternatives based on multiple criteria, which may sometimes conflict with each other [1,2]. These methods are widely used in various fields [3], including business [4], engineering [5], environmental science [6], sustainability [7], and public policy [8], among others. The goal is to provide decision-makers with a systematic and structured approach to making choices when faced with a range of alternatives. Among them, there is a class of techniques based on aggregation formulas incorporating reference solutions, such as TOPSIS (Technique for Ordering Preferences by Similarity to Ideal Solution) [9], Hellwig’s method [10], VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) [11], DARP [12], BWM (the Best-Worst Method) [13], or BIPOLAR [14].
MCDM methods typically involve several steps to determine the overall preference value for each alternative, as follows:
  • Normalization: This step involves transforming performance ratings into a standardized unit scale.
  • Weights determination: This process involves assigning weights to the criteria based on their relative importance in the decision-making process.
  • Distance Measure: This step calculates the distance between the alternatives and reference points, providing a measure of their dissimilarity or similarity.
  • Aggregation Formula: Aggregation involves combining the normalized values, weights, and distance measures to obtain an overall preference value for each alternative.
The paper focuses on Hellwig’s method [10], which is based on measuring the distances from the alternatives to the ideal solution. Two critical aspects of this method are scrutinized: the distance measure and the normalization formula.
The aims of the paper are twofold. Firstly, it introduces an extension of Hellwig’s method, namely the Mahalanobis distance-based Hellwig method (HM). The classical Hellwig method (H) relies on the Euclidean distance, implicitly assuming that the criteria are independent. However, real-life situations may not always align with this assumption, so the technique needs to be adapted to them. The Mahalanobis distance is employed to measure the distance between the ideal solution and the alternatives while taking the dependence among criteria into consideration. While the Euclidean distance presupposes independence among variables, the Mahalanobis distance considers the covariance structure, making it more appropriate for datasets with correlated or asymmetrically distributed variables.
Secondly, we specifically investigate the impact of the distance measure (Euclidean vs. Mahalanobis) and the normalization formula in Hellwig’s measure. Various normalization methods have been proposed in the literature [1,9,15] that can be employed within MCDM. The article by Jahan and Edwards [15] undertakes a comparative analysis of six normalization techniques within multi-criteria decision-making methods. For our comparative analysis, we employed three well-known normalization procedures: vector normalization, linear scale transformation (Max-Min method), and linear scale transformation (Sum method).
Several authors have investigated how alternative normalization procedures can influence the ranking of alternatives obtained through MCDM methods [16,17,18,19,20,21,22,23]. We analyze and compare results derived from examples utilizing different variants of Hellwig’s method, taking into account two distance measures and three normalization formulas. This analysis is conducted using illustrative data comprising 10 alternatives and 4 criteria.
The rest of the paper is organized as follows: In Section 2, we briefly outline the concept of the Mahalanobis distance and its application in multi-criteria analyses. In Section 3, the classical and extended Hellwig methods are presented. In Section 4, five illustrative examples that differ in the degree of dependence between criteria are investigated with respect to the distance measures (Euclidean and Mahalanobis) and normalization formulas (vector normalization, min-max method, sum method). The paper finishes with conclusions.

2. Mahalanobis Distance and Multi-Criteria Analyses

The Mahalanobis distance, first proposed by Mahalanobis in 1936 [24], is a statistical metric particularly applicable in tasks such as classification, clustering, and multi-criteria decision-making. It measures the separation between two points in a multi-dimensional space while accounting for the covariance among the variables. The covariance matrix incorporated in the distance calculation represents the interrelationships and interdependencies among variables. When the covariance matrix is equal to the identity matrix, the Mahalanobis distance simplifies to the Euclidean distance. A more precise description, calculation, and comparison of the Euclidean and Mahalanobis distances can be found in [25]. Studies [26,27,28,29] are devoted to the Mahalanobis distance and its properties in the context of multi-criteria analysis.
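As a quick illustration of this reduction (a minimal sketch, not taken from the cited works; the helper name mahalanobis and the toy vectors below are assumptions of this example), the following Python snippet computes the Mahalanobis distance and shows that it coincides with the Euclidean distance when the covariance matrix is the identity:

```python
import numpy as np

def mahalanobis(x, y, cov):
    # Mahalanobis distance between vectors x and y for a given covariance matrix.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 1.0])

identity = np.eye(3)                  # uncorrelated, unit-variance variables
cov = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.3],
                [0.1, 0.3, 1.0]])     # correlated variables

print(mahalanobis(x, y, identity))    # equals the Euclidean distance: 3.0
print(float(np.linalg.norm(x - y)))   # 3.0
print(mahalanobis(x, y, cov))         # differs once correlations are taken into account
```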
Multi-criteria methods based on the Mahalanobis and Euclidean distances find widespread application in data analysis. The Mahalanobis distance is used in several MCDM approaches, including TOPSIS [26,27,28,29,30,31], TODIM (an acronym in Portuguese for Interactive and Multicriteria Decision Making) [32], and other decision-making problems [33,34,35,36]. By incorporating the correlations among criteria, the Mahalanobis distance allows the asymmetrical relationships among them to be addressed. It aids decision-makers in evaluating alternatives based on their preferences and goals, taking the interaction between criteria into account.

3. Mahalanobis Distance-Based Hellwig Method

3.1. The Hellwig’s Framework—A Short Literature Review

Hellwig’s method [10], originally proposed by Hellwig in 1968, has undergone several modifications to address real-life problems. In his pioneering work [10], Hellwig introduced the concept of a development measure based on a pattern of economic development built from the most favorable values of each criterion. The method determines a ranking of objects described in a multidimensional space by calculating the distances between the pattern of development and the objects. This concept was applied to assess differences and similarities among various countries regarding qualified staff, corresponding to the economic development level of each country.
Hellwig’s method is particularly popular as a linear ordering technique in Polish literature, especially in the field of economic research. It is worth noting that the number of citations has been steadily increasing, particularly due to numerous publications in English. It has also gained recognition among international researchers as a multi-criteria method based on a reference point. According to Google Scholar (Harzing’s Publish or Perish 8 software as of 1 January 2024), the paper [10] has been cited 1479 times (26.41 times per year).
Hellwig’s method has been extended to address different problems with crisp data [10] and has incorporated fuzzy sets [10], intuitionistic fuzzy sets [37,38,39,40], interval-valued fuzzy sets [41], and oriented fuzzy sets [42]. The method has been applied in various practical contexts, including the circular economy [43], the quality of human capital in the EU countries [44], socio-economic region development [45,46,47,48], sustainable development [49,50], quality of life [38,39,42], evaluation of negotiation offers [40,42], analysis of agriculture development [51,52,53,54], the competitive balance of the Italian Football League [55], innovation in EU countries [56], evaluation of theater activity in Poland [57], and selection of locations [58], among others.

3.2. The Hellwig’s Method

Let us assume that we have $m$ alternatives $A_1, A_2, \ldots, A_m$ and $n$ decision criteria $C_1, C_2, \ldots, C_n$, where $x_{ij}$ denotes the value of alternative $A_i$ on criterion $C_j$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$).
Hellwig’s general framework consists of the following steps:
Step 1. Determination of the decision matrix:
$D = [x_{ij}]$,  (1)
where $x_{ij}$ is the value of the $j$-th criterion for the $i$-th alternative ($i = 1, \ldots, m$; $j = 1, \ldots, n$).
Step 2. Determination of the vector of weights:
$w = [w_1, \ldots, w_n]$,  (2)
where $w_j > 0$ ($j = 1, \ldots, n$) is the weight of the criterion $C_j$ and $\sum_{j=1}^{n} w_j = 1$.
In the later analyses, we implemented equal weights. However, it should be noted that various proposals for establishing weights exist in the literature [59,60,61,62,63,64]. Tzeng et al. [65] classify weighting methods as objective when the weights are computed from outcomes and subjective when they depend only on the preferences of decision-makers; a third class combines subjective and objective weighting. Da Silva et al. [62] identified and discussed more than 50 methods, of which 49 are subjective, 7 are objective, and the others are hybrid.
Step 3. Building the ideal solution (pattern of development):
$I = [x_1^+, \ldots, x_n^+]$,  (3)
where
$x_j^+ = \begin{cases} \max_i x_{ij} & \text{for a benefit criterion} \\ \min_i x_{ij} & \text{for a cost criterion} \end{cases}$  (4)
for $j = 1, \ldots, n$.
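A minimal sketch of Formula (4) in Python is given below (an illustration of my own, not code from the paper; the helper name ideal_point and the toy matrix are assumed):

```python
import numpy as np

def ideal_point(X, benefit_mask):
    # Formula (4): column maximum for benefit criteria, column minimum for cost criteria.
    benefit_mask = np.asarray(benefit_mask, dtype=bool)
    return np.where(benefit_mask, X.max(axis=0), X.min(axis=0))

# Toy decision matrix: 3 alternatives (rows) x 3 criteria (columns),
# where the first two criteria are benefit criteria and the third is a cost criterion.
X = np.array([[3.0, 20.0, 5.0],
              [7.0, 10.0, 2.0],
              [5.0, 15.0, 9.0]])
print(ideal_point(X, [True, True, False]))   # -> [ 7. 20.  2.]
```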
Step 4. Determination of the normalized matrix:
$\bar{D} = [\bar{x}_{ij}]$,  (5)
where $\bar{x}_{ij}$ is the normalized value of $x_{ij}$ ($i = 1, \ldots, m$; $j = 1, \ldots, n$).
We present here three well-known and frequently used normalization techniques, which we later apply in the comparison studies [9,19,20]; a short computational sketch follows the list.
  • Vector normalization (N1), which transforms performance ratings into a normalized vector as follows:
$\bar{x}_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}$  (6)
  • Linear scale transformation (Max-Min method, N2), which scales the performance ratings linearly using the minimum and maximum values observed for each criterion:
$\bar{x}_{ij} = \dfrac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}$  (7)
  • Linear scale transformation (Sum method, N3), where performance ratings are transformed linearly using the sum of the values of each criterion:
$\bar{x}_{ij} = \dfrac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}$  (8)
where $x_{ij}$ is the value of the $j$-th criterion for the $i$-th alternative ($i = 1, \ldots, m$; $j = 1, \ldots, n$).
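The three normalization formulas (6)-(8) can be sketched as follows (a minimal Python illustration of my own; the function names and the toy matrix are assumptions, and each formula is applied column-wise, i.e., separately for every criterion):

```python
import numpy as np

def vector_norm(X):    # Formula (6), N1
    return X / np.sqrt((X ** 2).sum(axis=0))

def min_max_norm(X):   # Formula (7), N2 (benefit criteria)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def sum_norm(X):       # Formula (8), N3
    return X / X.sum(axis=0)

# Toy decision matrix: 4 alternatives (rows) x 3 criteria (columns).
X = np.array([[2.0, 10.0, 4.0],
              [5.0, 30.0, 8.0],
              [7.0, 20.0, 6.0],
              [4.0, 15.0, 2.0]])

for normalize in (vector_norm, min_max_norm, sum_norm):
    print(normalize.__name__)
    print(np.round(normalize(X), 3))
```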
Step 5. Building the weighted normalized matrix:
$\tilde{D} = [\tilde{x}_{ij}]$,  (9)
where
$\tilde{x}_{ij} = w_j \bar{x}_{ij}$  (10)
Step 6. Calculating the distance of the $i$-th alternative $A_i$ from the ideal $I$ using the Euclidean or Mahalanobis distance measure:
  • Euclidean distance measure ($d_{E_i}$) [10]:
$d_{E_i}(A_i, I) = E(\tilde{A}_i, \tilde{I}) = \sqrt{\sum_{j=1}^{n} (\tilde{x}_{ij} - \tilde{x}_j^+)^2}$  (11)
where $\tilde{x}_{ij}$ and $\tilde{x}_j^+$ are the weighted normalized values of $x_{ij}$ and $x_j^+$, respectively.
  • Mahalanobis distance measure ($d_{M_i}$) [29,31]:
$d_{M_i}(A_i, I) = M(\bar{A}_i, \bar{I}) = \sqrt{(\bar{A}_i - \bar{I}) \, W C^{-1} W^T (\bar{A}_i - \bar{I})^T}$,  (12)
where $C$ is the variance-covariance matrix of the normalized data matrix $\bar{D}$ and $W = \mathrm{diag}(w_1, \ldots, w_n)$ is the diagonal matrix of the weights $w_1, w_2, \ldots, w_n$ assigned to the criteria.
In practical terms, the choice of distance measure depends on the data’s characteristics and the specifics of the multi-criteria analysis. While the Euclidean distance presupposes independence among variables, the Mahalanobis distance considers the covariance structure, making it more appropriate for datasets with correlated or asymmetrically distributed data. The Mahalanobis distance between the alternative and ideal solution is based on the normalized data and the estimated covariance matrix, which represents the relationships and dependencies between criteria.
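A minimal sketch of Step 6 under equal weights is shown below (my own illustration with assumed helper names, not code from the paper). It applies Formula (11) to the weighted normalized data and Formula (12) with a covariance matrix estimated from the normalized decision matrix; since the paper does not state which covariance estimator is used, the sample covariance from numpy.cov is assumed here:

```python
import numpy as np

def distances_to_ideal(X_norm, weights):
    # Ideal point from column maxima of the normalized matrix (benefit criteria assumed).
    ideal = X_norm.max(axis=0)
    W = np.diag(weights)

    # Formula (11): Euclidean distance on the weighted normalized data.
    d_E = np.linalg.norm(X_norm * weights - ideal * weights, axis=1)

    # Formula (12): Mahalanobis distance with the covariance of the normalized data.
    C_inv = np.linalg.inv(np.cov(X_norm, rowvar=False))   # sample covariance assumed
    diffs = (X_norm - ideal) @ W
    d_M = np.sqrt(np.einsum("ij,jk,ik->i", diffs, C_inv, diffs))
    return d_E, d_M

rng = np.random.default_rng(0)
X = rng.random((10, 4))                        # toy data: 10 alternatives, 4 criteria
X_norm = X / np.sqrt((X ** 2).sum(axis=0))     # vector normalization, Formula (6)
w = np.full(4, 0.25)                           # equal weights, as in the examples
d_E, d_M = distances_to_ideal(X_norm, w)
print(np.round(d_E, 3))
print(np.round(d_M, 3))
```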
Step 7. Calculating Hellwig’s measure $H_i$ or the Mahalanobis distance-based Hellwig measure $HM_i$ for the $i$-th alternative using the following formulas:
  • Classical approach (H measure based on Euclidean distance):
$H_i = 1 - \dfrac{d_{E_i}}{d_0}$  (13)
where $d_0 = \bar{d} + 2S$, $\bar{d} = \frac{1}{m} \sum_{i=1}^{m} d_{E_i}$, and $S = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (d_{E_i} - \bar{d})^2}$.
  • Extended approach (HM measure based on Mahalanobis distance):
$HM_i = 1 - \dfrac{d_{M_i}}{d_0}$,  (14)
where $d_0 = \bar{d} + 2S$, $\bar{d} = \frac{1}{m} \sum_{i=1}^{m} d_{M_i}$, and $S = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (d_{M_i} - \bar{d})^2}$.
Step 8. Ranking the alternatives according to descending $H_i$ or $HM_i$ values.
A higher value of Hellwig’s measure corresponds to a higher ranking position for the respective alternative.
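A short sketch of Formulas (13) and (14) together with the ranking step is given below (Python; the helper names are my own). As a check under these assumptions, feeding it the Euclidean distances reported later for the H1 variant in Example 1 (Table 2) should approximately reproduce the reported d0 of about 0.254 and the corresponding measure values and ranks:

```python
import numpy as np

def hellwig_measure(d):
    # Formulas (13)/(14): 1 - d_i / d0 with d0 = mean(d) + 2 * S,
    # where S is the (population) standard deviation of the distances.
    d = np.asarray(d, dtype=float)
    d0 = d.mean() + 2.0 * d.std()
    return 1.0 - d / d0, d0

def rank_descending(values):
    # Rank 1 goes to the alternative with the highest measure value.
    order = np.argsort(-np.asarray(values))
    ranks = np.empty(len(values), dtype=int)
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks

# Euclidean distances of A1..A10 for the H1 variant, as reported in Table 2 (Example 1).
d_E = np.array([0.211, 0.218, 0.092, 0.194, 0.161, 0.197, 0.165, 0.162, 0.195, 0.217])
H, d0 = hellwig_measure(d_E)
print(round(d0, 3))          # ~0.254
print(np.round(H, 3))        # ~[0.168, 0.141, 0.639, ...]
print(rank_descending(H))    # [ 8 10  1  5  2  7  4  3  6  9]
```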
Wang and Wang [31] showed the following:
Property 1: 
A non-singular linear transformation of the data does not affect the Mahalanobis distance measure [31].
Applying Property 1, and considering normalization Formulas (6)–(8) and Formula (14), we deduce the following:
Property 2: 
The Mahalanobis distance-based Hellwig method (HM) is independent of normalization formulas N1, N2, and N3.
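Property 2 can also be checked numerically. The sketch below (an illustration of my own, not code from the paper) applies the three normalizations N1-N3 to a random decision matrix and shows that the resulting Mahalanobis distances to the ideal, and hence the HM measure built from them, coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((10, 4)) * 10              # toy data: 10 alternatives, 4 benefit criteria
w = np.full(4, 0.25)
W = np.diag(w)

def mahalanobis_to_ideal(X_norm):
    ideal = X_norm.max(axis=0)            # ideal from column maxima (benefit criteria)
    C_inv = np.linalg.inv(np.cov(X_norm, rowvar=False))
    diffs = (X_norm - ideal) @ W
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, C_inv, diffs))

normalizations = {
    "N1 vector":  X / np.sqrt((X ** 2).sum(axis=0)),
    "N2 min-max": (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)),
    "N3 sum":     X / X.sum(axis=0),
}
for name, X_norm in normalizations.items():
    print(name, np.round(mahalanobis_to_ideal(X_norm), 6))
# The three printed vectors are identical (up to floating-point error),
# so the HM ranking does not depend on the normalization chosen.
```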

4. Numerical Examples

This section compares the procedures and results obtained from different variants of Hellwig’s method: the Hellwig method with Euclidean distance based on vector normalization (H1), max-min normalization (H2), and sum normalization (H3), and the Hellwig method with Mahalanobis distance (HM). Note that, by Property 2, the results of the HM method do not depend on the normalization formulas N1, N2, and N3, which gives four distinct variants of Hellwig’s method. The results of these variants were compared (a) among the Euclidean distance-based measures under different normalization formulas and (b) between the Euclidean distance-based measures and the Mahalanobis distance-based measure.
The problem under consideration involves assessing ten alternatives with four benefit criteria. We assumed equal weights for the analyses in order to concentrate only on the distance measure and the normalization formula incorporated in the algorithm. The examples differ in the data and in the correlations between criteria. To validate the HM method and examine the relationships between the criteria, we utilize the Pearson correlation coefficient. Additionally, the correlation between the results obtained from different variants of Hellwig’s method is analyzed using both the Spearman and Pearson coefficients; a short computational sketch of this comparison is given below. The absolute value of the Pearson or Spearman coefficient is interpreted as follows: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1] very strong.
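The concordance between two variants can be computed, for instance, with SciPy, as in the sketch below (my own illustration; for concreteness it uses the H1 and HM measure values reported later for Example 1 in Table 2, and spearmanr compares the induced rank orders while pearsonr compares the raw measure values):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hellwig measure values of A1..A10 for the H1 and HM variants (Table 2, Example 1).
H1 = np.array([0.168, 0.141, 0.639, 0.235, 0.366, 0.225, 0.352, 0.363, 0.234, 0.144])
HM = np.array([0.096, 0.157, 0.565, 0.275, 0.387, 0.180, 0.296, 0.326, 0.272, 0.118])

rho, _ = spearmanr(H1, HM)         # rank concordance of the two orderings
r, _ = pearsonr(H1, HM)            # linear agreement of the measure values
print(round(rho, 3), round(r, 3))  # approximately 0.952 and 0.956, as reported in the text
```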
Example 1. 
(negligible or weak correlation between criteria).
Table 1 displays the data and correlation matrix among the criteria in Example 1. In this case, a negligible or weak correlation is evident between the criteria. The highest Pearson correlation exists between criterion C3 and C4 (0.116), followed by C3 and C2 (0.103). All other Pearson coefficients are below 0.100.
The ideal based on max and min values (see Formula (3)) has the form:
$I^+ = [12, 30, 20, 30]$.
The criteria values are normalized using Formulas (6)–(8), respectively. Following this, the Euclidean or Mahalanobis distances between the alternative and the ideal object are calculated using Formulas (11) or (12), respectively.
Finally, the synthetic measure is derived using Formula (13) or (14). The outcomes of the various Hellwig measures are presented in Table 2.
From Table 2, we can observe that rankings differ for Hellwig’s measures based on the Euclidean distance and various normalization formulas, but these differences are not so evident. Spearman coefficients between Hellwig’s measure based on Euclidean distance are the following: S(H1, H2) = 0.952, S(H1, H3) = 0.939, and S(H2, H3) = 0.915. The Pearson coefficient also confirms a very strong correlation: P(H1, H2) = 0.976, P(H1, H3) = 0.994, and P(H2, H3) = 0.956.
In all variants of Hellwig’s method, the rankings converge for alternatives A3 and A7. For the remaining alternatives, the disparity ranges only from 1 to 2 positions. Additionally, the Spearman coefficients between the HM measure and the other measures are very high, S(H1, HM) = 0.952 and S(H2, HM) = 0.976, or high, S(H3, HM) = 0.891. Similarly, a very strong correlation was observed when comparing these measures using the Pearson coefficient: P(H1, HM) = 0.956, P(H2, HM) = 0.991, and P(H3, HM) = 0.923. The highest concordance with HM is achieved by H2. The results for Hellwig’s measures are illustrated graphically in Figure 1.
We can observe that in this case, disparities between all variants of Hellwig’s methods are marginal.
Example 2. 
(from weak to very strong correlation between criteria).
Table 3 presents the data and correlation matrix for the criteria in Example 2. In this instance, discrepancies in the correlation coefficients range from 0.136 to 0.992. The strongest Pearson correlation is observed between criterion C3 and C2 (0.992), followed by C3 and C1 (0.881), and C1 and C2 (0.708). Meanwhile, the lowest Pearson coefficients are found between C1 and C4 (0.136).
The outcomes of various Hellwig’s measures obtained in Example 2 are presented in Table 4.
Table 4 indicates that the rankings obtained through the Hellwig procedure with the Euclidean distance measure are identical, resulting in S(H1, H2) = S(H1, H3) = S(H2, H3) = 1.000. A very strong correlation between the Hi values is also confirmed by the Pearson coefficient: P(H1, H2) = 0.993, P(H1, H3) = 0.999, and P(H2, H3) = 0.988.
Distinctions arise when comparing Hellwig’s methods based on the Euclidean distance with the method based on the Mahalanobis distance. Nevertheless, in all cases, the rankings converge for alternatives A3, A7, and A8. Discrepancies for the remaining alternatives range from 1 to 6 positions. The Spearman coefficients between the HM measure and the other measures reveal moderate relationships: S(H1, HM) = S(H2, HM) = S(H3, HM) = 0.552. A higher Pearson correlation (moderate or strong) was observed when comparing these measures: P(H1, HM) = 0.709, P(H2, HM) = 0.655, and P(H3, HM) = 0.725. The highest concordance with HM is achieved by H3 (0.725). The graphical representation of Hellwig’s measures is depicted in Figure 2.
Note that Hellwig’s approach, neglecting the interaction between criteria, results in an overestimation of the values for the top-scoring alternatives A3, A7, A8, and A10 when compared with the HM measure. Conversely, it exhibits an opposite deviation for the low-scoring alternatives A1 and A5.
Example 3. 
(from negligible to very strong correlation between criteria).
Table 5 presents the data and correlation matrix for the criteria in Example 3. In this instance, discrepancies in the correlation coefficients range from 0.088 to 0.907. The strongest Pearson correlation is observed between criterion C3 and C4 (0.907), followed by C3 and C2 (0.676), and C4 and C2 (0.575). Meanwhile, the lowest Pearson coefficient is found between C4 and C1 (0.088).
The outcomes of various Hellwig’s measures obtained in Example 3 are presented in Table 6.
Table 6 indicates that the rankings obtained through the Hellwig procedure with the Euclidean distance measure are quite similar, resulting in S(H1, H2) = 0.988, S(H1, H3) = 1, and S(H2, H3) = 0.988. Similarly, the Pearson coefficient shows a very strong correlation: P(H1, H2) = 0.998, P(H1, H3) = 0.9998, and P(H2, H3) = 0.997.
More distinctions arise when comparing Hellwig’s methods based on the Euclidean distance with the method based on the Mahalanobis distance. In all cases, discrepancies for the alternatives range from 1 to 6 positions. The Spearman coefficients between the HM measure and the other measures reveal weak, S(H2, HM) = 0.382, or moderate, S(H1, HM) = 0.442 and S(H3, HM) = 0.442, correlations. A strong Pearson correlation was observed when comparing these measures: P(H1, HM) = 0.755, P(H2, HM) = 0.747, and P(H3, HM) = 0.751. The highest concordance with HM is achieved by H1 (0.755). The graphical representation of Hellwig’s measures is depicted in Figure 3.
Please note that Hellwig’s methods, when utilizing the Euclidean distance measure, lead to an overestimation of the values for the high-scoring alternatives A3, A8, and A10 when compared with the HM measure. Conversely, they exhibit an opposite deviation for the lower-scoring alternatives A1 and A4.
Example 4. 
(strong or very strong correlation between criteria).
Table 7 presents both the data and the correlation matrix for the criteria outlined in Example 4. It is noteworthy that we observe high Pearson correlation coefficients ranging from 0.723 (between C3 and C1 or C4 and C1) to 0.910 (between C4 and C2).
The outcomes of various Hellwig’s measures obtained in Example 4 are presented in Table 8.
Table 8 highlights discrepancies in rankings for Hellwig’s measures based on Euclidean distance and various normalization formulas, though these differences are marginal. Spearman coefficients between Hellwig’s measures using Euclidean distance are as follows: S(H1, H2) = 1, S(H1, H3) = 0.987, and S(H2, H3) = 0.987. Similarly, a very strong correlation is observed for the Pearson coefficient: P(H1, H2) = 0.999, P(H1, H3) = 0.99998, and P(H2, H3) = 0.999.
For alternatives A5, A7, A8, and A10, rankings consistently converge in all cases. Disparities for the remaining alternatives range only from 1 to 2 positions. Moreover, the Spearman coefficients between the HM measure and other classical Hellwig measures are very strong: S(H1, HM) = 0.921, S(H2, HM) = 0.921, and S(H3, HM) = 0.947. Similarly, a high Pearson correlation is observed when comparing these measures: P(H1, HM) = 0.827, P(H2, HM) = 0.829, and P(H3, HM) = 0.827. The highest concordance for HM is achieved with H2 for the Pearson coefficient and H3 for the Spearman coefficient. The graphical representation of the results for Hellwig’s measures is depicted in Figure 4.
It is worth noting that the alternative in the first position, according to the HM measure, has a value of 1. Additionally, Hellwig’s approach, neglecting the interaction between criteria, results in an overestimation of the values for the high-scoring alternatives A2, A3, A5, and A7 when compared with the HM measure. Conversely, the low-scoring alternative A1 is underestimated according to the HM measure.
Example 5. 
(moderate and strong correlation between criteria).
Table 9 presents both the data and the correlation matrix for the criteria outlined in Example 5. The Pearson coefficient varies from 0.656 (between C4 and C1) to 0.747 (between C4 and C3).
The outcomes of various Hellwig’s measures obtained in Example 5 are presented in Table 10.
Table 10 presents the results for Hellwig’s measures based on the Euclidean distance and various normalization formulas. The Spearman coefficients between Hellwig’s measures using the Euclidean distance are identical, S(H1, H2) = S(H1, H3) = S(H2, H3) = 1, which denotes the same rank ordering of alternatives. A very strong correlation is also observed for the Pearson coefficient, P(H1, H2) = 0.9996, P(H1, H3) = 0.99997, and P(H2, H3) = 0.9996, so the differences in the ratings are minimal.
For alternatives A1, A5, and A8, rankings consistently converge in all cases. Disparities for the remaining alternatives range only from 1 to 3 positions. Moreover, the Spearman coefficients between the HM measure and other classical Hellwig measures are very strong: S(H1, HM) = 0.842, S(H2, HM) = 0.842, and S(H3, HM) = 0.842. Similarly, a high Pearson correlation is observed when comparing these measures: P(H1, HM) = 0.821, P(H2, HM) = 0.812, and P(H3, HM) = 0.819. The highest concordance for HM is achieved with H1. The graphical representation of the results for Hellwig’s measures is depicted in Figure 5.
It is worth noting that Hellwig’s methods based on the Euclidean distance, neglecting the interaction between criteria, result in an overestimation of the values for the high-scoring alternatives A2, A3, A5, A7, and A8. Conversely, the low-scoring alternatives A1 and A9 are underestimated when compared to the HM measure.
Table 11 compares the results obtained in the five examples.
The results can be summarized as follows:
Firstly, it should be noted that the normalization formula has an impact on the final ranking when the Euclidean distance is implemented, but this impact is only marginal. It does not occur with the Mahalanobis distance, as the results remain the same regardless of the type of normalization employed.
Secondly, it can be observed that the rankings obtained using the classical Hellwig methods based on the Euclidean distance and the Hellwig method based on the Mahalanobis distance differ when there is a certain dependence within the data. These results are consistent with other findings in the literature [31]. Even in the case of moderate or small relationships between criteria, the ratings obtained by the classical Hellwig methods and those of HM do not coincide. It is also difficult to say which of the normalization formulas, in the case of the Euclidean-based Hellwig method, gives results more consistent with the Mahalanobis distance-based Hellwig method with respect to the Pearson coefficient.
Thirdly, we can observe that Hellwig’s method, neglecting the interaction between criteria, usually results in an overestimation of the values for the high-scoring alternatives. Conversely, the low-scoring alternatives are underestimated when compared with their values in the Mahalanobis distance-based Hellwig method. It should be noted that these results are consistent with findings in the literature, where TOPSIS methods based on the Euclidean and Mahalanobis distances were compared [31].

5. Conclusions

In the paper, we proposed the Mahalanobis distance-based Hellwig method, which incorporates dependencies among criteria. We also investigated the impact of the distance measure (Euclidean and Mahalanobis) and the normalization formula (vector normalization, Min-Max method, Sum method) in several variants of Hellwig’s procedure. We analyzed five illustrative examples that differ in the relationships between criteria.
Summing up, the contributions of the article include the following:
  • Developing a modification of Hellwig’s measure by utilizing the Mahalanobis distance, which considers the correlations between criteria and thus enables us to effectively account for the asymmetrical relationships between them.
  • Investigating the impact of the distance measure and the normalization formula used in variants of Hellwig’s procedure on the evaluation and rank ordering of alternatives.
  • Analyzing the impact of the correlation between criteria on the consistency of results obtained using different variants of Hellwig’s method.
The Mahalanobis distance proves valuable when dealing with asymmetric datasets or datasets featuring correlated variables. Asymmetric datasets often exhibit varying degrees of correlation between variables, and the Mahalanobis distance provides a means to adjust for these correlations. In contrast to the Euclidean distance, which assumes independence among variables, the Mahalanobis distance considers variable relationships by incorporating the covariance matrix.
Consequently, this study shows that the multi-criteria HM method, relying on the Mahalanobis distance, proves effective in addressing correlations between criteria—a critical aspect in the context of asymmetric data. This methodology enables a more accurate reflection of the true data structure, mitigating potential errors associated with assuming criteria independence. At the same time, the Euclidean distance may be less suitable for datasets with asymmetric dependencies between criteria. It neglects information regarding correlation and data structure, potentially resulting in inaccuracies when criteria exhibit strong correlation or asymmetric dependencies.
This work acknowledges certain limitations that will serve as subjects for further research. In the paper, the focus was limited to a few examples that served as illustrations of the challenges and consequences associated with the choice of a variant of Hellwig’s method. Further research could delve into considering different normalization techniques to better understand and potentially mitigate their impact on rankings, especially when utilizing Euclidean distance. Future studies may aim to explore and quantify the extent of criteria interdependence, seeking to establish patterns or criteria characteristics that contribute to the divergence in rankings between Hellwig methods based on Euclidean distance and those based on Mahalanobis distance. Future investigations could focus on the interaction effects between criteria, examining the nuances that lead to the overestimation of high-scoring alternatives and the underestimation of low-scoring ones, particularly in the context of variants of Hellwig’s method. It would be beneficial to extend the study to different datasets to assess the generalizability of the observed patterns and to identify any dataset-specific factors that may influence the results. Consideration of comparisons with alternative methods beyond the TOPSIS approach could provide a broader perspective on the performance of Hellwig’s methods and their variations. By addressing these aspects in future research, a more comprehensive understanding of the observed phenomena and potential strategies for improvement or mitigation can be achieved.

Funding

The contribution was supported by the grant WZ/WI-IIT/2/22 from Bialystok University of Technology and funded by the Ministry of Education and Science.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

H: Hellwig’s method based on Euclidean distance
HM: Hellwig’s method based on Mahalanobis distance
H1: Hellwig’s method based on Euclidean distance with vector normalization
H2: Hellwig’s method based on Euclidean distance with min-max normalization
H3: Hellwig’s method based on Euclidean distance with sum normalization
TODIM: an acronym in Portuguese for Interactive and Multicriteria Decision-Making
MCDM: Multi-criteria decision-making
VIKOR: VlseKriterijumska Optimizacija I Kompromisno Resenje
TOPSIS: Technique for Ordering Preferences by Similarity to Ideal Solution
DARP: Distances to Aspiration Reference Point method

References

  1. Figueira, J.; Ehrgott, M.; Greco, S. Multiple Criteria Decision Analysis: State of the Art Surveys; Springer Science + Business Media: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  2. Roy, B. Multicriteria Methodology for Decision Aiding; Kluwer Academic Publisher: Dordrecht, The Netherlands, 1996. [Google Scholar]
  3. Basílio, M.P.; Pereira, V.; Costa, H.G.; Santos, M.; Ghosh, A. A Systematic Review of the Applications of Multi-Criteria Decision Aid Methods (1977–2022). Electronics 2022, 11, 1720. [Google Scholar] [CrossRef]
  4. Yalcin, A.S.; Kilic, H.S.; Delen, D. The Use of Multi-Criteria Decision-Making Methods in Business Analytics: A Comprehensive Literature Review. Technol. Forecast. Soc. Change 2022, 174, 121193. [Google Scholar] [CrossRef]
  5. Štilić, A.; Puška, A. Integrating Multi-Criteria Decision-Making Methods with Sustainable Engineering: A Comprehensive Review of Current Practices. Eng 2023, 4, 1536–1549. [Google Scholar] [CrossRef]
  6. Cegan, J.C.; Filion, A.M.; Keisler, J.M.; Linkov, I. Trends and Applications of Multi-Criteria Decision Analysis in Environmental Sciences: Literature Review. Environ. Syst. Decis. 2017, 37, 123–133. [Google Scholar] [CrossRef]
  7. Diaz-Balteiro, L.; González-Pachón, J.; Romero, C. Measuring Systems Sustainability with Multi-Criteria Methods: A Critical Review. Eur. J. Oper. Res. 2017, 258, 607–616. [Google Scholar] [CrossRef]
  8. Kaya, İ.; Çolak, M.; Terzi, F. A Comprehensive Review of Fuzzy Multi Criteria Decision Making Methodologies for Energy Policy Making. Energy Strategy Rev. 2019, 24, 207–228. [Google Scholar] [CrossRef]
  9. Hwang, C.-L.; Yoon, K. (Eds.) Methods for Multiple Attribute Decision Making; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981; ISBN 978-3-642-48318-9. [Google Scholar]
  10. Hellwig, Z. Zastosowanie Metody Taksonomicznej Do Typologicznego Podziału Krajów Ze Względu Na Poziom Ich Rozwoju Oraz Zasoby i Strukturę Wykwalifikowanych Kadr [Application of the Taxonomic Method to the Typological Division of Countries According to the Level of Their Development and the Resources and Structure of Qualified Personnel]. Przegląd Statystyczny 1968, 4, 307–326. [Google Scholar]
  11. Opricovic, S.; Tzeng, G.-H. Compromise Solution by MCDM Methods: A Comparative Analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 2004, 156, 445–455. [Google Scholar] [CrossRef]
  12. Roszkowska, E.; Filipowicz-Chomko, M.; Wachowicz, T. Using Individual and Common Reference Points to Measure the Performance of Alternatives in Multiple Criteria Evaluation. Oper. Res. Decis. 2020, 30, 77–96. [Google Scholar] [CrossRef]
  13. Rezaei, J. Best-Worst Multi-Criteria Decision-Making Method. Omega 2015, 53, 49–57. [Google Scholar] [CrossRef]
  14. Konarzewska-Gubała, E. Bipolar: Multiple Criteria Decision Aid Using Bipolar Reference System. Cahier du LAMSADE 1989, 56. [Google Scholar]
  15. Jahan, A.; Edwards, K.L. A State-of-the-Art Survey on the Influence of Normalization Techniques in Ranking: Improving the Materials Selection Process in Engineering Design. Mater. Des. 2014, 65, 335–342. [Google Scholar] [CrossRef]
  16. Çelen, A. Comparative Analysis of Normalization Procedures in TOPSIS Method: With an Application to Turkish Deposit Banking Market. Informatica 2014, 25, 185–208. [Google Scholar] [CrossRef]
  17. Chakraborty, S.; Yeh, C.-H. A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making. In Proceedings of the 6th Conference on 6th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases, Corfu Island, Greece, 16–19 February 2007; Citeseer: State College, PA, USA, 2007; Volume 6, pp. 102–109. [Google Scholar]
  18. Chakraborty, S.; Yeh, C.-H. A Simulation Comparison of Normalization Procedures for TOPSIS. In Proceedings of the 2009 International Conference on Computers and Industrial Engineering (CIE39), Troyes, France, 6–9 July 2009; IEEE, Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 2009; pp. 1815–1820. [Google Scholar]
  19. Milani, A.S.; Shanian, A.; Madoliat, R.; Nemes, J.A. The Effect of Normalization Norms in Multiple Attribute Decision Making Models: A Case Study in Gear Material Selection. Struct. Multidiscip. Optim. 2005, 29, 312–318. [Google Scholar] [CrossRef]
  20. Palczewski, K.; Sałabun, W. Influence of Various Normalization Methods in PROMETHEE II: An Empirical Study on the Selection of the Airport Location. Procedia Comput. Sci. 2019, 159, 2051–2060. [Google Scholar] [CrossRef]
  21. Pavličić, D. Normalization Affects the Results of MADM Methods. Yugosl. J. Oper. Res. 2001, 11, 251–265. [Google Scholar]
  22. Vafaei, N.; Ribeiro, R.A.; Camarinha-Matos, L.M. Normalization Techniques for Multi-Criteria Decision Making: Analytical Hierarchy Process Case Study. In Technological Innovation for Cyber-Physical Systems, Proceedings of the 7th IFIP 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2016, Costa de Caparica, Portugal, 11–13 April 2016; Camarinha-Matos, L.M., Falcão, A.J., Vafaei, N., Najdi, S., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 261–269. [Google Scholar]
  23. Zavadskas, E.K.; Zakarevicius, A.; Antucheviciene, J. Evaluation of Ranking Accuracy in Multi-Criteria Decisions. Informatica 2006, 17, 601–618. [Google Scholar] [CrossRef]
  24. Mahalanobis, P.C. On the Generalised Distance in Statistics. Proc. Natl. Inst. Sci. 1936, 2, 49–55. [Google Scholar]
  25. Ghojogh, B.; Ghodsi, A.; Karray, F.; Crowley, M. Spectral, Probabilistic, and Deep Metric Learning: Tutorial and Survey. arXiv 2022, arXiv:2201.09267. [Google Scholar]
  26. Liu, D.; Qi, X.; Qiang, F.; Li, M.; Zhu, W.; Zhang, L.; Abrar Faiz, M.; Khan, M.I.; Li, T.; Cui, S. A Resilience Evaluation Method for a Combined Regional Agricultural Water and Soil Resource System Based on Weighted Mahalanobis Distance and a Gray-TOPSIS Model. J. Clean. Prod. 2019, 229, 667–679. [Google Scholar] [CrossRef]
  27. Ponce, R.V.; Alcaraz, J.L.G. Evaluation of Technology Using TOPSIS in Presence of Multi-Collinearity in Attributes: Why Use the Mahalanobis Distance? Rev. Fac. Ing. Univ. Antioq. 2013, 31–42. [Google Scholar] [CrossRef]
  28. Antuchevičienė, J.; Zavadskas, E.K.; Zakarevičius, A. Multiple Criteria Construction Management Decisions Considering Relations between Criteria. Technol. Econ. Dev. Econ. 2010, 16, 109–125. [Google Scholar] [CrossRef]
  29. Wang, Z.-X.; Li, D.-D.; Zheng, H.-H. The External Performance Appraisal of China Energy Regulation: An Empirical Study Using a TOPSIS Method Based on Entropy Weight and Mahalanobis Distance. Int. J. Environ. Res. Public Health 2018, 15, 236. [Google Scholar] [CrossRef] [PubMed]
  30. Chang, C.-H.; Lin, J.-J.; Lin, J.-H.; Chiang, M.-C. Domestic Open-End Equity Mutual Fund Performance Evaluation Using Extended TOPSIS Method with Different Distance Approaches. Expert Syst. Appl. 2010, 37, 4642–4649. [Google Scholar] [CrossRef]
  31. Wang, Z.-X.; Wang, Y.-Y. Evaluation of the Provincial Competitiveness of the Chinese High-Tech Industry Using an Improved TOPSIS Method. Expert Syst. Appl. 2014, 41, 2824–2831. [Google Scholar] [CrossRef]
  32. Ozmen, M. Logistics Competitiveness of OECD Countries Using an Improved TODIM Method. Sādhanā 2019, 44, 108. [Google Scholar] [CrossRef]
  33. Wasid, M.; Ali, R. Multi-Criteria Clustering-Based Recommendation Using Mahalanobis Distance. Int. J. Reason. -Based Intell. Syst. 2020, 12, 96. [Google Scholar] [CrossRef]
  34. Dong, H.; Yang, K.; Bai, G. Evaluation of TPGU Using Entropy—Improved TOPSIS—GRA Method in China. PLoS ONE 2022, 17, e0260974. [Google Scholar] [CrossRef]
  35. Xiang, S.; Nie, F.; Zhang, C. Learning a Mahalanobis Distance Metric for Data Clustering and Classification. Pattern Recognit. 2008, 41, 3600–3612. [Google Scholar] [CrossRef]
  36. Ghosh-Dastidar, S.; Adeli, H. Wavelet-Clustering-Neural Network Model for Freeway Incident Detection. Comput. Aided Civ. Infrastruct. Eng. 2003, 18, 325–338. [Google Scholar] [CrossRef]
  37. Jefmański, B. Intuitionistic Fuzzy Synthetic Measure for Ordinal Data. In Classification and Data Analysis, Proceedings of the Conference of the Section on Classification and Data Analysis of the Polish Statistical Association, Szczecin, Poland, 18–20 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 53–72. [Google Scholar]
  38. Jefmański, B.; Roszkowska, E.; Kusterka-Jefmańska, M. Intuitionistic Fuzzy Synthetic Measure on the Basis of Survey Responses and Aggregated Ordinal Data. Entropy 2021, 23, 1636. [Google Scholar] [CrossRef] [PubMed]
  39. Kusterka-Jefmańska, M.; Jefmański, B.; Roszkowska, E. Application of the Intuitionistic Fuzzy Synthetic Measure in the Subjective Quality of Life Measurement Based on Survey Data. In Modern Classification and Data Analysis, Proceedings of the Conference of the Section on Classification and Data Analysis of the Polish Statistical Association, Poznań, Poland, 8–10 September 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 243–261. [Google Scholar]
  40. Roszkowska, E. The Intuitionistic Fuzzy Framework for Evaluation and Rank Ordering the Negotiation Offers. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation, Proceedings of the International Conference on Intelligent and Fuzzy Systems INFUS 2021, Istanbul, Turkey, 24–26 August 2021; Lecture Notes in Networks and Systems; Kahraman, C., Cebi, S., Onar, S., Oztaysi, B., Tolga, A.C., Sari, I.U., Eds.; Springer: Cham, Switzerland, 2021; Volume 308, pp. 58–65. [Google Scholar]
  41. Roszkowska, E.; Jefmański, B. Interval-Valued Intuitionistic Fuzzy Synthetic Measure (I-VIFSM) Based on Hellwig’s Approach in the Analysis of Survey Data. Mathematics 2021, 9, 201. [Google Scholar] [CrossRef]
  42. Roszkowska, E.; Wachowicz, T.; Filipowicz-Chomko, M.; Łyczkowska-Hanćkowiak, A. The Extended Linguistic Hellwig’s Methods Based on Oriented Fuzzy Numbers and Their Application to the Evaluation of Negotiation Offers. Entropy 2022, 24, 1617. [Google Scholar] [CrossRef] [PubMed]
  43. Mazur-Wierzbicka, E. Towards Circular Economy—A Comparative Analysis of the Countries of the European Union. Resources 2021, 10, 49. [Google Scholar] [CrossRef]
  44. Balcerzak, A.P. Multiple-Criteria Evaluation of Quality of Human Capital in the European Union Countries. Econ. Sociol. 2016, 9, 11–26. [Google Scholar] [CrossRef]
  45. Łuczak, A.; Wysocki, F. Rozmyta Wielokryterialna Metoda Hellwiga Porządkowania Liniowego Obiektów [Fuzzy Multi-Criteria Hellwig’s Method of Linear Ordering of Objects]. Pr. Nauk. Akad. Ekon. We Wrocławiu Taksonomia 2007, 14, 330–340. [Google Scholar]
  46. Golejewska, A. A Comparative Analysis of the Socio-Economic Potential of Polish Regions. Stud. Ind. Geogr. Comm. Pol. Geogr. Soc. 2016, 30, 7–22. [Google Scholar] [CrossRef]
  47. Barska, A.; Jędrzejczak-Gas, J.; Wyrwa, J. Poland on the Path towards Sustainable Development—A Multidimensional Comparative Analysis of the Socio-Economic Development of Polish Regions. Sustainability 2022, 14, 10319. [Google Scholar] [CrossRef]
  48. Jędrzejczak-Gas, J.; Barska, A. Assessment of the Economic Development of Polish Regions in the Context of the Implementation of the Concept of Sustainable Development—Taxonomic Analysis. Eur. J. Sustain. Dev. 2019, 8, 222. [Google Scholar] [CrossRef]
  49. Iwacewicz-Orłowska, A.; Sokołowska, D. Ranking of EU Countries in Terms of the Value of Environmental Governance Indicators in 2010 and 2015. Ekon. Sr. Econ. Environ. 2018, 66, 13. [Google Scholar]
  50. Sompolska-Rzechuła, A. Selection of the Optimal Way of Linear Ordering of Objects: Case of Sustainable Development in EU Countries. Stat. Stat. Econ. J. 2021, 101, 24–36. [Google Scholar]
  51. Reiff, M.; Surmanová, K.; Balcerzak, A.P.; Pietrzak, M.B. Multiple Criteria Analysis of European Union Agriculture. J. Int. Stud. 2016, 9, 62–74. [Google Scholar] [CrossRef]
  52. Gostkowski, M.; Koszela, G. Application of the Linear Ordering Methods to Analysis of the Agricultural Market in Poland. Metod. Ilościowe W Badaniach Ekon. 2019, 20, 167–177. [Google Scholar] [CrossRef]
  53. Wysocki, F. Metody Taksonomiczne w Rozpoznawaniu Typów Ekonomicznych Rolnictwa i Obszarów Wiejskich [Taxonomic Methods in Recognizing Economic Types of Agriculture and Rural Areas]; Wydawnictwo Uniwersytetu Przyrodniczego w Poznaniu: Poznan, Poland, 2010; 399p. [Google Scholar]
  54. Krukowski, A.; Nowak, A.; Różańska-Boczula, M. Evaluation of Agriculture Development in the Member States of the European Union in the Years 2007–2015. In Proceedings of the 31st International Business Information Management Association Conference, Milan, Italy, 25–26 August 2018. [Google Scholar]
  55. Di Domizio, M. The Competitive Balance in the Italian Football League: A Taxonomic Approach; Department of Communication, University of Teramo: Teramo, Italy, 2008. [Google Scholar]
  56. Roszko-Wójtowicz, E.; Grzelak, M.M. The Use of Selected Methods of Linear Ordering to Assess the Innovation Performance of the European Union Member States. Econ. Environ. Stud. 2019, 19, 9–30. [Google Scholar]
  57. Gałecka, M.; Smolny, K. Evaluation of Theater Activity Using Hellwig’s Method. Optim. Econ. Stud. 2018, 38–50. [Google Scholar] [CrossRef]
  58. Dmytrów, K. Comparison of Several Linear Ordering Methods for Selection of Locations in Order-Picking by Means of the Simulation Methods. Acta Univ. Lodz. Folia Oecon. 2018, 5, 81–96. [Google Scholar] [CrossRef]
  59. Ahn, B.S.; Park, K.S. Comparing Methods for Multiattribute Decision Making with Ordinal Weights. Comput. Oper. Res. 2008, 35, 1660–1670. [Google Scholar] [CrossRef]
  60. Ayan, B.; Abacıoğlu, S.; Basilio, M.P. A Comprehensive Review of the Novel Weighting Methods for Multi-Criteria Decision-Making. Information 2023, 14, 285. [Google Scholar] [CrossRef]
  61. Choo, E.U.; Schoner, B.; Wedley, W.C. Interpretation of Criteria Weights in Multicriteria Decision Making. Comput. Ind. Eng. 1999, 37, 527–541. [Google Scholar] [CrossRef]
  62. Da Silva, F.F.; Souza, C.L.M.; Silva, F.F.; Costa, H.G.; da Hora, H.R.M.; Erthal, M., Jr. Elicitation of Criteria Weights for Multicriteria Models: Bibliometrics, Typologies, Characteristics and Applications. Braz. J. Oper. Prod. Manag. 2021, 18, 1–28. [Google Scholar] [CrossRef]
  63. Roszkowska, E. Rank Ordering Criteria Weighting Methods—A Comparative Overview. Optim. Econ. Stud. 2013, 5, 14–33. [Google Scholar] [CrossRef]
  64. Zardari, N.H.; Ahmed, K.; Shirazi, S.M.; Yusop, Z.B. Weighting Methods and Their Effects on Multi-Criteria Decision Making Model Outcomes in Water Resources Management; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  65. Tzeng, G.-H.; Chen, T.-Y.; Wang, J.-C. A Weight-Assessing Method with Habitual Domains. Eur. J. Oper. Res. 1998, 110, 342–367. [Google Scholar] [CrossRef]
Figure 1. Graphical representation of the rankings of alternatives obtained by different variants of Hellwig’s method in Example 1.
Figure 2. Graphical representation of the rankings of alternatives obtained by different variants of Hellwig’s method in Example 2.
Figure 3. Graphical representation of the rankings of alternatives obtained by different variants of Hellwig’s method in Example 3.
Figure 4. Graphical representation of the rankings of alternatives obtained by different variants of Hellwig’s method in Example 4.
Figure 5. Graphical representation of the rankings of alternatives obtained by different variants of Hellwig’s method in Example 5.
Table 1. Data and correlation matrix for Example 1.
AlternativeC1C2C3C4 Correlation Matrix
A111876 C1C2C3C4
A2321010 C11.0000.0890.0030.038
A35301530 C20.0891.0000.103−0.003
A4311520 C30.0030.1031.0000.116
A5810208 C40.038−0.0030.1161.000
A6220105
A7104625
A81225102
A955158
A106259
Ideal solution12302030
Legend: absolute value of the Pearson coefficient: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1.0] very strong.
Table 2. The distance values, measure values, and rank-ordering of alternatives obtained by different variants of Hellwig’s method (Example 1).
Alternative | dE (H1) | H1 value | H1 rank | dE (H2) | H2 value | H2 rank | dE (H3) | H3 value | H3 rank | dM | HM value | HM rank
A1 | 0.211 | 0.168 | 8 | 0.408 | 0.139 | 9 | 0.080 | 0.202 | 8 | 2.333 | 0.096 | 10
A2 | 0.218 | 0.141 | 10 | 0.400 | 0.155 | 8 | 0.086 | 0.140 | 10 | 2.177 | 0.157 | 8
A3 | 0.092 | 0.639 | 1 | 0.180 | 0.620 | 1 | 0.034 | 0.663 | 1 | 1.123 | 0.565 | 1
A4 | 0.194 | 0.235 | 5 | 0.345 | 0.270 | 6 | 0.078 | 0.222 | 7 | 1.872 | 0.275 | 5
A5 | 0.161 | 0.366 | 2 | 0.277 | 0.415 | 2 | 0.064 | 0.355 | 3 | 1.584 | 0.387 | 2
A6 | 0.197 | 0.225 | 7 | 0.370 | 0.219 | 7 | 0.075 | 0.252 | 5 | 2.116 | 0.180 | 7
A7 | 0.165 | 0.352 | 4 | 0.330 | 0.303 | 4 | 0.065 | 0.349 | 4 | 1.819 | 0.296 | 4
A8 | 0.162 | 0.363 | 3 | 0.304 | 0.358 | 3 | 0.062 | 0.380 | 2 | 1.741 | 0.326 | 3
A9 | 0.195 | 0.234 | 6 | 0.342 | 0.276 | 5 | 0.077 | 0.226 | 6 | 1.880 | 0.272 | 6
A10 | 0.217 | 0.144 | 9 | 0.418 | 0.117 | 10 | 0.085 | 0.148 | 9 | 2.277 | 0.118 | 9
d0 | 0.254 | | | 0.473 | | | 0.100 | | | 2.582 | |
Table 3. Data and correlation matrix for Example 2.
AlternativeC1C2C3C4 Correlation Matrix
A11435 C1C2C3C4
A24101210 C11.0000.7080.8810.136
A35201333 C20.7081.0000.9220.350
A4312920 C30.8810.9221.0000.200
A52228 C40.1360.3500.2001.000
A62865
A710161625
A81220202
A9312910
A10624189
Ideal12202025
Legend: absolute value of the Pearson coefficient: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1.0] very strong.
Table 4. The distance values, measure values, and rank-ordering of alternatives obtained by different variants of Hellwig’s measure (Example 2).
Alternative | dE (H1) | H1 value | H1 rank | dE (H2) | H2 value | H2 rank | dE (H3) | H3 value | H3 rank | dM | HM value | HM rank
A1 | 0.255 | 0.117 | 10 | 0.470 | 0.113 | 10 | 0.097 | 0.119 | 10 | 1.959 | 0.218 | 8
A2 | 0.182 | 0.370 | 6 | 0.324 | 0.388 | 6 | 0.070 | 0.365 | 6 | 2.364 | 0.056 | 10
A3 | 0.106 | 0.632 | 2 | 0.192 | 0.638 | 2 | 0.041 | 0.630 | 2 | 1.141 | 0.544 | 2
A4 | 0.168 | 0.420 | 5 | 0.308 | 0.419 | 5 | 0.064 | 0.421 | 5 | 1.510 | 0.397 | 3
A5 | 0.248 | 0.143 | 9 | 0.466 | 0.121 | 9 | 0.093 | 0.151 | 9 | 1.800 | 0.281 | 6
A6 | 0.231 | 0.201 | 8 | 0.417 | 0.214 | 8 | 0.088 | 0.198 | 8 | 1.891 | 0.245 | 7
A7 | 0.070 | 0.758 | 1 | 0.133 | 0.750 | 1 | 0.026 | 0.762 | 1 | 0.855 | 0.659 | 1
A8 | 0.156 | 0.459 | 4 | 0.254 | 0.521 | 4 | 0.062 | 0.441 | 4 | 1.594 | 0.363 | 4
A9 | 0.192 | 0.334 | 7 | 0.344 | 0.351 | 7 | 0.074 | 0.329 | 7 | 1.712 | 0.316 | 5
A10 | 0.145 | 0.499 | 3 | 0.238 | 0.550 | 3 | 0.057 | 0.483 | 3 | 1.972 | 0.212 | 9
d0 | 0.289 | | | 0.530 | | | 0.110 | | | 2.504 | |
Table 5. Data and correlation matrix for Example 3.
AlternativeC1C2C3C4 Correlation Matrix
A11225 C1C2C3C4
A2461010 C11.0000.5010.3280.088
A35232433 C20.5011.0000.6760.575
A431610 C30.3280.6761.0000.907
A521048 C40.0880.5750.9071.000
A64786
A71061215
A8122086
A936710
A1068106
Ideal12232433
Legend: absolute value of the Pearson coefficient: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1.0] very strong.
Table 6. The distance values, measure values, and rank-ordering of alternatives obtained by different variants of Hellwig’s measure (Example 3).
Alternative | dE (H1) | H1 value | H1 rank | dE (H2) | H2 value | H2 rank | dE (H3) | H3 value | H3 rank | dM | HM value | HM rank
A1 | 0.310 | 0.092 | 10 | 0.494 | 0.084 | 10 | 0.120 | 0.095 | 10 | 2.223 | 0.135 | 8
A2 | 0.233 | 0.318 | 5 | 0.371 | 0.312 | 5 | 0.090 | 0.316 | 5 | 2.001 | 0.221 | 6
A3 | 0.092 | 0.730 | 1 | 0.159 | 0.705 | 1 | 0.035 | 0.735 | 1 | 1.360 | 0.471 | 2
A4 | 0.272 | 0.204 | 9 | 0.434 | 0.196 | 9 | 0.105 | 0.203 | 9 | 1.848 | 0.281 | 3
A5 | 0.263 | 0.231 | 8 | 0.418 | 0.225 | 8 | 0.101 | 0.237 | 8 | 2.146 | 0.165 | 7
A6 | 0.251 | 0.266 | 7 | 0.397 | 0.265 | 6 | 0.097 | 0.266 | 7 | 2.239 | 0.129 | 9
A7 | 0.185 | 0.460 | 2 | 0.289 | 0.464 | 2 | 0.072 | 0.455 | 2 | 1.320 | 0.486 | 1
A8 | 0.199 | 0.419 | 3 | 0.304 | 0.437 | 3 | 0.076 | 0.421 | 3 | 1.900 | 0.261 | 5
A9 | 0.250 | 0.269 | 6 | 0.398 | 0.262 | 7 | 0.096 | 0.271 | 6 | 1.857 | 0.278 | 4
A10 | 0.231 | 0.325 | 4 | 0.362 | 0.329 | 4 | 0.089 | 0.323 | 4 | 2.282 | 0.112 | 10
d0 | 0.342 | | | 0.540 | | | 0.132 | | | 2.570 | |
Table 7. Data and correlation matrix for Example 4.
AlternativeC1C2C3C4 Correlation Matrix
A12436 C1C2C3C4
A2510910 C11.0000.7300.7230.723
A37899 C20.7301.0000.8010.910
A436510 C30.7230.8011.0000.748
A56101013 C40.7230.9100.7481.000
A63646
A7612614
A88121016
A93646
A106537
Ideal8121016
Legend: absolute value of the Pearson coefficient: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1.0] very strong.
Table 8. The distance values, measure values, and rank-ordering of alternatives obtained by different variants of Hellwig’s measure (Example 4).
Alternative | dE (H1) | H1 value | H1 rank | dE (H2) | H2 value | H2 rank | dE (H3) | H3 value | H3 rank | dM | HM value | HM rank
A1 | 0.162 | 0.157 | 10 | 0.500 | 0.162 | 10 | 0.055 | 0.157 | 10 | 1.552 | 0.289 | 8
A2 | 0.068 | 0.645 | 4 | 0.208 | 0.651 | 4 | 0.023 | 0.647 | 5 | 1.446 | 0.338 | 6
A3 | 0.068 | 0.644 | 5 | 0.222 | 0.628 | 5 | 0.023 | 0.648 | 4 | 1.324 | 0.394 | 4
A4 | 0.119 | 0.378 | 6 | 0.365 | 0.389 | 6 | 0.041 | 0.378 | 6 | 1.326 | 0.392 | 5
A5 | 0.042 | 0.780 | 2 | 0.128 | 0.785 | 2 | 0.014 | 0.781 | 2 | 0.810 | 0.629 | 2
A6 | 0.140 | 0.273 | 8 | 0.432 | 0.275 | 8 | 0.047 | 0.273 | 8 | 1.573 | 0.280 | 9
A7 | 0.057 | 0.703 | 3 | 0.173 | 0.710 | 3 | 0.020 | 0.700 | 3 | 1.312 | 0.399 | 3
A8 | 0.000 | 1.000 | 1 | 0.000 | 1.000 | 1 | 0.000 | 1.000 | 1 | 0.000 | 1.000 | 1
A9 | 0.140 | 0.273 | 8 | 0.432 | 0.275 | 8 | 0.047 | 0.273 | 8 | 1.573 | 0.280 | 9
A10 | 0.129 | 0.331 | 7 | 0.410 | 0.313 | 7 | 0.044 | 0.330 | 7 | 1.544 | 0.293 | 7
d0 | 0.192 | | | 0.597 | | | 0.065 | | | 2.183 | |
Table 9. Data and correlation matrix for Example 5.
AlternativeC1C2C3C4 Correlation Matrix
A12436 C1C2C3C4
A2510910 C11.0000.7200.6560.670
A37899 C20.7201.0000.7110.731
A436510 C30.6560.7111.0000.747
A5691013 C40.6700.7310.7471.000
A64645
A761269
A8812816
A93636
A106537
Ideal8121016
Legend: absolute value of the Pearson coefficient: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1.0] very strong.
Table 10. The distance values, measure values, and rank-ordering of alternatives obtained by different variants of Hellwig’s measure (Example 5).
Alternative | dE (H1) | H1 value | H1 rank | dE (H2) | H2 value | H2 rank | dE (H3) | H3 value | H3 rank | dM | HM value | HM rank
A1 | 0.166 | 0.135 | 10 | 0.489 | 0.134 | 10 | 0.056 | 0.136 | 10 | 1.657 | 0.184 | 10
A2 | 0.070 | 0.636 | 3 | 0.198 | 0.648 | 3 | 0.024 | 0.638 | 3 | 1.303 | 0.358 | 4
A3 | 0.072 | 0.628 | 4 | 0.210 | 0.629 | 4 | 0.024 | 0.631 | 4 | 1.451 | 0.285 | 5
A4 | 0.122 | 0.366 | 6 | 0.359 | 0.364 | 6 | 0.041 | 0.366 | 6 | 1.301 | 0.359 | 3
A5 | 0.048 | 0.750 | 2 | 0.143 | 0.747 | 2 | 0.016 | 0.752 | 2 | 0.889 | 0.562 | 2
A6 | 0.142 | 0.261 | 8 | 0.414 | 0.267 | 8 | 0.048 | 0.262 | 8 | 1.628 | 0.198 | 9
A7 | 0.081 | 0.581 | 5 | 0.229 | 0.594 | 5 | 0.027 | 0.580 | 5 | 1.560 | 0.231 | 7
A8 | 0.024 | 0.875 | 1 | 0.071 | 0.873 | 1 | 0.008 | 0.872 | 1 | 0.588 | 0.710 | 1
A9 | 0.150 | 0.217 | 9 | 0.439 | 0.223 | 9 | 0.051 | 0.217 | 9 | 1.552 | 0.235 | 6
A10 | 0.134 | 0.303 | 7 | 0.399 | 0.294 | 7 | 0.045 | 0.302 | 7 | 1.623 | 0.200 | 8
d0 | 0.192 | | | 0.565 | | | 0.065 | | | 2.030 | |
Table 11. Comparison of the results obtained in the examples.
Example | Correlation between criteria | Relationships between Hi measures | Relationships between Hi and HM measures
Example 1 | Negligible or weak | Spearman: very strong; Pearson: very strong | Spearman: strong or very strong; Pearson: very strong
Example 2 | From weak to very strong | Spearman: very strong; Pearson: very strong | Spearman: moderate; Pearson: moderate or strong
Example 3 | From negligible to very strong | Spearman: very strong; Pearson: very strong | Spearman: weak or moderate; Pearson: strong
Example 4 | Strong and very strong | Spearman: very strong; Pearson: very strong | Spearman: very strong; Pearson: strong
Example 5 | Moderate and strong | Spearman: very strong; Pearson: very strong | Spearman: strong; Pearson: strong
