1. Introduction
Multi-criteria decision-making (MCDM) methods are a collection of techniques designed to address complex problems that involve the evaluation and ranking of alternatives based on multiple criteria, which may sometimes conflict with each other [
1,
2]. These methods are widely used in various fields [
3], including business [
4], engineering [
5], environmental science [
6], sustainability [
7], and public policy [
8], among others. The goal is to provide decision-makers with a systematic and structured approach to making choices when faced with a range of alternatives. Among them is a class of techniques based on aggregation formulas incorporating reference solutions, such as TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [
9], Hellwig’s method [
10], VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) [
11], DAPR [
12], BWM (the Best-Worst Method) [
13], or BIPOLAR [
14].
MCDM methods typically involve several steps to determine the overall preference value for each alternative, as follows:
Normalization: transforming the performance ratings onto a standardized unit scale.
Weights determination: assigning weights to the criteria according to their relative importance in the decision-making process.
Distance measure: calculating the distance between each alternative and the reference points, providing a measure of their dissimilarity or similarity.
Aggregation formula: combining the normalized values, weights, and distance measures to obtain an overall preference value for each alternative.
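The four steps above can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical decision matrix, illustrative weights, and min-max normalization; none of the numbers comes from the paper:

```python
import numpy as np

# Hypothetical decision matrix: rows are alternatives, columns are benefit criteria.
X = np.array([[7.0, 3.0, 5.0],
              [4.0, 6.0, 8.0],
              [9.0, 2.0, 6.0]])
w = np.array([0.5, 0.3, 0.2])  # criteria weights summing to 1

# Step 1: normalization (min-max transformation for benefit criteria).
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Step 2: apply the criteria weights to the normalized ratings.
V = Z * w

# Step 3: distance of each alternative to the ideal point (column-wise maxima).
ideal = V.max(axis=0)
d = np.sqrt(((V - ideal) ** 2).sum(axis=1))

# Step 4: aggregation -- a smaller distance means a better alternative.
ranking = np.argsort(d)  # indices of the alternatives, best first
```

Here the reference point is the ideal solution built from column-wise maxima; reference-based methods of this class differ mainly in steps 3 and 4.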
The paper focuses on Hellwig’s method [
10], which is based on measuring the distance from each alternative to the ideal solution. Two critical aspects of this method are scrutinized: the distance measure and the normalization formula.
The aims of the paper are twofold. Firstly, it introduces an extension of Hellwig’s method, namely the Mahalanobis distance-based Hellwig method (HM). The classical Hellwig method (H) relies on Euclidean distance, implicitly assuming that the criteria are independent. However, real-life situations do not always align with this assumption, so the technique must be adapted accordingly. The Mahalanobis distance is employed to measure the distance between the alternatives and the ideal solution while taking the dependence among criteria into consideration. Whereas the Euclidean distance presupposes independence among variables, the Mahalanobis distance accounts for the covariance structure, making it more appropriate for datasets with correlated or asymmetrically distributed variables.
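The difference between the two distances can be illustrated as follows; the data matrix `X` is a hypothetical example with two strongly correlated benefit criteria, not one of the paper’s datasets:

```python
import numpy as np

# Two strongly correlated criteria (column 2 is roughly twice column 1).
X = np.array([[1.0, 2.1],
              [2.0, 3.9],
              [3.0, 6.2],
              [4.0, 7.8],
              [5.0, 10.1]])
ideal = X.max(axis=0)  # ideal point for benefit criteria

# Euclidean distance ignores the correlation between the criteria.
diff = X - ideal
d_euclid = np.sqrt((diff ** 2).sum(axis=1))

# Mahalanobis distance rescales by the inverse covariance matrix,
# so strongly correlated criteria are not double-counted.
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d_mahal = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
```

The last alternative coincides with the ideal point, so both distances are zero for it; the two measures differ for the remaining alternatives because the covariance structure reweights the criteria.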
Secondly, we specifically investigate the impact of the distance measure (Euclidean vs. Mahalanobis) and the normalization formula in Hellwig’s measure. Various normalization methods have been proposed in the literature [
1,
9,
15] that can be employed within MCDM. The article by Jahan and Edwards [
15] undertakes a comparative analysis of six normalization techniques within multi-criteria decision-making methods. For our comparative analysis, we employed three well-known normalization procedures: vector normalization, linear scale transformation (Max-Min method), and linear scale transformation (Sum method).
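For concreteness, the three procedures can be written down for benefit criteria as column-wise operations on a decision matrix (a sketch on a small hypothetical matrix `X`, following the standard textbook definitions):

```python
import numpy as np

X = np.array([[3.0, 10.0],
              [6.0, 20.0],
              [9.0, 30.0]])  # hypothetical ratings, benefit criteria

# Vector normalization: divide each column by its Euclidean norm.
N1 = X / np.sqrt((X ** 2).sum(axis=0))

# Linear scale transformation, max-min method: map each column onto [0, 1].
N2 = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Linear scale transformation, sum method: divide each column by its sum.
N3 = X / X.sum(axis=0)
```

Each formula has a mirrored form for cost criteria; only the benefit-criterion case is shown here because the examples considered later use benefit criteria only.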
Several authors have investigated how alternative normalization procedures can influence the ranking of alternatives obtained through MCDM methods [
16,
17,
18,
19,
20,
21,
22,
23]. We analyze and compare results derived from examples utilizing different variants of Hellwig’s method, taking into account two distance measures and three normalization formulas. This analysis is conducted using illustrative data comprising 10 alternatives and 4 criteria.
The rest of the paper is organized as follows: In
Section 2, we briefly outline the concept of Mahalanobis distance and its application in multi-criteria analyses. In
Section 3, the classical and extended Hellwig methods are presented. In
Section 4, five illustrative examples are investigated concerning the distance measures (Euclidean and Mahalanobis) and normalization formulas (vector normalization, min-max method, sum method); the examples differ in the degree of dependence between the criteria. The paper finishes with a conclusion.
4. Numerical Examples
This section compares the procedures and results obtained from the different variants of Hellwig’s method: the Hellwig method with Euclidean distance based on vector normalization (H1), max-min normalization (H2), and sum normalization (H3), and the Hellwig method with Mahalanobis distance (HM). Note that, by Property 2, the results of the HM method do not depend on the normalization formulas N1, N2, and N3. This gives four distinct variants of Hellwig’s method. The results of the variants were compared (a) among the Euclidean-based measures under the different normalization formulas and (b) between the Euclidean-based measures and the Mahalanobis-based measure.
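For reference, the aggregation step shared by all variants can be sketched as follows. We assume the classical form of Hellwig’s synthetic measure, s_i = 1 − d_i/d_0 with d_0 equal to the mean of the distances to the ideal plus twice their standard deviation; the distance vector below is hypothetical, and only the way the distances are computed (Euclidean after N1–N3, or Mahalanobis) distinguishes the variants:

```python
import numpy as np

def hellwig_measure(d):
    """Hellwig synthetic measure s_i = 1 - d_i / d_0, with d_0 = mean(d) + 2 * std(d)."""
    d = np.asarray(d, dtype=float)
    d0 = d.mean() + 2.0 * d.std()
    return 1.0 - d / d0

# Hypothetical distances of four alternatives to the ideal solution.
d = np.array([0.2, 0.5, 0.8, 0.3])
s = hellwig_measure(d)
ranking = np.argsort(-s)  # best (largest measure) first
```

A larger synthetic measure indicates an alternative closer to the ideal, so the ranking is obtained by sorting the measures in descending order.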
The problem under consideration involves assessing ten alternatives against four benefit criteria. We assumed equal weights in all analyses in order to concentrate solely on the distance measure and the normalization formula incorporated in the algorithm. The examples differ in the data and in the correlations between criteria. To validate the HM method and examine the relationships between the criteria, we use the Pearson correlation coefficient. Additionally, the correlation between the results obtained from the different variants of Hellwig’s method is analyzed using both the Spearman and Pearson coefficients. The absolute value of the Pearson or Spearman coefficient is interpreted as follows: [0, 0.1) negligible; [0.1, 0.4) weak; [0.4, 0.7) moderate; [0.7, 0.9) strong; [0.9, 1] very strong.
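This verbal scale is easy to encode; the helper below and the two sample vectors are purely illustrative and do not correspond to the paper’s data:

```python
import numpy as np

def correlation_strength(r):
    """Map |r| onto the verbal scale used in this section."""
    a = abs(r)
    if a < 0.1:
        return "negligible"
    if a < 0.4:
        return "weak"
    if a < 0.7:
        return "moderate"
    if a < 0.9:
        return "strong"
    return "very strong"

# Pearson coefficient between two hypothetical synthetic-measure vectors.
x = np.array([0.10, 0.35, 0.80, 0.55])
y = np.array([0.12, 0.30, 0.85, 0.50])
r = np.corrcoef(x, y)[0, 1]
```

`np.corrcoef` returns the full correlation matrix, so the off-diagonal entry `[0, 1]` is the Pearson coefficient of the two vectors.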
Example 1. (negligible or weak correlation between criteria).
Table 1 displays the data and correlation matrix among the criteria in Example 1. In this case, a negligible or weak correlation is evident between the criteria. The highest Pearson correlation exists between criterion C3 and C4 (0.116), followed by C3 and C2 (0.103). All other Pearson coefficients are below 0.100.
The ideal based on max and min values (see Formula (3)) has the form:
The criteria values are normalized using Formulas (6)–(8), respectively. Following this, the Euclidean or Mahalanobis distances between the alternative and the ideal object are calculated using Formulas (11) or (12), respectively.
Finally, the synthetic measure is derived using Formula (13) or (14). The outcomes of various Hellwig’s measures are presented in
Table 2.
From
Table 2, we can observe that the rankings differ for Hellwig’s measures based on the Euclidean distance and the various normalization formulas, although these differences are minor. The Spearman coefficients between Hellwig’s measures based on Euclidean distance are as follows: S(H1, H2) = 0.952, S(H1, H3) = 0.939, and S(H2, H3) = 0.915. The Pearson coefficient also confirms a very strong correlation: P(H1, H2) = 0.976, P(H1, H3) = 0.994, and P(H2, H3) = 0.956.
In all variants of Hellwig’s method, the rankings converge for alternatives A3 and A7. For the remaining alternatives, the disparity ranges only from 1 to 2 positions. Additionally, the Spearman coefficients between the HM measure and the other measures are very strong (S(H1, HM) = 0.952 and S(H2, HM) = 0.976) or strong (S(H3, HM) = 0.891). Similarly, a very strong correlation was observed when comparing these measures using the Pearson coefficient: P(H1, HM) = 0.956, P(H2, HM) = 0.991, and P(H3, HM) = 0.923. The highest concordance for HM is achieved with H2. The results of Hellwig’s measures are illustrated graphically in
Figure 1.
We can observe that in this case, disparities between all variants of Hellwig’s methods are marginal.
Example 2. (from weak to very strong correlation between criteria).
Table 3 presents the data and correlation matrix for the criteria in Example 2. In this instance, discrepancies in the correlation coefficients range from 0.136 to 0.992. The strongest Pearson correlation is observed between criterion C3 and C2 (0.992), followed by C3 and C1 (0.881), and C1 and C2 (0.708). Meanwhile, the lowest Pearson coefficients are found between C1 and C4 (0.136).
The outcomes of various Hellwig’s measures obtained in Example 2 are presented in
Table 4.
Table 4 indicates that the rankings obtained through the Hellwig procedure with the Euclidean distance measure are identical, resulting in S(H1, H2) = S(H1, H3) = S(H2, H3) = 1.000. We also observed a very strong correlation between the Hi measures according to the Pearson coefficient: P(H1, H2) = 0.993, P(H1, H3) = 0.999, and P(H2, H3) = 0.988.
Distinctions arise when comparing Hellwig’s methods based on Euclidean distance with those based on Mahalanobis distance. Nevertheless, in all cases, the rankings converge for alternatives A3, A7, and A8. Discrepancies for the remaining alternatives range from 1 to 6 positions. The Spearman coefficients between the HM measure and the other measures reveal moderate relationships: S(H1, HM) = S(H2, HM) = S(H3, HM) = 0.552. A higher (moderate or strong) Pearson correlation was observed when comparing these measures: P(H1, HM) = 0.709, P(H2, HM) = 0.655, and P(H3, HM) = 0.725. The highest concordance for HM is achieved with H3 (0.725). The graphical representation of Hellwig’s measures is depicted in
Figure 2.
Note that Hellwig’s approach, by neglecting the interaction between criteria, overestimates the values of the top-scoring alternatives A3, A7, A8, and A10 when compared with the HM measure. Conversely, it exhibits the opposite deviation for the low-scoring alternatives A1 and A5.
Example 3. (from negligible to very strong correlation between criteria).
Table 5 presents the data and correlation matrix for the criteria in Example 3. In this instance, discrepancies in the correlation coefficients range from 0.088 to 0.907. The strongest Pearson correlation is observed between criterion C3 and C4 (0.907), followed by C3 and C2 (0.676), and C4 and C2 (0.575). Meanwhile, the lowest Pearson coefficient is found between C4 and C1 (0.088).
The outcomes of various Hellwig’s measures obtained in Example 3 are presented in
Table 6.
Table 6 indicates that the rankings obtained through the Hellwig procedure with the Euclidean distance measure are quite similar, resulting in S(H1, H2) = 0.988, S(H1, H3) = 1, and S(H2, H3) = 0.988. Similarly, the Pearson coefficient shows a very strong correlation: P(H1, H2) = 0.998, P(H1, H3) = 0.9998, and P(H2, H3) = 0.997.
More distinctions arise when comparing Hellwig’s methods based on Euclidean distance with those based on Mahalanobis distance. In all cases, discrepancies for the alternatives range from 1 to 6 positions. The Spearman coefficients between the HM measure and the other measures reveal weak (S(H2, HM) = 0.382) or moderate (S(H1, HM) = 0.442 and S(H3, HM) = 0.442) correlations. A strong Pearson correlation was observed when comparing these measures: P(H1, HM) = 0.755, P(H2, HM) = 0.747, and P(H3, HM) = 0.751. The highest concordance for HM is achieved with H1 (0.755). The graphical representation of Hellwig’s measures is depicted in
Figure 3.
Please note that Hellwig’s methods utilizing the Euclidean distance measure lead to an overestimation of the values of the high-scoring alternatives A3, A8, and A10 when compared with the HM measure. Conversely, they exhibit the opposite deviation for the lower-scoring alternatives A1 and A4.
Example 4. (strong or very strong correlation between criteria).
Table 7 presents both the data and the correlation matrix for the criteria outlined in Example 4. It is noteworthy that we observe high Pearson correlation coefficients ranging from 0.723 (between C3 and C1 or C4 and C1) to 0.910 (between C4 and C2).
The outcomes of various Hellwig’s measures obtained in Example 4 are presented in
Table 8.
Table 8 highlights discrepancies in rankings for Hellwig’s measures based on Euclidean distance and various normalization formulas, though these differences are marginal. Spearman coefficients between Hellwig’s measures using Euclidean distance are as follows: S(H1, H2) = 1, S(H1, H3) = 0.987, and S(H2, H3) = 0.987. Similarly, a very strong correlation is observed for the Pearson coefficient: P(H1, H2) = 0.999, P(H1, H3) = 0.99998, and P(H2, H3) = 0.999.
For alternatives A5, A7, A8, and A10, the rankings consistently converge in all cases. Disparities for the remaining alternatives range only from 1 to 2 positions. Moreover, the Spearman coefficients between the HM measure and the other classical Hellwig measures are very strong: S(H1, HM) = 0.921, S(H2, HM) = 0.921, and S(H3, HM) = 0.947. Similarly, a strong Pearson correlation is observed when comparing these measures: P(H1, HM) = 0.827, P(H2, HM) = 0.829, and P(H3, HM) = 0.827. The highest concordance for HM is achieved with H2 for the Pearson coefficient and with H3 for the Spearman coefficient. The graphical representation of the results for Hellwig’s measures is depicted in
Figure 4.
It is worth noting that the alternative in the first position, according to the HM measure, has a value of 1. Additionally, Hellwig’s approach, neglecting the interaction between criteria, results in an overestimation of the values for the high-scoring alternatives A2, A3, A5, and A7 when compared with the HM measure. Conversely, the low-scoring alternative A1 is underestimated according to the HM measure.
Example 5. (moderate and strong correlation between criteria).
Table 9 presents both the data and the correlation matrix for the criteria outlined in Example 5. The Pearson coefficient varies from 0.656 (between C4 and C1) to 0.747 (between C4 and C3).
The outcomes of various Hellwig’s measures obtained in Example 5 are presented in
Table 10.
Table 10 highlights discrepancies in the rankings for Hellwig’s measures based on Euclidean distance and the various normalization formulas. The Spearman coefficients between Hellwig’s measures using Euclidean distance are identical, S(H1, H2) = S(H1, H3) = S(H2, H3) = 1, which denotes identical rank orderings of the alternatives. A very strong correlation is also observed for the Pearson coefficient: P(H1, H2) = 0.9996, P(H1, H3) = 0.99997, and P(H2, H3) = 0.9996, so the differences in the ratings are minimal.
For alternatives A1, A5, and A8, the rankings consistently converge in all cases. Disparities for the remaining alternatives range only from 1 to 3 positions. Moreover, the Spearman coefficients between the HM measure and the other classical Hellwig measures are strong: S(H1, HM) = S(H2, HM) = S(H3, HM) = 0.842. Similarly, a strong Pearson correlation is observed when comparing these measures: P(H1, HM) = 0.821, P(H2, HM) = 0.812, and P(H3, HM) = 0.819. The highest concordance for HM is achieved with H1. The graphical representation of the results for Hellwig’s measures is depicted in
Figure 5.
It is worth noting that Hellwig’s methods based on Euclidean distance, neglecting the interaction between criteria, result in an overestimation of the values of the high-scoring alternatives A2, A3, A5, A7, and A8. Conversely, the low-scoring alternatives A1 and A9 are underestimated when compared to the HM measure.
Table 11 compares the results obtained in the five examples.
The results can be summarized as follows:
Firstly, it should be noted that the normalization formula has an impact on the final ranking when the Euclidean distance is implemented, but this impact is only marginal. This does not occur with the Mahalanobis distance, as the results remain the same regardless of the type of normalization employed.
Secondly, it can be observed that the rankings obtained using classical Hellwig methods based on Euclidean distance and Hellwig methods based on Mahalanobis distance are different when there is a certain dependence within the data. Those results are consistent with other results in the literature [
31]. Even in the case of moderate or weak relationships between the criteria, the ratings obtained by the classical Hellwig methods and those obtained by HM do not coincide. It is also difficult to say which normalization formula, in the case of the Euclidean-based Hellwig method, gives results more consistent with the Mahalanobis distance-based Hellwig method in terms of the Pearson coefficient.
Thirdly, we can observe that Hellwig’s method, neglecting the interaction between criteria, usually results in an overestimation of the values of the high-scoring alternatives. Conversely, the low-scoring alternatives are underestimated when compared with their values in the Mahalanobis distance-based Hellwig method. It should be noted that these results are consistent with findings in the literature, where TOPSIS methods based on Euclidean and Mahalanobis distances were compared [
31].