Article

An Optimization Model with “Perfect Rationality” for Expert Weight Determination in MAGDM

School of Mathematics, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(14), 2286; https://doi.org/10.3390/math13142286
Submission received: 20 June 2025 / Revised: 10 July 2025 / Accepted: 14 July 2025 / Published: 16 July 2025

Abstract

Given the evaluation data of all the experts in multi-attribute group decision making, this paper establishes an optimization model for learning and determining expert weights based on minimizing the sum of the differences between the individual evaluation and the overall consistent evaluation results. The paper proves the uniqueness of the solution of the optimization model and rigorously proves that the expert weights obtained by the model have “perfect rationality”, i.e., the weights are inversely proportional to the distance to the “overall consistent scoring point”. Based on the above characteristics, the optimization problem is further transformed into solving a system of nonlinear equations to obtain the expert weights. Finally, numerical experiments are conducted to verify the rationality of the model and the feasibility of transforming the problem into a system of nonlinear equations. Numerical experiments demonstrate that the deviation metric for the expert weights produced by our optimization model is significantly lower than that obtained under equal weighting or the entropy weight method, and it approaches zero. Within numerical tolerance, this confirms the model’s “perfect rationality”. Furthermore, the weights determined by solving the corresponding nonlinear equations coincide exactly with the optimization solution, indicating that a dedicated algorithm grounded in perfect rationality can directly solve the model.

1. Introduction

Multi-criteria decision making (MCDM) is a key branch of decision theory. It can be divided into two categories based on the nature of the solution space: continuous and discrete. Continuous MCDM is addressed using multi-objective decision making (MODM), while discrete MCDM is tackled through multi-attribute decision making (MADM). This paper focuses exclusively on the latter. In the literature, the term MCDM often refers to discrete MCDM (i.e., MADM), and we will use this terminology throughout. Over recent decades, numerous MADM methods have been proposed, including AHP [1,2,3], ANP [4], TOPSIS [5,6,7,8,9], ELECTRE [10,11,12], VIKOR [13,14], and PROMETHEE [15,16,17]. Simultaneously, as the volume of information and influencing factors in decision-making processes has increased, it has become evident that scientific decision making cannot be achieved by an individual alone. This has led to the development of group decision making (GDM), which harnesses collective wisdom for optimal decisions [18,19,20]. Today, multi-attribute group decision making (MAGDM) is widely applied across various fields, including engineering, technology, economics, management, and the military. This paper specifically examines MAGDM.
MAGDM typically involves several key elements as follows:
  • Multiple Alternatives: Before making a decision, the decision maker (DM) must evaluate various alternatives.
  • Multiple Evaluation Criteria (Attributes): Decision makers must identify relevant factors that may affect the decision. These factors may be independent or interrelated, and it is necessary to define and measure the applicable criteria (attributes) before scoring by experts.
  • Allocation of Criteria Weights: Different criteria have varying levels of influence on the decision, necessitating distinct weights for each. Typically, the allocation of criteria weights is normalized [21,22].
  • Allocation of Expert Weights: Since GDM involves a group rather than a single individual, the allocation of expert weights is essential. The influence of each DM on the final decision varies, which requires assigning appropriate weights to each expert.
The determination of weights plays a critical role in MAGDM, making the problem of assigning reasonable weights to both experts and criteria a topic of significant interest. The weights of criteria reflect their relative importance and are crucial to the accuracy of decision results. Currently, methods for determining criteria weights tend to be simple, feasible, and practical. However, expert weight calculation is often complex, and its objectivity and accuracy are sometimes questionable, leaving the issue unresolved. Without any prior knowledge of the experts, a common assumption is that all experts are equally weighted. However, factors such as popularity, title, education, field relevance, and years of experience can all influence an expert’s contribution to the final judgment. Thus, different experts should be assigned distinct weights.
The variation in expert weights in GDM significantly alters the final decision, making the determination of expert weights a central concern in academic research. Generally, expert weight determination methods fall into three categories: subjective weighting, objective weighting, and a combination of both.
Subjective Weighting: In this method, the weights of each expert are predetermined or established through interactions between experts [23,24,25]. While this method is independent of the evaluation results, it requires a high degree of expertise and familiarity among the experts. However, it remains highly subjective and uncertain.
Objective Weighting: This method determines expert weights based on a judgment matrix constructed from expert evaluations of the information. The weights change as evaluation results evolve [26]. For example, [27] addresses the supplier selection problem by integrating expert weights using hesitancy and similarity in evaluations, applying the TOPSIS method to rank suppliers. Similarly, [28] introduces fault detection into FMEA and uses fuzzy theory and D-S evidence theory to allocate expert weights more reasonably, addressing the subjective limitations of traditional FMEA.
Combined Subjective and Objective Weighting: Some studies combine both subjective and objective perspectives. For instance, [29] integrates subjective and objective weight matrices using the minimum entropy criterion, accounting for differing judgments by all DMs. Similarly, ref. [30] combines expert weight coefficients and entropy weighting through fuzzy hierarchical analysis (FAHP), integrating both subjective and objective weights based on criteria characteristics to determine final weights.
This paper proposes an optimization model for determining expert weights in MAGDM, drawing on evaluation data from all experts. The model’s theoretical foundations are rigorously analyzed, and numerical experiments validate the conclusions. In comparison to existing expert-weighting methods, the proposed optimization model delivers several key advantages and overcomes their inherent limitations. Unlike purely subjective approaches, it requires no self-assessment or pairwise comparisons, thereby eliminating arbitrary bias and uncertainty. Unlike conventional objective methods—such as entropy weighting, which derive weights solely from score dispersion—our model directly minimizes the aggregate deviation between individual evaluations and the group consensus point, ensuring maximal coherence in the final decision. Together with its proven existence, uniqueness, and “perfect rationality,” as well as an efficient nonlinear-equation–based solution algorithm, these features demonstrate the necessity and superiority of our framework for robust, transparent, and computationally tractable expert-weight assignment in MAGDM [31,32,33]. We hope to find a more objective method for determining expert weights, one that has better properties than other methods. To achieve this, we attempt to use optimization-related theories for derivation and verification.
The contributions of this paper are as follows:
  • An optimization model is developed to minimize the discrepancies between individual expert evaluations and the overall group evaluation.
  • The existence and uniqueness of the model’s solution are theoretically proven under the assumption that the integrated matrix has full column rank, and verified through numerical experiments.
  • The expert weights derived from the optimization model exhibit “perfect rationality”, meaning they are inversely proportional to the distance from the “overall consistent scoring point”. This property is demonstrated through theoretical derivations.
  • A simplified algorithm for solving expert weights is proposed, leveraging the “perfect rationality” principle and transforming the model into a system of nonlinear equations.
The structure of the paper is as follows: Section 2 introduces the problem to be solved. Section 3 presents the optimization model for determining expert weights based on overall consistency in expert evaluations. Section 4 analyzes the model, proves the uniqueness of the solution, demonstrates “perfect rationality”, and outlines a simplified solution algorithm. Section 5 presents numerical experiments that validate the proposed methods.

2. Problem Description

Determining expert weights objectively is a central issue in current academic research. Existing objective weighting methods can be grouped into three main approaches as follows: (1) using experts to determine the proportion of the judgment matrix scale to assign expert weights, assessing the importance, truthfulness, and credibility of the information provided by each expert, and then determining their weight [26,34]; (2) employing the eigenvectors of the judgment matrix to evaluate the contribution of the information provided by each expert in order to calculate their weight [35,36]; and (3) using the consistency ratio of the judgment matrix to assess the quality of the judgmental information provided by the experts and determine their corresponding weights [37,38,39].
In response to this, this paper develops an optimization model to determine expert weights based on the evaluation results of all experts involved in decision making. The objective is to minimize the sum of differences between individual evaluations and the overall consistent evaluation. For multi-attribute group decision making (MAGDM) involving multiple alternatives, the following sets are commonly used:
  • The set of criteria (attributes): $C = \{c_1, c_2, \ldots, c_n\}$, where $n$ represents the number of criteria associated with the alternatives being evaluated.
  • The set of alternatives: $A = \{a_1, a_2, \ldots, a_s\}$, where $s$ represents the number of alternatives under evaluation.
  • The set of experts (decision makers): $DM = \{DM_1, DM_2, \ldots, DM_m\}$, where $m$ represents the number of experts involved in the evaluation.
In practice, multiple experts are often tasked with scoring alternatives from different perspectives (i.e., based on various criteria) and making decisions based on these scores. Given the variability in their qualifications, experience, and expertise, not all experts hold equal authority in the decision-making process. This highlights the necessity of developing a robust method to determine expert weights. It can be argued that the evaluations of experts with more experience, specialized knowledge, and seniority carry more weight in decision making. In other words, experts whose evaluations are more authoritative should be assigned higher weights. In this paper, expert importance is measured by the difference between individual evaluations and the comprehensive group evaluation. The smaller this difference, the closer the expert’s evaluation is to the group’s consistent evaluation, indicating a higher level of agreement among experts. Based on this concept, we propose an optimization model for determining expert weights [40].

3. The Optimization Models for Expert Weight Determination

Assume that expert $DM_j$ assigns a score of $u_{ij}^k$ to the $k$-th criterion $c_k$ of alternative $a_i$. The vector of scores provided by expert $DM_j$ for alternative $a_i$ can be represented as $\mathbf{u}_{ij} = (u_{ij}^1, u_{ij}^2, \ldots, u_{ij}^n)^T$, where $i = 1, 2, \ldots, s$, $j = 1, 2, \ldots, m$, and $k = 1, 2, \ldots, n$. For alternative $a_i$, the overall score is given by the matrix $U_i$, as follows:
$$U_i = \begin{pmatrix} \mathbf{u}_{i1} & \mathbf{u}_{i2} & \cdots & \mathbf{u}_{im} \end{pmatrix} = \begin{pmatrix} u_{i1}^1 & \cdots & u_{im}^1 \\ \vdots & \ddots & \vdots \\ u_{i1}^n & \cdots & u_{im}^n \end{pmatrix}. \tag{1}$$
For matrix $U_i$, the $k$-th row represents the evaluation scores from all experts for the $k$-th criterion of alternative $a_i$, and the $j$-th column represents the evaluation scores from expert $DM_j$ for all criteria of alternative $a_i$. The expert weight vector is denoted as $\mathbf{w} = (w_1, w_2, \ldots, w_m)^T$, which needs to be determined.
The core idea of this paper is to develop an optimization model that uses the evaluation results of experts on multiple attributes of each alternative to objectively determine their respective weights. It is important to note that the term “experts” here is generalized as follows: all decision makers (DMs) involved in the process are considered “experts” in this context. In practical group decision making (GDM), decision makers often include individuals with varied expertise, and multiple groups may contribute their suggestions.
For alternative $a_i$, there exists an ideal scoring result, which is obtained by linearly weighting the evaluation results from all $m$ experts. This result reflects the consensus of the expert group and is referred to as the “consistent scoring point” of alternative $a_i$, denoted as $\mathbf{b}_i$. This consistent scoring point is an $n$-dimensional vector expressed as follows:
$$\mathbf{b}_i = \sum_{j=1}^m w_j \mathbf{u}_{ij} = U_i \mathbf{w}. \tag{2}$$
The distance between an individual expert’s scoring vector $\mathbf{u}_{ij}$ and the consistent scoring point $\mathbf{b}_i$ is given by $d_{ij} = \|\mathbf{u}_{ij} - \mathbf{b}_i\|_2$, which measures the degree to which expert $DM_j$’s evaluation aligns with the expert group’s evaluation. A larger value of $d_{ij}$ indicates a greater difference between the expert’s evaluation and the group’s consensus, suggesting lower authority for expert $DM_j$ in the group decision-making process. As a result, experts with larger $d_{ij}$ values should be assigned lower weights in order to improve the consistency of group decision making. The total discrepancy for alternative $a_i$, denoted as $D_i$, is the sum of all individual distances, expressed as follows:
$$D_i = \sum_{j=1}^m d_{ij} = \sum_{j=1}^m \|\mathbf{u}_{ij} - \mathbf{b}_i\|_2. \tag{3}$$
To make full use of the evaluation information from all experts for each alternative, the data corresponding to all alternatives can be integrated into a block matrix. This matrix allows for the determination of an overall “consistent scoring point”, which serves as the basis for constructing an integrated optimization model to derive a unique set of expert weights. Let $S$ denote this integrated matrix as follows (Figure 1):
$$S = \begin{pmatrix} U_1 \\ U_2 \\ \vdots \\ U_s \end{pmatrix} = \begin{pmatrix} \mathbf{u}_{11} & \mathbf{u}_{12} & \cdots & \mathbf{u}_{1m} \\ \mathbf{u}_{21} & \mathbf{u}_{22} & \cdots & \mathbf{u}_{2m} \\ \vdots & \vdots & & \vdots \\ \mathbf{u}_{s1} & \mathbf{u}_{s2} & \cdots & \mathbf{u}_{sm} \end{pmatrix}_{ns \times m} = \begin{pmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_m \end{pmatrix}. \tag{4}$$
Here, each column vector $\mathbf{p}_j$ represents the scores assigned by expert $DM_j$ to the criteria of all alternatives. The “overall consistent scoring point” $\mathbf{b}$ is the $ns$-dimensional column vector consisting of the consistent scoring points $\mathbf{b}_i$ for all alternatives, and is given by:
$$\mathbf{b} = \begin{pmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \\ \vdots \\ \mathbf{b}_s \end{pmatrix}_{ns \times 1} = \sum_{j=1}^m w_j \mathbf{p}_j = S \mathbf{w}. \tag{5}$$
The consistent scoring point of each alternative reflects only the group’s overall opinion on that particular alternative. However, to rank the alternatives or select the best among them, these per-alternative consensus evaluations must be integrated so as to unify the final decision result. We call this integrated result the “overall consistent scoring point”; it represents the consensus evaluation conclusion of the decision group over the entire decision-making process.
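As a concrete illustration, the assembly of $S$ and $\mathbf{b}$ can be sketched in a few lines of Python with NumPy; the score values and weights below are invented toy data, not taken from the paper’s experiments.

```python
import numpy as np

# Toy data: s = 2 alternatives, n = 2 criteria, m = 3 experts (invented values).
U1 = np.array([[7., 8., 6.],   # rows: criteria of alternative a_1
               [5., 6., 7.]])  # columns: experts DM_1, DM_2, DM_3
U2 = np.array([[9., 7., 8.],
               [6., 5., 6.]])

# Integrated (ns x m) matrix S: stack the per-alternative matrices U_i.
S = np.vstack([U1, U2])

# Column j of S is p_j, expert DM_j's scores for all criteria of all alternatives.
p_2 = S[:, 1]

# Given expert weights w (summing to 1), the overall consistent scoring
# point is b = S @ w, the concatenation of the per-alternative points b_i.
w = np.array([0.3, 0.4, 0.3])
b = S @ w
```

Note that $\mathbf{b}$ computed from $S$ is exactly the stacked vector of the per-alternative points $U_i \mathbf{w}$.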
The distance between the evaluation scores of expert $DM_j$ and the overall consistent scoring point $\mathbf{b}$ is given by $s_j = \|\mathbf{p}_j - \mathbf{b}\|_2$. Based on this, the optimization problem is formulated as follows:
$$\begin{aligned} \min_{\mathbf{w}} \quad & Q(\mathbf{w}) = \sum_{j=1}^m s_j = \sum_{j=1}^m \|\mathbf{p}_j - \mathbf{b}\|_2 \\ \text{s.t.} \quad & \sum_{j=1}^m w_j = 1, \quad 0 \le w_j \le 1, \quad j = 1, 2, \ldots, m. \end{aligned} \tag{6}$$
Here, Q ( w ) represents the objective function, which aims to minimize the sum of the distances between the individual expert evaluations and the overall consistent scoring point, thereby determining the expert weights that maximize the consistency of the group decision. Minimizing this objective ensures that the final expert weights reflect the most consistent evaluations.
Remark 1.
The model adopts the sum of distances rather than the sum of squared distances as its objective function; in other words, it uses the $L_2$ norm itself rather than its square. When the objective is instead formulated with squared distances, the optimization problem admits a unique local optimum, which occurs precisely at equal weighting. This result can be established by a straightforward proof. In other words, regardless of how large or small the discrepancies among decision makers’ evaluations may be, a squared-distance model will always assign equal weights to all experts. Consequently, it fails to capture the influence of inter-expert consistency on the resulting weights and thus cannot serve its intended purpose of deriving meaningful expert weights.
Remark 2.
The objective of the model is to determine expert weights based on their evaluations, with the aim of achieving consistency across the evaluations of multiple decision makers, where greater consistency is preferred. To this end, we have developed the optimization model described above, where the objective function represents the sum of the discrepancies between each decision maker’s evaluation and the overall consistency evaluation. By minimizing this objective function, we obtain the optimal expert weights.
On the one hand, individual evaluations should be consistent. If the differences between expert evaluations are large, the final comprehensive evaluation may be biased. Thus, we aim to minimize the distance between each expert’s evaluation and the “overall consistent scoring point” to increase the consistency of group decision making. Experts whose evaluations are closer to the overall evaluation should receive higher weights, as their evaluations are considered more informative and authoritative. On the other hand, since all experts are assumed to be specialized, it is important that the deviation between the integrated evaluation and each expert’s evaluation is not excessively large. The expert weights obtained through this optimization model ensure the highest consistency between the integrated decision and the individual expert evaluations, while also maintaining the professionalism and credibility of the final evaluation.
In contrast to existing methods, which often rely on judgment matrices that require experts to subjectively measure the importance of evaluation criteria, this approach directly uses expert evaluations, making it more feasible and easier to implement. This optimization model is more aligned with practical decision-making scenarios and thus has both theoretical and practical value.
Remark 3.
Building on this idea, the magnitude of the expert weights should be positively correlated with the consistency between the decision maker’s evaluation and the overall evaluation. In other words, the larger the discrepancy between the expert’s evaluation and the overall evaluation, the smaller the weight assigned to that expert, indicating a negative correlation. Moreover, in subsequent sections, we will theoretically establish this negative correlation and even demonstrate a strict inverse relationship, thereby confirming the fundamental purpose of our model.
The method of solving expert weights can be organized into the following algorithmic form (Algorithm 1):
Algorithm 1 Algorithm for solving expert weights
1: Each decision maker evaluates each criterion under each alternative to obtain $u_{ij}^k$
2: Form the evaluation matrix $U_i$ of the decision makers under each alternative
3: Concatenate the matrices to obtain the integrated matrix $S$, thereby obtaining the evaluation column vector $\mathbf{p}_j$ of each decision maker
4: Calculate the expression of the “overall consistent scoring point” $\mathbf{b}$
5: Use the SLSQP algorithm to solve the optimization model (6)
6: return expert weight vector $\mathbf{w}$
SLSQP is a member of the sequential quadratic programming (SQP) family. Each iteration typically involves constructing a quadratic subproblem, solving the QP, performing a line search or trust-region step, and updating the Hessian approximation, often via BFGS. The dominant computational cost of SLSQP arises from solving the quadratic programming (QP) subproblem in each iteration, primarily due to matrix decomposition. With $n$ denoting the number of optimization variables (here, the number of experts) and $m$ the number of constraints, the per-iteration time complexity is $O(n^3 + mn)$, leading to a total time complexity of $O(Kn^3)$, where $K$ is the number of iterations. The space complexity is $O(n^2 + mn)$.
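Steps 1–6 of Algorithm 1 can be sketched with SciPy’s SLSQP implementation; the random scores below stand in for real expert evaluations, and all variable names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
s_alt, n, m = 3, 4, 5                        # alternatives, criteria, experts
S = rng.uniform(0, 10, size=(s_alt * n, m))  # integrated score matrix (step 3)

def Q(w):
    """Objective (6): sum of distances ||p_j - b||_2 with b = S @ w (step 4)."""
    b = S @ w
    return np.linalg.norm(S - b[:, None], axis=0).sum()

# Step 5: minimize Q over the simplex {w : sum(w) = 1, 0 <= w_j <= 1}.
res = minimize(Q, x0=np.full(m, 1.0 / m), method='SLSQP',
               bounds=[(0.0, 1.0)] * m,
               constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}])
w_opt = res.x                                # step 6: expert weight vector
```

Starting from equal weights makes the initial point feasible, so the solver only has to descend within the simplex.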

4. Analysis of the Mathematical Properties of the Model

In this section, we introduce three key features of the model. First, based on the theory of partially convex optimization, we prove the uniqueness of the model’s solution. Second, by deriving the Karush–Kuhn–Tucker (KKT) conditions for the optimization problem, we demonstrate that the expert weights are inversely proportional to the distances from the “overall consistent scoring point”, thereby confirming the “perfect rationality” of the model. Finally, leveraging this inverse relationship, we reformulate the solution problem into a system of nonlinear equations, which improves the computational efficiency.

4.1. Existence and Uniqueness of the Solution

We prove the existence and uniqueness of the solution to the optimization problem (6), based on the theory of partially convex optimization.
Definition 1
(Convex Function [41]). Let $S \subseteq \mathbb{R}^n$ be a non-empty convex set, and let $f$ be a function defined on $S$. The function $f$ is called convex on $S$ if, for any $x_1 \in S$, $x_2 \in S$, $\lambda \in (0,1)$, the following inequality holds:
$$f(\lambda x_1 + (1-\lambda) x_2) \le \lambda f(x_1) + (1-\lambda) f(x_2). \tag{7}$$
Additionally, $f$ is called strictly convex on $S$ if the strict inequality holds when $x_1 \ne x_2$.
Definition 2
(Convex Optimization [41]). The constrained problem
$$\begin{aligned} \min \quad & f(x), \quad x \in \mathbb{R}^n \\ \text{s.t.} \quad & c_s(x) \le 0, \quad s = 1, 2, \ldots, k, \\ & d_t(x) = 0, \quad t = 1, 2, \ldots, l, \end{aligned} \tag{8}$$
is called a convex optimization problem if both the objective function $f(x)$ and the constraint functions $c_s(x)$ $(s = 1, 2, \ldots, k)$ are convex and the constraint functions $d_t(x)$ $(t = 1, 2, \ldots, l)$ are linear.
Lemma 1
(Existence of Convex Optimization Solutions [42]). Consider the convex optimization problem (8), and let $D$ be the feasible region of the problem, defined as follows:
$$D = \{ x \in \mathbb{R}^n \mid c_s(x) \le 0, \; s = 1, 2, \ldots, k; \; d_t(x) = 0, \; t = 1, 2, \ldots, l \}. \tag{9}$$
If D is a non-empty compact set and f is continuous on D, then an optimal solution x * exists.
Lemma 2
(Uniqueness of Convex Optimization Solutions [43]). Consider the convex optimization problem (8), and let D be the feasible region of the problem. Then,
1. 
If the problem has a local optimal solution x * , then x * is also the global optimal solution;
2. 
The set of global optimal solutions is convex;
3. 
If the problem has a local optimal solution x * and f ( x ) is strictly convex on D, then x * is the unique global optimal solution.
Theorem 1.
The objective function Q ( w ) is strictly convex when the columns of the matrix S are of full rank.
Proof. 
We define the objective function as follows: $Q(\mathbf{w}) = \sum_{j=1}^m \|\mathbf{p}_j - \mathbf{b}\|_2 = \sum_{j=1}^m \| \mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i \|_2$. Let $\mathbf{w}$ and $\mathbf{w}'$ be two distinct weight vectors that satisfy the constraints, with $\mathbf{w} \ne \mathbf{w}'$ and $\lambda \in (0,1)$. We then consider the following:
$$Q(\lambda \mathbf{w} + (1-\lambda)\mathbf{w}') = \sum_{j=1}^m \left\| \mathbf{p}_j - \sum_{i=1}^m [\lambda w_i + (1-\lambda) w_i'] \mathbf{p}_i \right\|_2. \tag{10}$$
Next, we express the combination of $Q(\mathbf{w})$ and $Q(\mathbf{w}')$ as follows:
$$\lambda Q(\mathbf{w}) + (1-\lambda) Q(\mathbf{w}') = \lambda \sum_{j=1}^m \left\| \mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i \right\|_2 + (1-\lambda) \sum_{j=1}^m \left\| \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i \right\|_2. \tag{11}$$
Let $F_j$ and $H_j$ be defined as follows: $F_j = \lambda \| \mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i \|_2 + (1-\lambda) \| \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i \|_2$ and $H_j = \| \mathbf{p}_j - \sum_{i=1}^m [\lambda w_i + (1-\lambda) w_i'] \mathbf{p}_i \|_2$. Now, define the function $T(\lambda; \mathbf{w}, \mathbf{w}')$ as follows: $T(\lambda; \mathbf{w}, \mathbf{w}') = \lambda Q(\mathbf{w}) + (1-\lambda) Q(\mathbf{w}') - Q(\lambda \mathbf{w} + (1-\lambda)\mathbf{w}')$.
This yields the following:
$$T(\lambda; \mathbf{w}, \mathbf{w}') = \sum_{j=1}^m \left\{ \lambda \left\| \mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i \right\|_2 + (1-\lambda) \left\| \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i \right\|_2 - \left\| \mathbf{p}_j - \sum_{i=1}^m [\lambda w_i + (1-\lambda) w_i'] \mathbf{p}_i \right\|_2 \right\}. \tag{12}$$
Thus, we obtain the following: $T(\lambda; \mathbf{w}, \mathbf{w}') = \sum_{j=1}^m (F_j - H_j)$.
Since $\lambda \in (0,1)$ is positive and the $L_2$-norm of any vector is non-negative, both $F_j$ and $H_j$ must be non-negative. For brevity, write $\mathbf{v}_j = \mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i$ and $\mathbf{v}_j' = \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i$. Squaring both terms and taking the difference yields the following:
$$F_j^2 - H_j^2 = \left[ \lambda \|\mathbf{v}_j\|_2 + (1-\lambda) \|\mathbf{v}_j'\|_2 \right]^2 - \left\| \lambda \mathbf{v}_j + (1-\lambda) \mathbf{v}_j' \right\|_2^2. \tag{13}$$
Expanding both terms yields the following:
$$\begin{aligned} F_j^2 - H_j^2 ={}& \lambda^2 \|\mathbf{v}_j\|_2^2 + (1-\lambda)^2 \|\mathbf{v}_j'\|_2^2 + 2\lambda(1-\lambda) \|\mathbf{v}_j\|_2 \|\mathbf{v}_j'\|_2 \\ & - \lambda^2 \|\mathbf{v}_j\|_2^2 - (1-\lambda)^2 \|\mathbf{v}_j'\|_2^2 - 2\lambda(1-\lambda)\, \mathbf{v}_j^T \mathbf{v}_j'. \end{aligned} \tag{14}$$
This simplifies to the following:
$$F_j^2 - H_j^2 = 2\lambda(1-\lambda) \left[ \|\mathbf{v}_j\|_2 \|\mathbf{v}_j'\|_2 - \mathbf{v}_j^T \mathbf{v}_j' \right]. \tag{15}$$
Since $\lambda \in (0,1)$, it follows that $\lambda(1-\lambda)$ is a positive real number. By the Cauchy–Schwarz inequality, the following can be derived:
$$\mathbf{x}^T \mathbf{y} \le |(\mathbf{x}, \mathbf{y})| \le \|\mathbf{x}\|_2 \|\mathbf{y}\|_2, \tag{16}$$
where equality holds if and only if $\mathbf{x}$ and $\mathbf{y}$ are linearly related. From this, we deduce the following:
$$\|\mathbf{v}_j\|_2 \|\mathbf{v}_j'\|_2 \ge \mathbf{v}_j^T \mathbf{v}_j'. \tag{17}$$
Thus, it follows that $F_j^2 - H_j^2 \ge 0$. Since both $F_j$ and $H_j$ are non-negative, we conclude that $F_j \ge H_j$, which implies $T(\lambda; \mathbf{w}, \mathbf{w}') \ge 0$, and therefore the objective function $Q(\mathbf{w})$ is convex.
Since $\mathbf{w} \ne \mathbf{w}'$, for the inequality in (12) to be strict, $F_j - H_j = 0$ cannot hold simultaneously for all $j = 1, 2, \ldots, m$. To prove this, we proceed by contradiction. Suppose that $F_j = H_j$ for all $j$. Then, for each $j$, the vector $\mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i$ must be linearly related to the vector $\mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i$. This implies that for some scalar $k_j$, we have the following:
$$\mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i = k_j \left( \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i \right), \quad \forall j. \tag{18}$$
Since the columns of $S$ are assumed to be of full rank, the vectors $\mathbf{p}_i$ are linearly independent, so the coefficients of each $\mathbf{p}_i$ on the two sides of the above equation must coincide. This leads to the following system of equations:
$$\begin{cases} w_i = k_j w_i', & i \ne j, \\ 1 - w_i = k_j (1 - w_i'), & i = j. \end{cases} \tag{19}$$
For $j = 1, 2, \ldots, m$, $i \ne j$, we obtain the following:
$$w_i = k_1 w_i' \; (i \ne 1), \quad w_i = k_2 w_i' \; (i \ne 2), \quad \ldots, \quad w_i = k_m w_i' \; (i \ne m). \tag{20}$$
Additionally, for $i = j = 1, 2, \ldots, m$, we have the following:
$$1 - w_1 = k_1 (1 - w_1'), \quad 1 - w_2 = k_2 (1 - w_2'), \quad \ldots, \quad 1 - w_m = k_m (1 - w_m'). \tag{21}$$
From the system in (20), we deduce that $k_1 = k_2 = \cdots = k_m \equiv C$ and $w_i = C w_i'$ for all $i$. Substituting this into (21) yields $C = 1$, which implies that $\mathbf{p}_j - \sum_{i=1}^m w_i \mathbf{p}_i = \mathbf{p}_j - \sum_{i=1}^m w_i' \mathbf{p}_i$, and consequently, $w_i = w_i'$ for all $i = 1, 2, \ldots, m$, which contradicts the assumption that $\mathbf{w} \ne \mathbf{w}'$. Therefore, the assumption that $F_j = H_j$ for all $j$ is false.
Thus, the equalities $F_j - H_j = 0$ cannot hold simultaneously for all $j$, implying that the inequality $T(\lambda; \mathbf{w}, \mathbf{w}') \ge 0$ is strict. Consequently, we conclude that the objective function $Q(\mathbf{w})$ is strictly convex when the columns of $S$ are of full rank. □
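As an informal numerical companion to Theorem 1 (not a substitute for the proof), the convexity inequality can be spot-checked on random segments of the simplex; the tall random matrix below has full column rank with probability one, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
ns, m = 12, 4
S = rng.uniform(0, 10, size=(ns, m))  # tall random matrix: full column rank a.s.

def Q(w):
    """Objective (6): sum over experts of ||p_j - S @ w||_2."""
    b = S @ w
    return np.linalg.norm(S - b[:, None], axis=0).sum()

def random_simplex(m):
    """Draw a random feasible weight vector (nonnegative, summing to 1)."""
    v = rng.random(m)
    return v / v.sum()

# Check Q(lam*w + (1-lam)*w') <= lam*Q(w) + (1-lam)*Q(w') on random segments.
for _ in range(200):
    w1, w2 = random_simplex(m), random_simplex(m)
    lam = rng.uniform(0.05, 0.95)
    lhs = Q(lam * w1 + (1 - lam) * w2)
    rhs = lam * Q(w1) + (1 - lam) * Q(w2)
    assert lhs <= rhs + 1e-9
```

A finite sample of segments can of course only corroborate, never establish, strict convexity.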
Theorem 2.
If the columns of S are of full rank, the solution to the optimization problem (6) exists and is unique.
Proof. 
To facilitate analysis, we rewrite (6) in the standard form as follows:
$$\begin{aligned} \min_{\mathbf{w}} \quad & Q(\mathbf{w}) = \sum_{j=1}^m \|\mathbf{p}_j - \mathbf{b}\|_2 \\ \text{s.t.} \quad & w_j - 1 \le 0, \quad j = 1, 2, \ldots, m, \\ & -w_j \le 0, \quad j = 1, 2, \ldots, m, \\ & \sum_{j=1}^m w_j - 1 = 0. \end{aligned} \tag{22}$$
Here, the constraint functions $w_j - 1$ and $-w_j$ are convex, and the equality constraint $\sum_{j=1}^m w_j - 1 = 0$ is linear. Since $Q(\mathbf{w})$ is convex, by Definition 2, (6) is a convex optimization problem. The feasible region is the standard simplex, which is non-empty and compact, and $Q$ is continuous on it; therefore, by Lemma 1, an optimal solution exists.
From Theorem 1, we know that when the matrix S has full rank, Q ( w ) is not only convex but also strictly convex. Thus, by Lemma 2, the local optimal solution is also the global optimal solution. Therefore, the solution exists and is unique, completing the proof. □
Remark 4.
In many real-world problems, additional requirements or constraints often arise. To accommodate these conditions, supplementary constraints can be incorporated into the base model. For instance, in practical applications, if we know that expert $a$ is more qualified than expert $b$ for evaluating the alternatives, we can impose a new constraint $w_a > w_b$ in Equation (6) to ensure that the optimization problem respects these requirements. The addition of such constraints is flexible, provided they preserve the convexity of (6), thus guaranteeing the uniqueness of the solution and appropriate expert weights for the decision model.
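Such an ordering constraint slots directly into the SLSQP formulation as a linear inequality, which keeps the problem convex. The sketch below uses synthetic scores, illustrative indices $a = 0$ and $b = 1$, and a small margin `eps` standing in for the strict inequality; none of these choices come from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
ns, m = 10, 4
S = rng.uniform(0, 10, size=(ns, m))        # synthetic integrated score matrix

def Q(w):
    """Objective (6): sum of distances to the overall consistent scoring point."""
    b = S @ w
    return np.linalg.norm(S - b[:, None], axis=0).sum()

eps = 1e-3                                   # margin approximating w_a > w_b
constraints = [
    {'type': 'eq',   'fun': lambda w: w.sum() - 1.0},
    {'type': 'ineq', 'fun': lambda w: w[0] - w[1] - eps},  # w_a - w_b >= eps
]
res = minimize(Q, np.full(m, 1.0 / m), method='SLSQP',
               bounds=[(0.0, 1.0)] * m, constraints=constraints)
```

Because both added constraints are linear, the augmented problem remains a convex program, so the uniqueness argument of Theorem 2 still applies.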
The matrix S has n s rows and m columns, where n is the number of criteria and s is the number of alternatives. In practice, we aim to increase the number of decision makers (DMs) while maximizing the number of alternatives and evaluation criteria. This typically results in a matrix S with more rows than columns, increasing the likelihood that the columns of S are of full rank.

4.2. The “Perfect Rationality” of the Model

When constructing the model, we aim to ensure that the smaller the distance between an expert’s evaluation and the “overall consistent scoring point”, the greater the weight assigned to the expert. In other words, expert weights are negatively correlated with the distance, and the ideal situation is that the expert’s weight is strictly inversely proportional to the corresponding distance. Based on this negative correlation between weights and distances, we propose the following definition:
Definition 3
(Deviation). Let the product of the weight $w_j$ of expert $DM_j$ and the corresponding distance $s_j$ be denoted by $x_j = w_j \cdot s_j$. The deviation of the weight-distance product for expert $DM_j$ is defined as $e_j = |x_j - \bar{x}|$, where $\bar{x} = \frac{1}{m} \sum_{j=1}^m x_j$ is the average of the weight-distance products of all experts, and $m$ is the total number of experts involved in the decision-making process. Thus, the average relative deviation $\bar{D}(\mathbf{w})$ is defined as the “deviation” of the model when the weights are $\mathbf{w}$, as follows:
$$\bar{D}(\mathbf{w}) = \frac{1}{m} \sum_{j=1}^m \frac{e_j}{\bar{x}} = \frac{1}{m} \sum_{j=1}^m \frac{|x_j - \bar{x}|}{\bar{x}}. \tag{23}$$
Definition 4
(Perfect Rationality). The model is considered to exhibit “perfect rationality” under the expert weights when the “deviation” is zero.
Remark 5.
Based on the two definitions above, we can conclude that the model has “perfect rationality” if and only if the expert weights are strictly inversely proportional to the distances corresponding to the “overall consistent scoring point”. It is evident that $\bar{D}(\mathbf{w}) = 0$ implies $x_j - \bar{x} = 0$ for all $j$, which leads to $x_j = C$ for all $j$ and some constant $C$. This in turn means that $w_j \cdot s_j = C$ for all $j$, indicating that the weights are inversely proportional to the distances. The converse is also true.
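The deviation metric of Definition 3 is straightforward to compute for any candidate weight vector. The helper below is a sketch (the function name and data are invented); the symmetric two-expert example constructs a case where the weight-distance products coincide, so the deviation is exactly zero.

```python
import numpy as np

def deviation(S, w):
    """Average relative deviation D_bar(w) of the weight-distance products."""
    b = S @ w                                   # overall consistent scoring point
    s = np.linalg.norm(S - b[:, None], axis=0)  # distances s_j = ||p_j - b||_2
    x = w * s                                   # weight-distance products x_j
    return np.mean(np.abs(x - x.mean())) / x.mean()

# Symmetric toy case: two experts equidistant from b under equal weights,
# so x_1 = x_2 and the deviation vanishes.
S_toy = np.array([[0., 2.],
                  [0., 0.]])
w_toy = np.array([0.5, 0.5])
print(deviation(S_toy, w_toy))   # 0.0
```

In the paper’s experiments this is the quantity reported as the deviation metric; a value near zero certifies “perfect rationality” of a weight vector.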
Lemma 3
(Sufficient and Necessary Conditions for Optimality in Convex Optimization [41]). If the objective function f ( x ) and the constraint functions c s ( x ) ( s = 1 , 2 , , k ) and d t ( x ) ( t = 1 , 2 , , l ) are both differentiable and the optimization problem satisfies Slater’s condition, then the Karush–Kuhn–Tucker (KKT) conditions provide a sufficient and necessary condition for optimality. Slater’s condition ensures that the optimal duality gap is zero and that the dual optimum is attained. Thus, x is optimal if and only if both ( λ , ν ) and x satisfy the KKT conditions.
The optimal expert weights obtained from the optimization model (6) are denoted by $w^*$, where $w_j^*$ is the optimal weight of expert $DM_j$, and $s_j^*$ is the distance from the evaluation point of expert $DM_j$ to the “overall consistent scoring point” $b^* = \sum_{j=1}^{m} w_j^* p_j$. At this point, $s_j^* = \|p_j - b^*\|_2$.
Theorem 3.
The optimal expert weights obtained from the optimization model (6) exhibit “perfect rationality”.
Proof. 
The standard form of the optimization model (6) is written as (22), and by the definition of the KKT conditions [41], the KKT conditions for this optimization model are expressed as follows:
$$\begin{cases}
\nabla Q(w^*) + \sum_{j=1}^{m}\lambda_j^{*}\,\nabla(w_j^{*}-1) + \sum_{j=1}^{m}\mu_j^{*}\,\nabla(-w_j^{*}) + \nu^{*}\,\nabla\Big(\sum_{j=1}^{m}w_j^{*}-1\Big) = 0,\\[2pt]
\sum_{j=1}^{m}w_j^{*}-1 = 0,\\[2pt]
\lambda_j^{*}(w_j^{*}-1) = 0, \quad j = 1,\dots,m,\\[2pt]
\lambda_j^{*} \ge 0, \quad j = 1,\dots,m,\\[2pt]
\mu_j^{*}(-w_j^{*}) = 0, \quad j = 1,\dots,m,\\[2pt]
\mu_j^{*} \ge 0, \quad j = 1,\dots,m,\\[2pt]
w_j^{*}-1 \le 0, \quad j = 1,\dots,m,\\[2pt]
-w_j^{*} \le 0, \quad j = 1,\dots,m.
\end{cases}$$
From the last six conditions, since the optimal weights satisfy $0 < w_j^* < 1$, complementary slackness gives $\lambda_j^* = \mu_j^* = 0$ for all $j$, so the first equation simplifies to the following:
$$\frac{\partial Q(w^*)}{\partial w_k^*} + \nu^{*} = 0, \quad \forall k. \tag{25}$$
The partial derivative of $\|p_j - b^*\|_2$ with respect to $w_k^*$ is as follows:
$$\frac{\partial \|p_j - b^*\|_2}{\partial w_k^*} = \frac{-p_k^{T}(p_j - b^*)}{\|p_j - b^*\|_2}.$$
Thus, we can write the components of the gradient of $Q(w^*)$ as follows:
$$\frac{\partial Q(w^*)}{\partial w_k^*} = -p_k^{T}\left(\sum_{j=1}^{m}\frac{p_j - b^*}{\|p_j - b^*\|_2}\right).$$
Substituting this into (25), we get the following:
$$p_k^{T}\left(\sum_{j=1}^{m}\frac{p_j - b^*}{\|p_j - b^*\|_2}\right) = \nu^{*}, \quad \forall k.$$
The above equation holds for any $k$. Thus, for any $q, r \in \{1, 2, \dots, m\}$, we have the following:
$$(p_q - p_r)^{T}\left(\sum_{j=1}^{m}\frac{p_j - b^*}{\|p_j - b^*\|_2}\right) = 0.$$
Assuming $q \ne r$, it follows that $p_q \ne p_r$; since the relation above holds for every pair of experts, we conclude that
$$\sum_{j=1}^{m}\frac{p_j - b^*}{\|p_j - b^*\|_2} = 0.$$
Since $b^* = \sum_{j=1}^{m} w_j^* p_j$ and the evaluation vectors of the experts are linearly independent, the coefficient of each expert's evaluation vector in the above sum must be zero. For expert $DM_j$, whose evaluation vector is $p_j$, the corresponding equation is as follows:
$$\frac{w_j^{*}-1}{\|p_j - b^*\|_2} + \sum_{k \ne j}\frac{w_j^{*}}{\|p_k - b^*\|_2} = \left(w_j^{*}\sum_{k=1}^{m}\frac{1}{\|p_k - b^*\|_2}\right) - \frac{1}{\|p_j - b^*\|_2} = 0. \tag{30}$$
Substituting $s_j^* = \|p_j - b^*\|_2$, we rewrite Equation (30) as follows:
$$w_j^{*}\sum_{k=1}^{m}\frac{1}{s_k^{*}} = \frac{1}{s_j^{*}},$$
which simplifies to the following:
$$w_j^{*} \cdot s_j^{*} = \frac{1}{\sum_{k=1}^{m}\frac{1}{s_k^{*}}}. \tag{32}$$
Equation (32) holds for any $j$, and its right-hand side is a constant once the optimal weights are fixed. Thus, the product of each optimal weight and the corresponding distance to the “overall consistent scoring point” is constant, i.e., the optimal weights are strictly inversely proportional to the distances, so $e_j = 0$ and $\bar{D}(w^*) = 0$. Therefore, the model exhibits “perfect rationality”. □

4.3. Solving Algorithms Based on “Perfect Rationality”

The expert weights can be determined by solving the optimization problem for its optimal solution. From Theorem 3, we know that at the optimal solution the expert weights are inversely proportional to the distances. Since the KKT conditions are both necessary and sufficient for optimality here, we can transform the optimization model into an equivalent root-finding problem whose solution coincides with that of the original model. Based on this approach, we propose the following system of nonlinear equations:
$$\begin{cases}
x_1 \cdot s_1 \cdot \sum_{k=1}^{m}\frac{1}{s_k} = 1,\\[2pt]
x_2 \cdot s_2 \cdot \sum_{k=1}^{m}\frac{1}{s_k} = 1,\\[2pt]
\quad\vdots\\[2pt]
x_m \cdot s_m \cdot \sum_{k=1}^{m}\frac{1}{s_k} = 1,
\end{cases}$$
where $s_j = \left\|p_j - \sum_{k=1}^{m} x_k p_k\right\|_2$. The optimal weights $w^* = (w_1^*, w_2^*, \dots, w_m^*)^{T}$ are the solutions to this system of equations.
Since evaluating $\sum_{k=1}^{m}\frac{1}{s_k}$ in every equation of the above system is computationally costly, we can further transform the system to improve efficiency. The equivalent system of nonlinear equations is as follows:
$$\begin{cases}
x_1 \cdot s_1 = x_2 \cdot s_2,\\[2pt]
x_2 \cdot s_2 = x_3 \cdot s_3,\\[2pt]
\quad\vdots\\[2pt]
x_{m-1} \cdot s_{m-1} = x_m \cdot s_m,\\[2pt]
x_m \cdot s_m \cdot \sum_{k=1}^{m}\frac{1}{s_k} = 1.
\end{cases} \tag{34}$$
By solving this system of nonlinear equations, we can obtain the same expert weights as those obtained by solving the optimization model. This result will be further verified through numerical experiments.
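The first system also suggests a simple damped fixed-point scheme: recompute the distances, then reset each weight proportional to the reciprocal distance. The sketch below is our own illustrative construction (the experiments in Section 5 use SLSQP and a trust-region solver in MATLAB), applied to a small random instance:

```python
import numpy as np

def solve_weights(P, iters=5000, damping=0.5):
    """Damped fixed-point iteration for w_j proportional to 1/s_j(w),
    where s_j = ||p_j - P @ w||_2 and P has one column per expert."""
    m = P.shape[1]
    w = np.full(m, 1.0 / m)                           # start from equal weights
    for _ in range(iters):
        b = P @ w                                     # overall consistent scoring point
        s = np.linalg.norm(P - b[:, None], axis=0)    # distances s_j
        w_new = (1.0 / s) / (1.0 / s).sum()           # weights inversely proportional to s_j
        w = damping * w + (1.0 - damping) * w_new     # damped update
    return w

rng = np.random.default_rng(0)
P = rng.uniform(40, 100, size=(30, 7))                # 7 experts, 30 stacked scores each
w = solve_weights(P)
b = P @ w
s = np.linalg.norm(P - b[:, None], axis=0)
x = w * s
print((x.max() - x.min()) / x.mean())                 # relative spread of the products w_j * s_j
```

At the fixed point the products $w_j \cdot s_j$ coincide, so the printed spread should be near zero if the iteration has converged.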

5. Numerical Experiment

To validate the two models discussed above, we generated multiple sets of data and performed numerical experiments to observe regular patterns in the results. In the following section, we introduce and analyze the proposed models using the problem represented by the dataset in Table 1 as an example.
We select seven experts $D = \{DM_1, DM_2, \dots, DM_7\}$ to score five alternatives $A = \{a_1, a_2, \dots, a_5\}$ against six evaluation criteria $C = \{c_1, c_2, \dots, c_6\}$; decisions are made on the basis of these data. The evaluation matrix $U_i$ for each alternative is constructed, and the data presented in Table 1 are used to assemble the matrix $S$, from which the evaluation vector $p_j$ for each expert is derived. The expert weight vector is represented as $w = (w_1, w_2, w_3, w_4, w_5, w_6, w_7)^{T}$. All numerical experiments were performed on a laptop with an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (4 cores, 8 logical processors) running Windows 11, and all computations were carried out in MATLAB R2019a. In solving the optimization problem, the initial weight vector is set to the mean (equal) weight, and the optimal expert weights for models (6) and (34) are computed using the SLSQP and trust-region methods [41], respectively. In solving the nonlinear equation system, we likewise set the initial weights to equal weights and the convergence threshold to $1 \times 10^{-12}$; after 3 iterations in MATLAB, the expert weights are obtained. The results are presented in Table 2 and Table 3.
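For readers working outside MATLAB, the optimization route can be sketched with SciPy's SLSQP solver; this is a hedged illustration of model (6) with simplex constraints, on random data rather than the Table 1 dataset:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(P):
    """Minimize Q(w) = sum_j ||p_j - P @ w||_2 over the probability simplex."""
    m = P.shape[1]
    Q = lambda w: np.linalg.norm(P - (P @ w)[:, None], axis=0).sum()
    res = minimize(
        Q,
        x0=np.full(m, 1.0 / m),                       # start from mean (equal) weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * m,                      # 0 <= w_j <= 1
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # sum to 1
    )
    return res.x

rng = np.random.default_rng(1)
P = rng.uniform(40, 100, size=(30, 7))                # 7 experts, random evaluation vectors
w = optimal_weights(P)
b = P @ w
s = np.linalg.norm(P - b[:, None], axis=0)
print(w.round(4))
print((w * s).round(3))                               # products should be nearly equal
```

The near-equal products in the last line are the numerical signature of “perfect rationality” from Theorem 3.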
First, from the results in Table 2, it is evident that the optimization model yields optimal expert weights with a “deviation” of only $1.88 \times 10^{-6}$. At this point, the product of each expert's weight and the corresponding distance to the “overall consistent scoring point” is approximately 66.5948 for every expert. This confirms the inverse relationship between the weights and the distances, implying that the model, with its optimal expert weights, demonstrates “perfect rationality”. The relationship between the optimal expert weights and the corresponding distances is visualized in Figure 2. Additionally, when comparing the results under the optimal weights with those obtained under equally distributed expert weights, both the objective function value and the “deviation” are smaller under the optimal weights. This indicates that the optimization model's determination of expert weights is rational and justified.
Secondly, the results of solving for the expert weights using the optimization model and the nonlinear-equation model are presented in Table 2 and Table 3, respectively. The optimal expert weights obtained by the two models are identical: $w^* = (0.1808, 0.1366, 0.1304, 0.1184, 0.1526, 0.1327, 0.1485)^{T}$. From the results in both tables, it is clear that the optimization model requires 10 iterations to solve for the expert weights, while the nonlinear-equation model requires only 3 iterations. This demonstrates that transforming the problem into a system of nonlinear equations reduces the number of iterations and improves computational efficiency, confirming the feasibility of solving for the expert weights via the nonlinear system.
To assess the effectiveness of our proposed weighting model, we applied the entropy weight method (EWM) to the same dataset (Table 1) and compared its expert weights with those obtained from our optimization model (Table 2 and Table 3). We also computed the deviation values under the entropy-based weights to facilitate a direct comparison.
The entropy method assigns higher weights to experts whose evaluation scores exhibit greater dispersion (and therefore more information). This principle aligns with the rationale of our optimization model, making it a suitable benchmark. Specifically, each column of the evaluation matrix is normalized to [ 0 , 1 ] , and then entropy and dispersion are calculated; the normalized dispersion scores yield the entropy-based weights. We then computed the corresponding deviation metric for these weights and present the results in Table 4.
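A minimal sketch of the entropy weight method as described above; the min-max scaling and the small epsilon guard against $\log 0$ are our assumptions, since such details vary across the literature:

```python
import numpy as np

def entropy_weights(S, eps=1e-12):
    """Entropy weight method: experts whose score columns are more dispersed
    (carry more information) receive larger weights. S has one column per expert."""
    n = S.shape[0]
    Z = (S - S.min(axis=0)) / (S.max(axis=0) - S.min(axis=0))  # scale each column to [0, 1]
    Pp = (Z + eps) / (Z + eps).sum(axis=0)                     # column-wise proportions
    e = -(Pp * np.log(Pp)).sum(axis=0) / np.log(n)             # normalized entropy per expert
    d = 1.0 - e                                                # dispersion (information) score
    return d / d.sum()                                         # normalized entropy-based weights

rng = np.random.default_rng(2)
S = rng.uniform(40, 100, size=(30, 7))     # random stand-in for the evaluation matrix
w = entropy_weights(S)
print(w.round(4))                          # weights are positive and sum to 1
```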
As shown, although the two methods produce different weight magnitudes, the overall ranking of decision makers remains largely unchanged. Crucially, the deviation values under our optimization model are substantially lower than those from the entropy method. This demonstrates that the expert weights derived from our model achieve a higher level of consensus in the decision-making process.
To further validate the conclusion that the expert weights are inversely proportional to the corresponding distances, we randomly generated 100 datasets and analyzed their “deviation”. The numbers of alternatives and criteria in each decision group were randomly selected from the interval $[5, 25]$. The number of experts was likewise chosen at random from this interval, subject to the condition that it does not exceed the product of the numbers of alternatives and criteria, so that the matrix of stacked evaluation vectors can have full column rank and the expert weights exist and are unique. These 100 datasets represent abstractions of distinct decision problems, each characterized by different numbers of decision makers, alternatives, and evaluation criteria (see Table 5). We performed a statistical analysis of the expert weights computed for each case; the results provide evidence of both the validity and the generalizability of our conclusions. Moreover, the heterogeneous levels of agreement among decision makers across these problems capture the range of consensus and conflict scenarios that can arise in practice, and such variability can significantly influence a model's effectiveness and fairness in real-world decision making. By examining outcomes over all 100 cases, we further demonstrate the robustness and broad applicability of the proposed weight-determination method.
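The sampling scheme for these datasets can be sketched as follows; the uniform score range and the redraw-until-full-rank loop are our assumptions, since the paper does not publish its generator:

```python
import numpy as np

def random_instance(rng):
    """Draw one MAGDM instance: counts in [5, 25], experts bounded by
    alternatives * criteria, and the stacked evaluation matrix of full column rank."""
    n_alt = rng.integers(5, 26)                         # number of alternatives
    n_crit = rng.integers(5, 26)                        # number of criteria
    m = rng.integers(5, min(26, n_alt * n_crit + 1))    # number of experts
    while True:
        # one column per expert: all alternative-by-criterion scores stacked
        P = rng.uniform(40, 100, size=(n_alt * n_crit, m))
        if np.linalg.matrix_rank(P) == m:               # full column rank -> unique weights
            return P

rng = np.random.default_rng(3)
P = random_instance(rng)
print(P.shape)                                          # (alternatives * criteria, experts)
```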
Under these conditions, we generated 100 datasets and conducted numerical experiments to compute the corresponding expert weights, the distances to the “overall consistent scoring point”, and their products. The “deviation” for each dataset is summarized in Table 6, and the frequency distributions of the “deviation” under the optimal weights and under the mean weights are shown in Figure 3 and Figure 4, respectively. As illustrated in Figure 3, the “deviation” is small across all experiments, suggesting that, up to the inevitable numerical errors, the expert weights are inversely proportional to the distances. This confirms that the weights obtained by the model exhibit “perfect rationality”.

6. Summary

Multi-Attribute Group Decision Making (MAGDM) is a key area within modern decision science, with widespread applications across various fields. A major challenge in MAGDM is the objective determination of expert weights. The contribution of this paper lies in proposing a novel approach for determining expert weights by establishing an optimization model. We first assess the consistency of expert evaluations for each alternative with respect to the “overall consistent scoring point”. Then, we use the expert scores and the objective function’s distance to reflect the overall differences in evaluating alternatives. The solution to the optimization problem yields the ideal expert weights.
The expert weights obtained through this model exhibit three key characteristics. First, based on convex optimization theory, we prove the uniqueness and existence of the solution to the optimization model, ensuring that the expert weights are globally optimal. Second, the distance between an expert’s evaluation result and the “overall consistent scoring point” is inversely proportional to the weight assigned to the expert. This relationship demonstrates that the model’s expert weights possess “perfect rationality”. Finally, leveraging this “perfect rationality”, we transform the optimization problem into a system of nonlinear equations, which reduces the number of iterations and improves the solution efficiency when compared to the optimization model. These conclusions are validated through numerical experiments. Furthermore, in practical applications, users can adjust the constraints based on the specific context, enhancing the model’s adaptability and expressiveness—an important advantage of the proposed approach.

Author Contributions

Conceptualization, C.H.; methodology, C.H. and S.Z.; software, Y.L. and Q.H.; validation, Y.L., Q.H., C.H., and S.Z.; formal analysis, C.H.; investigation, Y.L. and Q.H.; resources, C.H.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., C.H., S.Z., and Q.H.; visualization, Y.L.; supervision, C.H. and S.Z.; project administration, C.H.; funding acquisition, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “Key Research Program” of the Ministry of Science and Technology of the People’s Republic of China, grant No. 2022YFC3801300.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their gratitude to Han Huilei and Huang Li for their valuable assistance.

Conflicts of Interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

References

1. Saaty, T.L. A Scaling Method for Priorities in Hierarchical Structures. J. Math. Psychol. 1977, 15, 234–281.
2. Darko, A.; Chan, A.P.C.; Ameyaw, E.E.; Owusu, E.K.; Pärn, E.; John Edwards, D. Review of application of analytic hierarchy process (AHP) in construction. Int. J. Constr. Manag. 2019, 19, 436–452.
3. Liu, Y.; Eckert, C.M.; Earl, C. A review of fuzzy AHP methods for decision-making with subjective judgements. Expert Syst. Appl. 2020, 161, 113738.
4. Saaty, T.L. Decision Making with Dependence and Feedback: The Analytic Network Process: The Organization and Prioritization of Complexity; Rws Publications: Pittsburgh, PA, USA, 2001.
5. Hwang, C.L.; Lai, Y.J.; Liu, T.Y. A new approach for multiple objective decision making. Comput. Oper. Res. 1993, 20, 889–899.
6. Chakraborty, S. TOPSIS and Modified TOPSIS: A comparative analysis. Decis. Anal. J. 2022, 2, 100021.
7. Çelikbilek, Y.; Tüysüz, F. An in-depth review of theory of the TOPSIS method: An experimental analysis. J. Manag. Anal. 2020, 7, 281–300.
8. Chen, P. Effects of the entropy weight on TOPSIS. Expert Syst. Appl. 2021, 168, 114186.
9. Wang, Y.; Liu, P.; Yao, Y. BMW-TOPSIS: A generalized TOPSIS model based on three-way decision. Inf. Sci. 2022, 607, 799–818.
10. Roy, B. The outranking approach and the foundations of electre methods. Theory Decis. 1991, 31, 49–73.
11. Figueira, J.; Mousseau, V.; Roy, B. Electre Methods. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: New York, NY, USA, 2005; pp. 133–153.
12. Qu, G.; Zhang, Z.; Qu, W.; Xu, Z. Green Supplier Selection Based on Green Practices Evaluated Using Fuzzy Approaches of TOPSIS and ELECTRE with a Case Study in a Chinese Internet Company. Int. J. Environ. Res. Public Health 2020, 17, 3268.
13. Opricovic, S.; Tzeng, G.H. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 2004, 156, 445–455.
14. Gul, M.; Ak, M.F.; Guneri, A.F. Pythagorean fuzzy VIKOR-based approach for safety risk assessment in mine industry. J. Saf. Res. 2019, 69, 135–153.
15. Brans, J.P.; Vincke, P. Note-A Preference Ranking Organisation Method: The PROMETHEE Method for Multiple Criteria Decision-Making. Manag. Sci. 1985, 31, 647–656.
16. Tong, L.Z.; Wang, J.; Pu, Z. Sustainable supplier selection for SMEs based on an extended PROMETHEE approach. J. Clean. Prod. 2022, 330, 129830.
17. Abdullah, L.; Chan, W.; Afshari, A. Application of PROMETHEE method for green supplier selection: A comparative result based on preference functions. J. Ind. Eng. Int. 2019, 15, 271–285.
18. Mohammadi, M.; Rezaei, J. Bayesian best-worst method: A probabilistic group decision making model. Omega 2020, 96, 102075.
19. Tang, M.; Liao, H. From conventional group decision making to large-scale group decision making: What are the challenges and how to meet them in big data era? A state-of-the-art survey. Omega 2021, 100, 102141.
20. Tindale, R.S.; Winget, J.R. Group Decision-Making. In Oxford Research Encyclopedia of Psychology; Oxford Research Encyclopedia: Oxford, UK, 2019.
21. Triantaphyllou, E.; Shu, B.; Sanchez, S.N.; Ray, T.G. Multi-Criteria Decision Making: An Operations Research Approach. In Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons: New York, NY, USA, 1998.
22. Liu, J.N.; Zhuang, W.M. Evaluation of construction project mission statement based on architectural planning theory—An attempt of contemporary evaluation system in the context of big data. Des. Community 2016, 90–93.
23. Alemi-Ardakani, M.; Milani, A.S.; Yannacopoulos, S.; Shokouhi, G. On the effect of subjective, objective and combinative weighting in multiple criteria decision making: A case study on impact optimization of composites. Expert Syst. Appl. 2016, 46, 426–438.
24. Xu, D.; Wang, S.Y.; Gan, Z.G.; Bai, Y.G. Risk analysis of sudden water pollution accidents based on improved hierarchical analysis. Water Resour. Hydropower Eng. 2020, 51, 159–166.
25. Azadfallah, M. The Extraction of Expert Weights from Pair Wise Comparisons in Delphi Method. J. Arab. Islam. Stud. 2015, 3.
26. Koksalmis, E.; Kabak, Ö. Deriving decision makers’ weights in group decision making: An overview of objective methods. Inf. Fusion 2019, 49, 146–160.
27. Lin, Y.; Zhang, R.J.; Wu, H.S. Expert weight determination method based on hesitancy and similarity and its application. Control Decis. 2021, 36, 1482–1488.
28. Wei, K.J.; Geng, J.B.; Xu, S.Q. An FMEA approach based on fuzzy theory and D-S evidence theory. Syst. Eng. Electron. 2019, 41, 2662–2668.
29. He, D.Y.; Chen, X.L.; Xu, J.Q. Minimum cross-entropy-based weight integration method in multi-attribute group decision problems. Control Decis. 2017, 32, 378–384.
30. Fu, Z.Y.; Lan, J.Y.; Wang, F. Application of improved fuzzy hierarchical analysis in green packaging evaluation. Packag. Eng. 2021, 42, 230–236.
31. Bai, C.; Zhu, Q.; Sarkis, J. Circular economy and circularity supplier selection: A fuzzy group decision approach. Int. J. Prod. Res. 2024, 62, 2307–2330.
32. Xiong, Z.; Lin, Y.; Wang, Q.; Yang, W.; Shen, C.; Zhang, J.; Zhu, K. Research on safety performance evaluation and improvement path of prefabricated building construction based on DEMATEL and NK. Appl. Sci. 2024, 14, 8010.
33. Chen, Z.; Zhong, P.; Liu, M.; Ma, Q.; Si, G. An integrated expert weight determination method for design concept evaluation. Sci. Rep. 2022, 12, 6358.
34. Mou, N.Z.; Chang, J.P.; Chen, Z.S. Sustainable supplier selection based on PD-HFLTS with group decision theory. Comput. Integr. Manuf. Syst. 2018, 24, 1261–1278.
35. Liu, Y.; Yu, M.; Liu, Y. A bus route service quality evaluation model based on passenger perception. J. Northeast. Univ. Sci. 2019, 40, 750–755.
36. Németh, B.; Molnár, A.; Bozóki, S.; Wijaya, K.; Inotai, A.; Campbell, J.D.; Kaló, Z. Comparison of weighting methods used in multicriteria decision analysis frameworks in healthcare with focus on low- and middle-income countries. J. Comp. Eff. Res. 2019, 8, 195–204.
37. Rezaei, J. Best-worst multi-criteria decision-making method. Omega 2015, 53, 49–57.
38. Liang, F.; Brunelli, M.; Rezaei, J. Consistency issues in the best worst method: Measurements and thresholds. Omega 2020, 96, 102175.
39. Pamucar, D.; Ecer, F. Prioritizing the weights of the evaluation criteria under fuzziness: The fuzzy full consistency method–FUCOM-F. Facta Univ. Ser. Mech. Eng. 2020, 18, 419–437.
40. Liu, Y.; Hu, C.; Zhang, S.; Hu, Q. A New Approach to the Determination of Expert Weights in Multi-attribute Group Decision Making. arXiv 2023, arXiv:2311.12546.
41. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
42. Bazaraa, M.S. Nonlinear Programming: Theory and Algorithms; Wiley Publishing: Hoboken, NJ, USA, 2013.
43. Chong, E.K.P.; Zak, S.H. An Introduction to Optimization. IEEE Antennas Propag. Mag. 1996, 38, 60.
Figure 1. The process of matrix construction.
Figure 2. Relationship between optimal expert weights and corresponding distances.
Figure 3. Histogram of the frequency distribution of the “deviation” of the optimal weights.
Figure 4. Histogram of the frequency distribution of the “deviation” of the mean weights.
Table 1. Scores from expert evaluations of each alternative for each criterion.

Alternative  Criterion  DM1  DM2  DM3  DM4  DM5  DM6  DM7
A1           C1          86   47   80   81   70   84   50
             C2          91   56   49   92   93   93   57
             C3          94   57   56   94   78   92   96
             C4          63   68   40   99   50   98   98
             C5          74   84   88   45   79   88   99
             C6          53   47   79   73   44   81   72
A2           C1          82   52   74   90   46   43   48
             C2          80   99   67   51   88   87   99
             C3          70   42   94   78   47   83   95
             C4          59   69   94   53   86   98   62
             C5          68   99   52   88   71   93   85
             C6          56   76   69   87   71   95   62
A3           C1          60   52   41   43   49   69   74
             C2          49   70   85   46   52   90   91
             C3          78   66   90   73   41   90   96
             C4          72   74   54   79   61   62   76
             C5          74   87   87   69   96   76   48
             C6          64   87   46   88   94   49   93
A4           C1          95   54   41   44   72   94   62
             C2          84   43   67   74   49   77   68
             C3          58   99   51   46   57   80   75
             C4          58   68   49   88   46   66   75
             C5          47   44   50   72   66   54   42
             C6          48   60   49   73   55   43   69
A5           C1          67   65   70   44   42   88   69
             C2          86   95   90   49   73   65   96
             C3          44   93   59   71   71   40   67
             C4          64   65   58   46   54   69   59
             C5          41   90   61   85   49   60   63
             C6          67   63   95   79   73   40   76
Table 2. Expert weights and distances to the “overall consistent scoring point” under the optimization model.

                               DM1        DM2        DM3        DM4        DM5        DM6        DM7        Q(w)        “Deviation”
Mean Weight     Weight         0.1429     0.1429     0.1429     0.1429     0.1429     0.1429     0.1429     3318.7385   0.62
                Distance       384.8848   483.7007   506.6619   550.7390   443.6898   501.5781   447.4841
                Multiplication 54.98355   69.10009   72.38028   78.67700   63.38425   71.65402   63.92630
Optimal Weight  Weight         0.1808     0.1366     0.1304     0.1184     0.1526     0.1327     0.1485     3315.8226   1.88 × 10⁻⁶
                Distance       368.1900   487.5099   510.6761   562.6274   436.4149   501.8384   448.5659
                Multiplication 66.59480   66.59478   66.59484   66.59476   66.59476   66.59478   66.59479
Iterations: 10

Note: For simplicity, expert weights and distances are reported to four decimal places, while the products are displayed to five decimal places. Therefore, slight discrepancies between the products of the weights and distances shown in the table and the results of the exact calculations are expected.
Table 3. Expert weights and iterations for the nonlinear equation system model.

               DM1     DM2     DM3     DM4     DM5     DM6     DM7
Expert weight  0.1808  0.1366  0.1304  0.1184  0.1526  0.1327  0.1485
Iterations: 3
Table 4. Expert weights calculated using the entropy weight method and corresponding “deviation”.

       DM1     DM2     DM3     DM4     DM5     DM6     DM7     “Deviation”
EWM    0.1410  0.1421  0.1369  0.1436  0.1336  0.1517  0.1513  0.0924
Table 5. Values of various factors for the 100 simulated datasets (partial).

Number  Decision Makers  Criteria  Alternatives
1       11               20        15
2       9                12        25
3       16               20        13
4       13               24        16
5       14               16        14
6       12               20        12
7       22               6         11
8       25               9         10
9       10               20        17
10      20               24        6
11      10               24        8
12      10               16        5
13      17               16        21
14      16               12        11
15      6                10        13
Table 6. Statistical analysis of “deviation” for 100 datasets.

                Maximum   Minimum          Range     Average   Standard Deviation
Optimal Weight  0.000738  7.11589 × 10⁻⁷   0.000737  0.000115  0.000134
Mean Weight     1.197442  0.058798         1.138644  0.452183  0.261771