Two Hesitant Multiplicative Decision-Making Algorithms and Their Application to Fog-Haze Factor Assessment Problem

Hesitant multiplicative preference relation (HMPR) is a useful tool to cope with problems in which the experts utilize Saaty's 1-9 scale to express their preference information over paired comparisons of alternatives. It is known that a lack of acceptable consistency easily leads to inconsistent conclusions; therefore, consistency improvement processes and the derivation of a reliable priority weight vector for alternatives are two significant and challenging issues for hesitant multiplicative information decision-making problems. In this paper, some new concepts are first introduced, including the HMPR, the consistent HMPR and the consistency index of an HMPR. Then, based on the logarithmic least squares model and a linear optimization model, two novel automatic iterative algorithms are proposed to enhance the consistency of an HMPR and generate its priority weights, and both are proved to be convergent. In the end, the proposed algorithms are applied to a fog-haze factor assessment problem. The comparative analysis shows that the decision-making process in our algorithms is more straightforward and efficient.


Introduction
In a group decision making (GDM) situation, the decision makers (DMs) are usually required to select the desirable alternative(s) from a collection of alternatives. To cope with this problem, DMs compare the alternatives with each other and provide their preference information, from which a judgement matrix can be constructed [1][2][3].
In order to model DMs' knowledge and preferences, preference relations have been introduced. To characterize fuzziness and uncertainty, several kinds of extended preference relations have been proposed, including the fuzzy preference relation (FPR) [4][5][6][7][8], the multiplicative preference relation (MPR) [9][10][11][12] and the linguistic preference relation (LPR) [13,14]. The experts describe their preference information with crisp numbers on a 0-1 scale in an FPR, and they utilize a 1-9 scale to express their preference information in an MPR [15]. It is noted that the elements in MPRs are crisp values. However, considering the fuzziness and hesitation involved in practical decision-making problems, it may be difficult for DMs to express their evaluated information with crisp values. To describe this imprecision, the interval multiplicative preference relation [16] and the intuitionistic multiplicative preference relation (IMPR) [17] were introduced to express decision-making preference information. Xia et al. [18] first defined the IMPR and developed some IMPR information aggregation techniques. Xia and Xu [18] introduced the concepts of the hesitant fuzzy preference relation (HFPR) and the hesitant multiplicative preference relation (HMPR) and studied their properties, followed by the construction of some methods of group decision making (GDM).
For various forms of preference relations, the two most important issues are consistency analysis and consistency improvement [19]. Ma et al. [20] developed an approach to check the inconsistency and weak transitivity of an FPR and to repair its inconsistency so as to reach weak transitivity. Herrera-Viedma et al. [21] designed a method to construct consistent FPRs from a set of original preference data. For a given MPR, Xu and Wei [16] proposed a convergent model to improve its consistency. With the help of the Abelian linearly ordered group, Xia and Chen [22] established general methods for improving consistency and reaching consensus for different types of preference relations. By using order consistency and multiplicative consistency, Jin et al. [23] proposed two new approaches for GDM with intuitionistic fuzzy preference relations (IFPRs) to produce normalized intuitionistic fuzzy weights for alternatives. Wang [24] proposed some linear programming models for deriving intuitionistic fuzzy weights. For unbalanced LPRs, Dong et al. [25] investigated an optimization model to increase the consistency level. Pei et al. [26] developed an iterative algorithm to adjust the additive consistency of intuitionistic fuzzy linguistic preference relations (IFLPRs) and to derive the intuitionistic fuzzy weights for IFLPRs. Based on β-normalization, Zhu et al. [27] utilized the optimized parameter to develop a novel approach for inconsistent HFPRs. Under the hesitant fuzzy preference information environment, Zhang et al. [28] constructed a decision support model to derive the most desirable alternative.
Similar to MPRs, studying the HMPR is an important research topic. However, few techniques in the existing literature have addressed it. Xia and Xu [18] directly used the proposed operators to aggregate HMPR information. However, it is generally known that preference relations of unacceptable consistency easily lead to inconsistent conclusions. Therefore, the decision-making results obtained by the method of Xia and Xu [18] may be unreasonable. Based on the β-normalization principle, Zhang and Wu [28] investigated a new decision-making model to generate the interval weights of alternatives from HMPRs. However, with the algorithm of Zhang and Wu [28], one must convert a normalized HMPR into several MPRs, so the decision-making process is an indirect computation process. Therefore, deriving the priority weight vector of an HMPR efficiently and improving the consistency of an HMPR are the two most important issues. This paper first introduces a new version of the HMPR, and then the consistency of HMPR and the consistency index of HMPR are presented. After that, two new algorithms are investigated to improve the consistency of HMPRs.
The remainder of this paper is organized as follows: Section 2 reviews some basic concepts. In Section 3, the definitions of HMPR, consistency of HMPR and consistency index of HMPR are presented. Two algorithms to improve the consistency level of HMPRs are investigated in Section 4. Section 5 provides an illustrative example to show the effectiveness and rationality of the proposed methods. Concluding remarks are presented in Section 6.

Preliminaries
In this section, we review some related work on the MPR and the hesitant multiplicative set (HMS). Saaty [15] first introduced the concept of the MPR, which is a useful tool to express evaluation information. For convenience, let X = {x_1, x_2, ..., x_n} be a finite set of alternatives and N = {1, 2, ..., n}.

Definition 1 ([15]). An MPR A on X is represented by a reciprocal matrix A = (a_ij)_{n×n} ⊂ X × X, where a_ij > 0, a_ij · a_ji = 1 and a_ii = 1 for all i, j ∈ N, and a_ij denotes the ratio of the preferred degree of alternative x_i with respect to x_j.

In particular, Saaty [29] proposed the 1-9 scale for a_ij: a_ij = 1 denotes that there is no difference between x_i and x_j, a_ij = 9 denotes that x_i is absolutely preferred to x_j, and the intermediate values denote intermediate degrees of preference. For an MPR A = (a_ij)_{n×n}, if there exists a crisp priority weight vector w = (w_1, w_2, ..., w_n)^T, where w_i > 0, i ∈ N, and Σ_{i=1}^n w_i = 1, such that a_ij = w_i/w_j for all i, j ∈ N, then A is called a consistent MPR. Because of the complexity and uncertainty involved in practical GDM problems, it may be difficult for DMs to express their preference information with only one crisp number; instead, it may be represented by a few different crisp values.
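As a quick illustration of the consistency notion in Definition 1, a matrix built directly from a positive weight vector as a_ij = w_i/w_j is always a consistent MPR. The following Python sketch (illustrative only, not code from the paper; the weight vector is made up) verifies Saaty's transitivity condition a_ij · a_jk = a_ik:

```python
# Hypothetical weight vector; A is consistent by construction: a_ij = w_i / w_j.
w = [0.6, 0.3, 0.1]
A = [[wi / wj for wj in w] for wi in w]

def is_consistent_mpr(A, tol=1e-9):
    """Check Saaty's consistency condition a_ij * a_jk = a_ik for all i, j, k."""
    n = len(A)
    return all(
        abs(A[i][j] * A[j][k] - A[i][k]) < tol
        for i in range(n) for j in range(n) for k in range(n)
    )

print(is_consistent_mpr(A))
```

Note that reciprocity a_ij · a_ji = 1 follows immediately from the ratio form, since (w_i/w_j)(w_j/w_i) = 1.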
Definition 2 ([18]). An HMS H on X is defined as H = {⟨x, b_H(x)⟩ | x ∈ X}, where b_H(x) is a set of values in [1/9, 9] which denotes all of the possible membership degrees of the element x ∈ X for the set H. For convenience, b = b_H(x) is called a hesitant multiplicative element (HME), and |b| denotes the number of values in b. The score function of an HME b is defined as f(b) = (Π_{γ∈b} γ)^{1/|b|}.

Hesitant Multiplicative Preference Relations and Consistency Index
In what follows, inspired by the MPR and the score function of HMEs, we define a new version of the HMPR, and then the consistency of HMPR and the consistency index of HMPR are presented.

Definition 5. An HMPR P on X can be defined as a reciprocal matrix P = (p_ij)_{n×n} ⊂ X × X, where p_ij is an HME which indicates the possible preference degrees of alternative x_i over x_j, and it satisfies p_ii = {1} and p_ji = {1/γ | γ ∈ p_ij} for all i, j ∈ N; here f(p_ij) and |p_ij| denote the score function of p_ij and the number of values in p_ij, respectively.

For an HMPR P = (p_ij)_{n×n}, from Definition 5 one can obtain that f(p_ij) · f(p_ji) = 1, i, j ∈ N. Therefore, by using the score function, we can transform the HMPR P into an MPR F = (f_ij)_{n×n}, where f_ij = f(p_ij), i, j ∈ N. Based on this transformation, the following consistency of HMPR is introduced.

Definition 6. Assume that P = (p_ij)_{n×n} is an HMPR, where p_ij is an HME; then P is called a consistent HMPR if there exists a normalized crisp weight vector w = (w_1, w_2, ..., w_n)^T, with w_i > 0, i ∈ N, and Σ_{i=1}^n w_i = 1, such that

f(p_ij) = w_i / w_j, ∀ i, j ∈ N. (2)

From Equation (2), we have ln f(p_ij) = ln w_i − ln w_j, ∀ i, j ∈ N. However, for an HMPR provided by DMs it is difficult to satisfy the consistency, and then Equation (2) may not hold; that is, there may exist i, j ∈ N such that ln f(p_ij) ≠ ln w_i − ln w_j. In this case, we can use (ln f(p_ij) − (ln w_i − ln w_j))^2 = (ln f(p_ij) − ln w_i + ln w_j)^2 to measure the deviation between ln f(p_ij) and ln w_i − ln w_j. Therefore, the values of (ln f(p_ij) − ln w_i + ln w_j)^2 (i, j ∈ N) can be used to measure the consistency level of the HMPR P = (p_ij)_{n×n}.

Definition 7. Assume that P = (p_ij)_{n×n} is an HMPR and w = (w_1, w_2, ..., w_n)^T is the priority weight vector derived from P, satisfying w_i > 0, ∀ i ∈ N, and Σ_{i=1}^n w_i = 1; then the consistency index of P is defined as

CI(P) = (2 / (n(n−1))) Σ_{1≤i<j≤n} (ln f(p_ij) − ln w_i + ln w_j)^2. (3)

The smaller the value of CI(P), the better the consistency of the HMPR P. If CI(P) = 0, then P is consistent. If we provide a threshold δ_0 and CI(P) ≤ δ_0, then P is said to be of acceptable consistency.
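The score-based transformation and the consistency index can be sketched in Python as follows. This is an illustrative sketch only: the geometric-mean score function and the 2/(n(n−1)) normalization of CI are assumptions drawn from the definitions in this section, not code from the paper.

```python
import math

def score(hme):
    """Geometric-mean score of a hesitant multiplicative element (HME)."""
    return math.prod(hme) ** (1.0 / len(hme))

def consistency_index(P, w):
    """CI(P): averaged squared log-deviation between score(p_ij) and w_i / w_j
    over the upper-triangular pairs (normalization 2/(n(n-1)) assumed)."""
    n = len(P)
    total = sum(
        (math.log(score(P[i][j])) - math.log(w[i]) + math.log(w[j])) ** 2
        for i in range(n) for j in range(i + 1, n)
    )
    return 2.0 * total / (n * (n - 1))
```

With the geometric-mean score, the reciprocity f(p_ij) · f(p_ji) = 1 holds automatically whenever p_ji collects the reciprocals of the values in p_ij, which is why only the pairs i < j need to enter CI.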

Consistency Repaired Methods for an HMPR
Motivated by the logarithmic least squares model [31], the priority weight vector can be derived by using the following optimization model:

(M-1)  min J = Σ_{i=1}^n Σ_{j=1}^n (ln f(p_ij) − ln w_i + ln w_j)^2
       s.t. w_i > 0, i ∈ N, Σ_{i=1}^n w_i = 1.

Setting u_i = ln w_i, i ∈ N, the developed optimization model (M-1) can be converted into the following optimization model:

(M-2)  min J = Σ_{i=1}^n Σ_{j=1}^n (ln f(p_ij) − u_i + u_j)^2.

According to Definition 6, if an HMPR P = (p_ij)_{n×n} is consistent, then there exists a normalized crisp weight vector w = (w_1, w_2, ..., w_n)^T such that f(p_ij) = w_i/w_j, ∀ i, j ∈ N, i.e., ln f(p_ij) − ln w_i + ln w_j = 0, i, j ∈ N, so the optimal objective value of (M-2) is zero in that case. Solving model (M-2), one can get that

w_i = (Π_{j=1}^n f(p_ij))^{1/n} / Σ_{k=1}^n (Π_{j=1}^n f(p_kj))^{1/n}, i ∈ N. (6)

Therefore, the following Algorithm 1 is designed to adjust the consistency of the HMPR P:

Algorithm 1: The consistency adjusting process of HMPR based on the logarithmic least squares model
Step 1. Let P^(0) = (p_ij^(0))_{n×n} = P = (p_ij)_{n×n}, set t = 0, and pre-set the threshold δ_0, the controlling parameter θ (0 < θ < 1) and the maximum number of iterations t_max;
Step 2. Derive the priority vector w^(t) = (w_1^(t), w_2^(t), ..., w_n^(t))^T by Equation (6);
Step 3. Determine the consistency index CI(P^(t)) by using Equation (3);
Step 4. If CI(P^(t)) ≤ δ_0 or t > t_max, then go to Step 7; otherwise, go to Step 5;
Step 5. Let P^(t+1) = (p_ij^(t+1))_{n×n}, where each value γ' in p_ij^(t+1) is obtained from the corresponding value γ in p_ij^(t) by γ' = γ^(1−θ) (w_i^(t)/w_j^(t))^θ;
Step 6. Let t = t + 1 and return to Step 2;
Step 7. Output P^(t), w^(t), CI(P^(t)) and t;
Step 8. End.
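A minimal Python sketch of this iterative process, assuming the closed-form logarithmic least squares solution (normalized row geometric means) for Equation (6), a geometric-mean score function, and an element-wise geometric blend for the adjustment step; the function names and the CI normalization are illustrative assumptions, not the paper's code:

```python
import math

def score(hme):
    """Geometric-mean score of an HME."""
    return math.prod(hme) ** (1.0 / len(hme))

def lls_weights(F):
    """Logarithmic least squares solution: normalized row geometric means."""
    n = len(F)
    g = [math.prod(F[i]) ** (1.0 / n) for i in range(n)]
    s = sum(g)
    return [gi / s for gi in g]

def consistency_index(P, w):
    """CI as an average squared log-deviation over pairs i < j (assumed form)."""
    n = len(P)
    total = sum((math.log(score(P[i][j])) - math.log(w[i]) + math.log(w[j])) ** 2
                for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

def algorithm1(P, delta0=1e-4, theta=0.5, t_max=100):
    """Iteratively blend each HME value toward the ratio w_i/w_j until CI <= delta0."""
    t = 0
    while True:
        F = [[score(p) for p in row] for row in P]
        w = lls_weights(F)
        ci = consistency_index(P, w)
        if ci <= delta0 or t > t_max:
            return P, w, ci, t
        n = len(P)
        # element-wise geometric blend: gamma^(1-theta) * (w_i/w_j)^theta
        P = [[[v ** (1 - theta) * (w[i] / w[j]) ** theta for v in P[i][j]]
              for j in range(n)] for i in range(n)]
        t += 1
```

Note that the blend preserves reciprocity: if p_ji holds the reciprocals of p_ij, then (1/v)^(1−θ)(w_j/w_i)^θ is exactly the reciprocal of v^(1−θ)(w_i/w_j)^θ.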
In the following, we prove that the developed Algorithm 1 is convergent.
Theorem 1. Let P = (p_ij)_{n×n} be an HMPR, θ (0 < θ < 1) be the adjusted parameter, and {P^(t)} be the sequence of HMPRs generated by Algorithm 1. If CI(P^(t)) is the consistency index of P^(t), then CI(P^(t+1)) < CI(P^(t)) for each t, and

lim_{t→∞} CI(P^(t)) = 0. (7)

Proof. Suppose that w^(t) = (w_1^(t), w_2^(t), ..., w_n^(t))^T is the priority weight vector of P^(t) for each t; from the above analysis, we know that w^(t) is also the optimal weight vector obtained by solving model (M-2) for P^(t). In addition, according to Step 5 in Algorithm 1, we have

ln f(p_ij^(t+1)) = (1 − θ) ln f(p_ij^(t)) + θ (ln w_i^(t) − ln w_j^(t)), i, j ∈ N.

Therefore,

ln f(p_ij^(t+1)) − ln w_i^(t) + ln w_j^(t) = (1 − θ)(ln f(p_ij^(t)) − ln w_i^(t) + ln w_j^(t)),

and since w^(t+1) minimizes the squared log-deviations for P^(t+1), it follows that CI(P^(t+1)) ≤ (1 − θ)^2 CI(P^(t)) < CI(P^(t)) for each t.
Furthermore, on the one hand, since CI(P^(t+1)) < (1 − θ) CI(P^(t)) for each t and 0 < 1 − θ < 1, we get that CI(P^(t)) ≤ (1 − θ)^t CI(P^(0)) → 0 as t → ∞. On the other hand, it is obvious that CI(P^(t)) ≥ 0; hence lim_{t→∞} CI(P^(t)) = 0. □
From Definition 6, if the HMPR P = (p_ij)_{n×n} is consistent, then Equation (2) holds, which can be rewritten as ln f(p_ij) = ln w_i − ln w_j, ∀ i, j ∈ N. However, in many real situations, due to fuzziness and uncertainty, the HMPR provided by DMs is usually inconsistent, and thus Equation (2) may not hold; i.e., there exist (i, j) ∈ N × N such that ln f(p_ij) ≠ ln w_i − ln w_j. In this case, some non-negative deviation variables d_ij^− and d_ij^+ can be introduced such that

ln f(p_ij) − ln w_i + ln w_j + d_ij^− − d_ij^+ = 0, i, j ∈ N. (11)

The smaller the values of the deviation variables d_ij^− and d_ij^+, the better the consistency of the HMPR. Therefore, we develop a linear optimization model to derive the smallest deviation variables and the priority weight vector as follows:

(M-3)  min Σ_{i=1}^n Σ_{j=1}^n (d_ij^− + d_ij^+)
       s.t. ln f(p_ij) − ln w_i + ln w_j + d_ij^− − d_ij^+ = 0, i, j ∈ N,
            d_ij^− ≥ 0, d_ij^+ ≥ 0, i, j ∈ N,
            w_i > 0, i ∈ N, Σ_{i=1}^n w_i = 1.

From Definition 5 and Equation (11), one can obtain that the deviations associated with the pairs (i, j) and (j, i) coincide. Therefore, we can obtain the following simplified optimization model over the pairs i < j:

(M-4)  min Σ_{i=1}^{n−1} Σ_{j=i+1}^n (d_ij^− + d_ij^+)
       s.t. ln f(p_ij) − ln w_i + ln w_j + d_ij^− − d_ij^+ = 0, 1 ≤ i < j ≤ n,
            d_ij^− ≥ 0, d_ij^+ ≥ 0, 1 ≤ i < j ≤ n,
            w_i > 0, i ∈ N, Σ_{i=1}^n w_i = 1.

By using MATLAB or LINGO, we obtain the priority vector w = (w_1, w_2, ..., w_n)^T and the optimal deviation values d_ij^−, d_ij^+. Therefore, the following Algorithm 2 is designed to improve the consistency of the HMPR P:

Algorithm 2: The consistency adjusting process of HMPR based on the linear optimization model
Step 1'. See Algorithm 1;
Step 2'. According to model (M-4), obtain the optimal deviation values d_ij^−, d_ij^+ and the priority weight vector w^(t) = (w_1^(t), w_2^(t), ..., w_n^(t))^T;
Step 3'-4'. See Algorithm 1;
Step 5'. Let P^(t+1) = (p_ij^(t+1))_{n×n}, where each value γ' in p_ij^(t+1) is obtained from the corresponding value γ in p_ij^(t) by γ' = γ^(1−θ) (w_i^(t)/w_j^(t))^θ;
Step 6'-8'. See Algorithm 1.
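The linear optimization step can be sketched with `scipy.optimize.linprog` instead of MATLAB or LINGO. This is an illustrative formulation, not the paper's code: the substitution u_i = ln w_i linearizes the constraints, pinning u_1 = 0 removes the additive degree of freedom in the logs, and the weights are renormalized afterwards so that they sum to one.

```python
import math
import numpy as np
from scipy.optimize import linprog

def lp_weights(F):
    """Goal-programming weights for a score matrix F = (f_ij):
    minimize sum of deviations d+ + d- subject to
    u_i - u_j + d+_ij - d-_ij = ln f_ij over pairs i < j, with u_i = ln w_i."""
    n = len(F)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(pairs)
    nv = n + 2 * m                       # variables: u_1..u_n, then d+ and d- per pair
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])
    A_eq = np.zeros((m + 1, nv))
    b_eq = np.zeros(m + 1)
    for k, (i, j) in enumerate(pairs):
        A_eq[k, i], A_eq[k, j] = 1.0, -1.0
        A_eq[k, n + 2 * k] = 1.0         # d+_ij
        A_eq[k, n + 2 * k + 1] = -1.0    # d-_ij
        b_eq[k] = math.log(F[i][j])
    A_eq[m, 0] = 1.0                     # pin u_1 = 0 (logs are shift-invariant)
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    w = np.exp(res.x[:n])
    return (w / w.sum()).tolist(), res.fun
```

The positivity constraint w_i > 0 of (M-4) is satisfied automatically because the weights are recovered by exponentiation; the objective value `res.fun` is the total deviation, which is zero exactly when the score matrix is consistent.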
Next, we will prove that the developed Algorithm 2 is convergent.
Theorem 2. Let P = (p_ij)_{n×n} be an HMPR, θ (0 < θ < 1) be the adjusted parameter, {P^(t)} be the sequence of HMPRs generated by Algorithm 2, and CI(P^(t)) be the consistency index of P^(t); then CI(P^(t+1)) < CI(P^(t)) for each t, and lim_{t→∞} CI(P^(t)) = 0.

Proof. The proof of Theorem 2 is similar to that of Theorem 1. □

Numerical Example
A city has been affected by fog-haze for a long time, and scientists found that there are four main influence factors x_1, x_2, x_3, x_4 for this city's fog-haze. In order to determine the most important influence factor and to rank these factors, a group of scientists compared the four factors with each other and provided their preference information in the form of an HMPR P = (p_ij)_{4×4} [23]. Now, we apply Algorithms 1 and 2, respectively, to select the most important factor for fog-haze. By Algorithm 1, let t = 0; the priority weight vector can be obtained as follows: w^(0) = (0.0745, 0.4695, 0.0994, 0.3566)^T.
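Since the HMPR matrix itself is not reproduced above, the following self-contained Python sketch runs the first weight-derivation step of Algorithm 1 on a hypothetical 4×4 HMPR. The entries are made up for illustration, so the resulting weights intentionally differ from the paper's w^(0):

```python
import math

# Hypothetical reciprocal HMPR over four fog-haze factors (illustrative data only,
# not the matrix P from the paper): p_ji holds the reciprocals of p_ij.
P = [
    [[1.0],       [1/3, 1/2], [3.0], [1/5]],
    [[2.0, 3.0],  [1.0],      [5.0], [1/2]],
    [[1/3],       [1/5],      [1.0], [1/7]],
    [[5.0],       [2.0],      [7.0], [1.0]],
]

def score(hme):
    """Geometric-mean score of an HME."""
    return math.prod(hme) ** (1.0 / len(hme))

# Equation (6): normalized row geometric means of the score matrix F.
F = [[score(p) for p in row] for row in P]
n = len(F)
g = [math.prod(F[i]) ** (1.0 / n) for i in range(n)]
w0 = [gi / sum(g) for gi in g]
print([round(x, 4) for x in w0])
```

From here the iteration proceeds exactly as in Algorithm 1: compute CI(P^(0)) from these weights and, if it exceeds the threshold δ_0, blend each HME value toward the corresponding ratio w_i^(0)/w_j^(0).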