Article

Generalization of Maximizing Deviation and TOPSIS Method for MADM in Simplified Neutrosophic Hesitant Fuzzy Environment

1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan
2 Department of Mathematics, Government College Women University Faisalabad, Punjab 38000, Pakistan
3 Mathematics and Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(8), 1058; https://doi.org/10.3390/sym11081058
Submission received: 25 June 2019 / Revised: 26 July 2019 / Accepted: 1 August 2019 / Published: 17 August 2019

Abstract:
With the development of the social economy and the enlarged volume of information, the application of multiple-attribute decision-making (MADM) has become increasingly complex, uncertain, and obscure. As a further generalization of the hesitant fuzzy set (HFS), the simplified neutrosophic hesitant fuzzy set (SNHFS) is an efficient tool to process vague information and contains the ideas of a single-valued neutrosophic hesitant fuzzy set (SVNHFS) and an interval neutrosophic hesitant fuzzy set (INHFS). In this paper, we propose a decision-making approach based on the maximizing deviation method and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) to solve MADM problems in which the attribute weight information is incomplete and the decision information is expressed in simplified neutrosophic hesitant fuzzy elements. Firstly, we establish an optimization model based on the maximizing deviation method, which is used to determine the attribute weights. Secondly, using the idea of TOPSIS, we determine the relative closeness coefficient of each alternative, on the basis of which we rank the considered alternatives to select the optimal one(s). Finally, we use a numerical example to show the detailed implementation procedure and the effectiveness of our method in solving MADM problems in a simplified neutrosophic hesitant fuzzy environment.

1. Introduction

The concept of neutrosophy was originally introduced by Smarandache [1] from a philosophical viewpoint. Gradually, it was discovered that, without a specific description, neutrosophic sets are not easy to apply in real applications, because a truth-membership, an indeterminacy-membership, and a falsity-membership degree, in the non-standard unit interval ]⁻0, 1⁺[, are independently assigned to each element of the set. After analyzing this difficulty, Smarandache [2] and Wang et al. [3] initiated the notion of a single-valued neutrosophic set (SVNS) and made the first ever neutrosophic publication. Ye [4] developed the concept of the simplified neutrosophic set (SNS). The SNS, a subclass of the neutrosophic set, contains the ideas of an SVNS and an interval neutrosophic set (INS), which are very useful in real science and engineering applications with the incomplete, indeterminate, and inconsistent information that commonly exists in real situations. Torra and Narukawa [5] put forward the concept of the HFS as another extension of the fuzzy set [6]. The HFS is an effective tool to represent vague information in the process of MADM, as it permits the membership degree of an element to a set to be characterized by a few possible values in [0, 1], which can accurately describe the judgment of the experts.
Ye [7] introduced the SVNHFS as an extension of the SVNS in the spirit of HFS and developed the single-valued neutrosophic hesitant fuzzy weighted averaging and weighted geometric operators. The SVNHFS represents uncertain, incomplete, and inconsistent situations in which each element has several different values characterized by a truth-membership hesitant, an indeterminacy-membership hesitant, and a falsity-membership hesitant function. For instance, when the opinion of three experts is required for a certain statement, they may state that the possibility that the statement is true is { 0.3, 0.5, 0.8 }, the possibility that it is false is { 0.1, 0.4 }, and the degree to which they are not sure is { 0.2, 0.7, 0.8 }. In single-valued neutrosophic hesitant fuzzy notation, with the components ordered as truth, indeterminacy, falsity, this can be expressed as { { 0.3, 0.5, 0.8 }, { 0.2, 0.7, 0.8 }, { 0.1, 0.4 } }. Liu and Luo [8] discussed the certainty, score, and accuracy functions of SVNHFSs and proposed the single-valued neutrosophic hesitant fuzzy ordered weighted averaging and hybrid weighted averaging operators. Sahin and Liu [9] proposed the correlation coefficient for single-valued neutrosophic hesitant fuzzy information and successfully applied it to decision-making problems. Li and Zhang [10] introduced Choquet aggregation operators with single-valued neutrosophic hesitant fuzzy information for MADM. Juan-Juan et al. [11] developed a decision-making technique using the geometric weighted Choquet integral Heronian mean operator for SVNHFSs. Wang and Li [12] developed the generalized prioritized weighted average and generalized prioritized weighted geometric operators for SVNHFSs, and further developed an approach based on the proposed operators to solve MADM problems. Recently, Akram et al. [13,14,15,16] and Naz et al. [17,18,19] put forward certain novel decision-making techniques in the framework of extended fuzzy set theory.
Furthermore, Liu and Shi [20] proposed the concept of the INHFS by combining the INS with the HFS and developed the generalized weighted, generalized ordered weighted, and generalized hybrid weighted operators for the proposed interval neutrosophic hesitant fuzzy information. Ye [21] and Kakati et al. [22] proposed correlation coefficients and Choquet integrals, respectively, for INHFSs. Mahmood et al. [23] discussed vector similarity measures for SNHFSs. In practical terms, the SNHFS expresses the truth-membership, indeterminacy-membership, and falsity-membership degrees by SVNHFSs or INHFSs. Classical sets, fuzzy sets, intuitionistic fuzzy sets, SVNSs, INSs, SNSs, and HFSs are particular cases of SNHFSs. In modeling vague and uncertain information, the SNHFS is therefore more flexible and practical.
In the theory of decision analysis, MADM is one of the most important branches, and several beneficial models and approaches related to decision analysis have been developed. However, due to limited time, lack of data or knowledge, and the limited expertise of the expert about the problem, the MADM process under simplified neutrosophic hesitant fuzzy circumstances encounters situations where the information about attribute weights is completely unknown or only incompletely known. The existing approaches are not suitable for handling these situations. Furthermore, among the useful MADM methodologies, the maximizing deviation method and TOPSIS provide a ranking approach in which the best alternative has the farthest distance from the negative-ideal solution (NIS) and the shortest distance from the positive-ideal solution (PIS). Motivated by this, in this paper we propose an approach combining maximizing deviation and TOPSIS to objectively determine the attribute weights and rank the alternatives when the attribute weights are completely unknown or only partly known. We propose new distance measures and discuss the application of SNHFSs to MADM. In the framework of TOPSIS, we construct a novel generalized method under the simplified neutrosophic hesitant fuzzy environment. Compared to existing work, SNHFSs can depict more general decision-making situations.
The paper is structured as follows: Section 2 establishes a simplified neutrosophic hesitant fuzzy MADM approach based on maximizing deviation and TOPSIS. In Section 3, a numerical example is given to demonstrate the effectiveness of our model and method, and finally we draw conclusions in Section 4.
The SVNHFS, as a more flexible general formal framework, extends the concepts of the fuzzy set [6], intuitionistic fuzzy set [24], SVNS [3] and HFS [25]. Ye [7] proposed the following definition of the SVNHFS.
Definition 1.
([7]). Let Z be a fixed set. An SVNHFS n on Z is defined as
n = { ⟨ z, t(z), i(z), f(z) ⟩ | z ∈ Z },
where t(z), i(z) and f(z) are sets of some values in [0, 1], representing the possible truth-membership hesitant degree, indeterminacy-membership hesitant degree and falsity-membership hesitant degree of the element z to n, respectively; t(z) = { γ_1, γ_2, …, γ_l }, where γ_1, γ_2, …, γ_l are the elements of t(z); i(z) = { δ_1, δ_2, …, δ_p }, where δ_1, δ_2, …, δ_p are the elements of i(z); f(z) = { η_1, η_2, …, η_q }, where η_1, η_2, …, η_q are the elements of f(z), for every z ∈ Z; and l, p, q denote, respectively, the numbers of hesitant fuzzy elements in t, i, f.
For simplicity, the expression n ( z ) = { t ( z ) , i ( z ) , f ( z ) } is called a single-valued neutrosophic hesitant fuzzy element (SVNHFE), which we represent by simplified symbol n = { t , i , f } .
Definition 2.
([7]). Let n = { t, i, f }, n_1 = { t_1, i_1, f_1 } and n_2 = { t_2, i_2, f_2 } be three SVNHFEs and ς > 0. Then their operations are defined as follows:
1. n_1 ⊕ n_2 = ∪_{γ_1 ∈ t_1, δ_1 ∈ i_1, η_1 ∈ f_1, γ_2 ∈ t_2, δ_2 ∈ i_2, η_2 ∈ f_2} { { γ_1 + γ_2 − γ_1 γ_2 }, { δ_1 δ_2 }, { η_1 η_2 } };
2. n_1 ⊗ n_2 = ∪_{γ_1 ∈ t_1, δ_1 ∈ i_1, η_1 ∈ f_1, γ_2 ∈ t_2, δ_2 ∈ i_2, η_2 ∈ f_2} { { γ_1 γ_2 }, { δ_1 + δ_2 − δ_1 δ_2 }, { η_1 + η_2 − η_1 η_2 } };
3. ς n = ∪_{γ ∈ t, δ ∈ i, η ∈ f} { { 1 − (1 − γ)^ς }, { δ^ς }, { η^ς } };
4. n^ς = ∪_{γ ∈ t, δ ∈ i, η ∈ f} { { γ^ς }, { 1 − (1 − δ)^ς }, { 1 − (1 − η)^ς } }.
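As an illustration, the four operations of Definition 2 can be sketched in Python. The tuple-of-lists representation and the function names below are our own assumptions, not notation from the paper; the union over element combinations is realized with `itertools.product`.

```python
from itertools import product

# An SVNHFE is modelled here as a tuple (t, i, f) of lists of degrees in [0, 1].

def svnhfe_sum(n1, n2):
    """n1 ⊕ n2: probabilistic sum on truth; product on indeterminacy and falsity."""
    (t1, i1, f1), (t2, i2, f2) = n1, n2
    return ([g1 + g2 - g1 * g2 for g1, g2 in product(t1, t2)],
            [d1 * d2 for d1, d2 in product(i1, i2)],
            [e1 * e2 for e1, e2 in product(f1, f2)])

def svnhfe_product(n1, n2):
    """n1 ⊗ n2: product on truth; probabilistic sum on indeterminacy and falsity."""
    (t1, i1, f1), (t2, i2, f2) = n1, n2
    return ([g1 * g2 for g1, g2 in product(t1, t2)],
            [d1 + d2 - d1 * d2 for d1, d2 in product(i1, i2)],
            [e1 + e2 - e1 * e2 for e1, e2 in product(f1, f2)])

def svnhfe_scale(n, s):
    """ς·n for ς > 0."""
    t, i, f = n
    return ([1 - (1 - g) ** s for g in t], [d ** s for d in i], [e ** s for e in f])

def svnhfe_power(n, s):
    """n^ς for ς > 0."""
    t, i, f = n
    return ([g ** s for g in t], [1 - (1 - d) ** s for d in i], [1 - (1 - e) ** s for e in f])
```

Each result component contains one value per combination of inputs, mirroring the union in the definition.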

2. TOPSIS and Maximizing Deviation Method for Simplified Neutrosophic Hesitant Fuzzy Multi-Attribute Decision-Making

In this section, we propose a normalization technique and distance measures for SNHFSs, and on this basis we develop a new decision-making approach based on maximizing deviation and TOPSIS under simplified neutrosophic hesitant fuzzy circumstances, to explore the application of SNHFSs to MADM.

2.1. TOPSIS and Maximizing Deviation Method for Single-Valued Neutrosophic Hesitant Fuzzy Multi-Attribute Decision-Making

In this subsection, we restrict SNHFSs to SVNHFSs and develop a new decision-making approach, combining SVNHFSs with the maximizing deviation method, to solve MADM problems in a single-valued neutrosophic hesitant fuzzy environment.

2.1.1. Description of the MADM Problem

Consider a MADM problem with a discrete set of m alternatives { A_1, A_2, …, A_m } and a set of n attributes P = { P_1, P_2, …, P_n }. The evaluation information of the ith alternative with respect to the jth attribute is a SVNHFE n_ij = { t_ij, i_ij, f_ij }, where t_ij, i_ij and f_ij indicate the preference degree, the uncertainty degree, and the falsity degree, respectively, of the decision maker's judgment that the ith alternative satisfies the jth attribute. Then the single-valued neutrosophic hesitant fuzzy decision matrix (SVNHFDM) N can be constructed as follows:
N =
⎡ n_11  n_12  ⋯  n_1n ⎤
⎢ n_21  n_22  ⋯  n_2n ⎥
⎢   ⋮     ⋮    ⋱    ⋮  ⎥
⎣ n_m1  n_m2  ⋯  n_mn ⎦
Assume that the attributes have different importance; the weight vector of all attributes is defined as w = (w_1, w_2, …, w_n)^T, where 0 ≤ w_j ≤ 1 and Σ_{j=1}^{n} w_j = 1, with w_j representing the importance degree of the attribute P_j. Due to the complexity of practical decision-making problems, the attribute weight information is frequently incomplete. For ease, let ℑ be the set of the known information about attribute weights, which can be constructed in the following forms, for i ≠ j:
(i) w_i ≥ w_j (weak ranking);
(ii) w_i − w_j ≥ α_i, α_i > 0 (strict ranking);
(iii) w_i − w_j ≥ w_k − w_l, for j ≠ k ≠ l (ranking of differences);
(iv) w_i ≥ α_i w_j, 0 ≤ α_i ≤ 1 (ranking with multiples);
(v) α_i ≤ w_i ≤ α_i + ξ_i, 0 ≤ α_i ≤ α_i + ξ_i ≤ 1 (interval form).
In the comparison of SVNHFEs, the numbers of their corresponding elements may be unequal. To handle this situation, we normalize the SVNHFEs as follows:
Suppose that n = { t, i, f } is a SVNHFE; then γ̄ = ϖγ⁺ + (1 − ϖ)γ⁻, δ̄ = ϖδ⁺ + (1 − ϖ)δ⁻ and η̄ = ϖη⁺ + (1 − ϖ)η⁻ are the added truth-membership, indeterminacy-membership and falsity-membership degrees, respectively, where γ⁻ and γ⁺ are the minimum and maximum elements of t, δ⁻ and δ⁺ are the minimum and maximum elements of i, η⁻ and η⁺ are the minimum and maximum elements of f, and ϖ ∈ [0, 1] is a parameter assigned by the expert according to his risk preference.
For the normalization of SVNHFE, different values of ϖ produce different results for the added truth-membership, the indeterminacy-membership and the falsity-membership degree. Usually, there are three cases of the preference of the expert:
  • If ϖ = 0, the pessimistic expert may add the minimum truth-membership degree γ⁻, the minimum indeterminacy-membership degree δ⁻ and the minimum falsity-membership degree η⁻.
  • If ϖ = 0.5, the neutral expert may add the truth-membership degree (γ⁻ + γ⁺)/2, the indeterminacy-membership degree (δ⁻ + δ⁺)/2 and the falsity-membership degree (η⁻ + η⁺)/2.
  • If ϖ = 1, the optimistic expert may add the maximum truth-membership degree γ⁺, the maximum indeterminacy-membership degree δ⁺ and the maximum falsity-membership degree η⁺.
For instance, suppose we have two SVNHFEs n_1 = { t_1, i_1, f_1 } = { { 0.3, 0.5 }, { 0.4, 0.6, 0.8 }, { 0.5, 0.7 } } and n_2 = { t_2, i_2, f_2 } = { { 0.1, 0.4, 0.5 }, { 0.6, 0.7 }, { 0.2, 0.6, 0.9 } }. Here #t_1 = 2, #i_1 = 3, #f_1 = 2, #t_2 = 3, #i_2 = 2 and #f_2 = 3. Clearly, #t_1 ≠ #t_2, #i_1 ≠ #i_2 and #f_1 ≠ #f_2. The truth-membership and falsity-membership degrees of n_1, as well as the indeterminacy-membership degree of n_2, need to be pre-treated.
If ϖ = 0 , then we may add the minimum truth-membership degree or the indeterminacy-membership degree or the falsity-membership degree for the target object. For the SVNHFE n 1 , the truth-membership and falsity-membership degree of n 1 can be attained as { 0.3 , 0.3 , 0.5 } and { 0.5 , 0.5 , 0.7 } , i.e., n 1 can be normalized as n 1 = { { 0.3 , 0.3 , 0.5 } , { 0.4 , 0.6 , 0.8 } , { 0.5 , 0.5 , 0.7 } } . For the SVNHFE n 2 , the indeterminacy-membership degree of n 2 can be obtained as {0.6,0.6,0.7}, i.e.,  n 2 is normalized as n 2 = { { 0.1 , 0.4 , 0.5 } , { 0.6 , 0.6 , 0.7 } , { 0.2 , 0.6 , 0.9 } } .
If ϖ = 0.5 , then we may add the average truth-membership degree or the indeterminacy-membership degree or the falsity-membership degree for the target object. For the SVNHFE n 1 , the truth-membership and falsity-membership degree of n 1 can be attained as { 0.3 , 0.4 , 0.5 } and { 0.5 , 0.6 , 0.7 } , i.e., n 1 can be normalized as n 1 = { { 0.3 , 0.4 , 0.5 } , { 0.4 , 0.6 , 0.8 } , { 0.5 , 0.6 , 0.7 } } . For the SVNHFE n 2 , the indeterminacy-membership degree of n 2 can be obtained as {0.6,0.65,0.7}, i.e.,  n 2 is normalized as n 2 = { { 0.1 , 0.4 , 0.5 } , { 0.6 , 0.65 , 0.7 } , { 0.2 , 0.6 , 0.9 } } .
If ϖ = 1 , then we may add the maximum truth-membership degree or the indeterminacy-membership degree or the falsity-membership degree for the normalization. For the SVNHFE n 1 , the truth-membership and falsity-membership degree of n 1 can be attained as { 0.3 , 0.5 , 0.5 } and { 0.5 , 0.7 , 0.7 } , i.e., n 1 is normalized as n 1 = { { 0.3 , 0.5 , 0.5 } , { 0.4 , 0.6 , 0.8 } , { 0.5 , 0.7 , 0.7 } } . For the SVNHFE n 2 , the indeterminacy-membership degree of n 2 can be attained as {0.6,0.7,0.7}, i.e.,  n 2 is normalized as n 2 = { { 0.1 , 0.4 , 0.5 } , { 0.6 , 0.7 , 0.7 } , { 0.2 , 0.6 , 0.9 } } .
The algorithm for the normalization of SVNHFEs is given in Algorithm 1.
Algorithm 1 The algorithm for the normalization of SVNHFEs.
INPUT: Two SVNHFEs n 1 = ( t 1 , i 1 , f 1 ) , n 2 = ( t 2 , i 2 , f 2 ) and the value of ϖ .
OUTPUT: The normalization of n 1 = ( t 1 , i 1 , f 1 ) and n 2 = ( t 2 , i 2 , f 2 ) .
  1:  Count the number of elements of n 1 and n 2 , i.e.,  # t 1 , # i 1 , # f 1 , # t 2 , # i 2 , # f 2 ;
  2:  Determine the minimum and the maximum of the elements of n 1 and n 2 ;
  3:  t = min_{i=1,2} #t_i, i = min_{i=1,2} #i_i, f = min_{i=1,2} #f_i;
  4:  if #t_1 = #t_2 then go to line 18;
  5:  else if t = #t_1 then
  6:     n = #t_2 − #t_1;
  7:     Determine the value of γ̄ for t_1;
  8:     for i = 1:1:n do
  9:        t_1 = t_1 ∪ { γ̄ };
10:     end for
11:  else
12:     n = #t_1 − #t_2;
13:     Determine the value of γ̄ for t_2;
14:     for i = 1:1:n do
15:        t_2 = t_2 ∪ { γ̄ };
16:     end for
17:  end if
18:  if #i_1 = #i_2 then go to line 32;
19:  else if i = #i_1 then
20:     n = #i_2 − #i_1;
21:     Determine the value of δ̄ for i_1;
22:     for i = 1:1:n do
23:        i_1 = i_1 ∪ { δ̄ };
24:     end for
25:  else
26:     n = #i_1 − #i_2;
27:     Determine the value of δ̄ for i_2;
28:     for i = 1:1:n do
29:        i_2 = i_2 ∪ { δ̄ };
30:     end for
31:  end if
32:  if #f_1 = #f_2 then stop;
33:  else if f = #f_1 then
34:     n = #f_2 − #f_1;
35:     Determine the value of η̄ for f_1;
36:     for i = 1:1:n do
37:        f_1 = f_1 ∪ { η̄ };
38:     end for
39:  else
40:     n = #f_1 − #f_2;
41:     Determine the value of η̄ for f_2;
42:     for i = 1:1:n do
43:        f_2 = f_2 ∪ { η̄ };
44:     end for
45:  end if
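The normalization procedure above can be sketched compactly in Python. The representation (tuples of lists) and the helper names are illustrative assumptions; the added degree is computed as ϖ·max + (1 − ϖ)·min, as described above.

```python
def pad(values, target_len, w):
    """Extend `values` to `target_len` with the added degree w*max + (1-w)*min, then sort."""
    added = w * max(values) + (1 - w) * min(values)
    return sorted(values + [added] * (target_len - len(values)))

def normalize(n1, n2, w=0.5):
    """Return both SVNHFEs with the lengths of corresponding components equalized."""
    out1, out2 = [], []
    for c1, c2 in zip(n1, n2):            # truth, indeterminacy, falsity components
        length = max(len(c1), len(c2))
        out1.append(pad(c1, length, w))
        out2.append(pad(c2, length, w))
    return tuple(out1), tuple(out2)
```

Running this on the worked example with ϖ = 0.5 reproduces the pre-treated elements given in the text.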

2.1.2. The Distance Measures for SVNHFSs

Definition 3.
Let n 1 = { t 1 , i 1 , f 1 } and n 2 = { t 2 , i 2 , f 2 } be two normalized SVNHFEs, then the single-valued neutrosophic hesitant fuzzy Hamming distance between n 1 and n 2 can be defined as follows:
d_1(n_1, n_2) = (1/3) [ (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − γ_2^{σ(ς)}| + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − δ_2^{σ(ς)}| + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − η_2^{σ(ς)}| ],    (1)
where #t = #t_1 = #t_2, #i = #i_1 = #i_2 and #f = #f_1 = #f_2; γ_i^{σ(ς)}, δ_i^{σ(ς)} and η_i^{σ(ς)} are the ςth largest values in t_i, i_i and f_i, respectively (i = 1, 2).
In addition, the single-valued neutrosophic hesitant fuzzy Euclidean distance is defined as:
d_2(n_1, n_2) = [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − γ_2^{σ(ς)}|² + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − δ_2^{σ(ς)}|² + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − η_2^{σ(ς)}|² ) ]^{1/2}.    (2)
By using the geometric distance model of [26], the above distances can be generalized as follows:
d(n_1, n_2) = [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − γ_2^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − δ_2^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − η_2^{σ(ς)}|^α ) ]^{1/α},    (3)
where α is a constant with α > 0. Based on the value of α, the relationship among d(n_1, n_2), d_1(n_1, n_2) and d_2(n_1, n_2) can be deduced as follows:
  • If α = 1 , then the distance d ( n 1 , n 2 ) = d 1 ( n 1 , n 2 ) .
  • If α = 2 , then the distance d ( n 1 , n 2 ) = d 2 ( n 1 , n 2 ) .
Therefore, the distance d ( n 1 , n 2 ) is a generalization of the single-valued neutrosophic hesitant fuzzy Hamming distance d 1 ( n 1 , n 2 ) and the single-valued neutrosophic hesitant fuzzy Euclidean distance d 2 ( n 1 , n 2 ) .
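A minimal sketch of the generalized distance of Equation (3), assuming both SVNHFEs have already been normalized to equal component lengths; the function name and the tuple-of-lists representation are our own assumptions.

```python
def gen_distance(n1, n2, alpha=1.0):
    """Generalized distance between two normalized SVNHFEs n = (t, i, f)."""
    total = 0.0
    for c1, c2 in zip(n1, n2):            # truth, indeterminacy, falsity components
        # pair up the sigma-ordered (sorted) values of each component
        total += sum(abs(a - b) ** alpha for a, b in zip(sorted(c1), sorted(c2))) / len(c1)
    return (total / 3.0) ** (1.0 / alpha)
```

With `alpha=1` this is the Hamming distance d_1, and with `alpha=2` the Euclidean distance d_2.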
Theorem 1.
Let n 1 = { t 1 , i 1 , f 1 } and n 2 = { { 1 } , { 0 } , { 0 } } be two SVNHFEs, then the generalized distance d ( n 1 , n 2 ) can be calculated as:
d(n_1, n_2) = [ (1/3)( (1/#t_1) Σ_{γ∈t_1} (1 − γ)^α + (1/#i_1) Σ_{δ∈i_1} δ^α + (1/#f_1) Σ_{η∈f_1} η^α ) ]^{1/α},
where n_2 in the distance computation is understood as the normalization outcome of n_2 = { {1}, {0}, {0} } obtained by comparison with n_1 (i.e., padded to the lengths #t_1, #i_1, #f_1).
Proof. 
Using (3), the generalized distance d ( n 1 , n 2 ) can be calculated as:
d(n_1, n_2) = [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − γ_2^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − δ_2^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − η_2^{σ(ς)}|^α ) ]^{1/α}
= [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − 1|^α + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − 0|^α + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − 0|^α ) ]^{1/α}
= [ (1/3)( (1/#t) Σ_{ς=1}^{#t} (1 − γ_1^{σ(ς)})^α + (1/#i) Σ_{ς=1}^{#i} (δ_1^{σ(ς)})^α + (1/#f) Σ_{ς=1}^{#f} (η_1^{σ(ς)})^α ) ]^{1/α}
= [ (1/3)( (1/#t_1) Σ_{ς=1}^{#t_1} (1 − γ_1^{σ(ς)})^α + (1/#i_1) Σ_{ς=1}^{#i_1} (δ_1^{σ(ς)})^α + (1/#f_1) Σ_{ς=1}^{#f_1} (η_1^{σ(ς)})^α ) ]^{1/α}
= [ (1/3)( (1/#t_1) Σ_{γ∈t_1} (1 − γ)^α + (1/#i_1) Σ_{δ∈i_1} δ^α + (1/#f_1) Σ_{η∈f_1} η^α ) ]^{1/α}.
 □
Theorem 2.
Let n 1 = { t 1 , i 1 , f 1 } and n 2 = { { 0 } , { 1 } , { 1 } } be two SVNHFEs, then the generalized distance d ( n 1 , n 2 ) can be calculated as:
d(n_1, n_2) = [ (1/3)( (1/#t_1) Σ_{γ∈t_1} γ^α + (1/#i_1) Σ_{δ∈i_1} (1 − δ)^α + (1/#f_1) Σ_{η∈f_1} (1 − η)^α ) ]^{1/α},
where n_2 in the distance computation is understood as the normalization outcome of n_2 = { {0}, {1}, {1} } obtained by comparison with n_1 (i.e., padded to the lengths #t_1, #i_1, #f_1).
Proof. 
Using (3), the generalized distance d ( n 1 , n 2 ) can be calculated as:
d(n_1, n_2) = [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − γ_2^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − δ_2^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − η_2^{σ(ς)}|^α ) ]^{1/α}
= [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_1^{σ(ς)} − 0|^α + (1/#i) Σ_{ς=1}^{#i} |δ_1^{σ(ς)} − 1|^α + (1/#f) Σ_{ς=1}^{#f} |η_1^{σ(ς)} − 1|^α ) ]^{1/α}
= [ (1/3)( (1/#t) Σ_{ς=1}^{#t} (γ_1^{σ(ς)})^α + (1/#i) Σ_{ς=1}^{#i} (1 − δ_1^{σ(ς)})^α + (1/#f) Σ_{ς=1}^{#f} (1 − η_1^{σ(ς)})^α ) ]^{1/α}
= [ (1/3)( (1/#t_1) Σ_{ς=1}^{#t_1} (γ_1^{σ(ς)})^α + (1/#i_1) Σ_{ς=1}^{#i_1} (1 − δ_1^{σ(ς)})^α + (1/#f_1) Σ_{ς=1}^{#f_1} (1 − η_1^{σ(ς)})^α ) ]^{1/α}
= [ (1/3)( (1/#t_1) Σ_{γ∈t_1} γ^α + (1/#i_1) Σ_{δ∈i_1} (1 − δ)^α + (1/#f_1) Σ_{η∈f_1} (1 − η)^α ) ]^{1/α}.
 □

2.1.3. Computation of Optimal Weights Using Maximizing Deviation Method

Case I: Completely unknown attribute weight information
We construct an optimization model on the basis of the maximizing deviation approach to determine the optimal relative weights of the attributes under SVNHFSs. For the attribute P_j ∈ P, the deviation of the alternative A_i from all the other alternatives can be represented as:
D_ij(w) = Σ_{k=1}^{m} d(n_ij, n_kj) w_j,  i = 1, 2, …, m,  j = 1, 2, …, n,
where d(n_ij, n_kj) = [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α}.
Let
D_j(w) = Σ_{i=1}^{m} D_ij(w) = Σ_{i=1}^{m} Σ_{k=1}^{m} w_j [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α},
j = 1, 2, …, n. Then D_j(w) indicates the deviation value of all alternatives from the other alternatives for the attribute P_j ∈ P.
On the basis of the above analysis, to select the weight vector w which maximizes all deviation values for all the attributes, a non-linear programming model is constructed as follows:
(M-1)  max D(w) = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{k=1}^{m} w_j [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α}
        s.t.  w_j ≥ 0,  j = 1, 2, …, n,  Σ_{j=1}^{n} w_j² = 1.
To solve the above model, we construct the Lagrange function:
L(w, ξ) = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{k=1}^{m} w_j [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} + (ξ/2)( Σ_{j=1}^{n} w_j² − 1 ),
where ξ is a real number, representing the Lagrange multiplier variable. Then we compute the partial derivatives of L and let:
∂L/∂w_j = Σ_{i=1}^{m} Σ_{k=1}^{m} [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} + ξ w_j = 0,
∂L/∂ξ = (1/2)( Σ_{j=1}^{n} w_j² − 1 ) = 0.
By solving above equations, an exact and simple formula for determining the attribute weights can be obtained as follows:
w_j* = ( Σ_{i=1}^{m} Σ_{k=1}^{m} [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} ) / √( Σ_{j=1}^{n} ( Σ_{i=1}^{m} Σ_{k=1}^{m} [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} )² ).
Because the weights of the attributes should satisfy the normalization condition, so we obtain the normalized attribute weights:
w_j = ( Σ_{i=1}^{m} Σ_{k=1}^{m} [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} ) / ( Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{k=1}^{m} [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α} ).    (4)
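The normalized weights of Equation (4) amount to summing the pairwise distances column by column and normalizing the column totals. A hedged sketch, where `dist` is an assumed distance function on SVNHFEs (e.g., the generalized distance of Equation (3)):

```python
def deviation_weights(matrix, dist, alpha=1.0):
    """Maximizing-deviation attribute weights for an m x n decision matrix.

    matrix[i][j] is the (normalized) evaluation of alternative i on attribute j;
    dist(a, b, alpha) is a distance between two such evaluations.
    """
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        # total deviation of all alternatives from all others on attribute j
        raw.append(sum(dist(matrix[i][j], matrix[k][j], alpha)
                       for i in range(m) for k in range(m)))
    total = sum(raw)
    return [r / total for r in raw]
```

Attributes on which the alternatives differ more receive larger weights, which is exactly the maximizing-deviation rationale.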
Case II: Partly known attribute weight information
However, there are situations in which the information about the weight vector is only partially known instead of completely known. For such situations, on the basis of the set ℑ of known weight information, the constrained optimization model can be designed as:
(M-2)  max D(w) = Σ_{j=1}^{n} Σ_{i=1}^{m} Σ_{k=1}^{m} w_j [ (1/3)( (1/#t) Σ_{ς=1}^{#t} |γ_ij^{σ(ς)} − γ_kj^{σ(ς)}|^α + (1/#i) Σ_{ς=1}^{#i} |δ_ij^{σ(ς)} − δ_kj^{σ(ς)}|^α + (1/#f) Σ_{ς=1}^{#f} |η_ij^{σ(ς)} − η_kj^{σ(ς)}|^α ) ]^{1/α}
        s.t.  w ∈ ℑ,  w_j ≥ 0,  j = 1, 2, …, n,  Σ_{j=1}^{n} w_j = 1,
where ℑ is the set of constraint conditions that the weight values w_j should satisfy according to the requirements in real situations. The model (M-2) is a linear programming model. By solving this model, we obtain the optimal solution w = (w_1, w_2, …, w_n)^T, which can be used as the attribute weight vector.
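Model (M-2) would normally be handed to a general LP solver. As a hedged illustration only, for the special case where ℑ consists of interval constraints of form (v), α_j ≤ w_j ≤ α_j + ξ_j, the optimum can be found greedily without a solver: start every weight at its lower bound and pour the remaining mass into the attributes in decreasing order of their deviation coefficients.

```python
def solve_interval_lp(dev, lower, upper):
    """Maximize sum(dev[j] * w[j]) subject to sum(w) = 1 and
    lower[j] <= w[j] <= upper[j] (interval-form weight information only)."""
    w = list(lower)
    rest = 1.0 - sum(lower)
    assert rest >= 0 and sum(upper) >= 1.0, "constraints must be feasible"
    # fill the most deviation-heavy attributes first, up to their upper bounds
    for j in sorted(range(len(dev)), key=lambda j: dev[j], reverse=True):
        add = min(upper[j] - lower[j], rest)
        w[j] += add
        rest -= add
    return w
```

This fractional-knapsack argument is valid only for box constraints plus the simplex condition; with ranking constraints of forms (i)-(iv), a proper LP solver is required.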

2.1.4. TOPSIS Method

Recently, several MADM techniques have been established, such as TOPSIS [27], TODIM [28], VIKOR [29], MULTIMOORA [30] and the minimum deviation method [31]. The TOPSIS method is attractive because only limited subjective input is required from the experts. It is well known that TOPSIS is a useful and simple approach that helps an expert choose the optimal alternative according to both the minimal distance from the positive-ideal solution and the maximal distance from the negative-ideal solution. Therefore, after obtaining the attribute weights by the maximizing deviation method, in this section we develop a MADM approach based on the TOPSIS model under single-valued neutrosophic hesitant fuzzy circumstances. The PIS A⁺ and the NIS A⁻ can be computed as:
A⁺ = { n_1⁺, n_2⁺, …, n_n⁺ }    (5)
   = { { {1}, {0}, {0} }, { {1}, {0}, {0} }, …, { {1}, {0}, {0} } }.    (6)
A⁻ = { n_1⁻, n_2⁻, …, n_n⁻ }    (7)
   = { { {0}, {1}, {1} }, { {0}, {1}, {1} }, …, { {0}, {1}, {1} } }.    (8)
Based on Equation (3), Theorems 1 and 2, the separation measures d i + and d i of each alternative from the single-valued neutrosophic hesitant fuzzy PIS A + and the NIS A , respectively, are determined as:
d_i⁺ = Σ_{j=1}^{n} d(n_ij, n_j⁺) w_j = Σ_{j=1}^{n} d(n_ij, { {1}, {0}, {0} }) w_j    (9)
     = Σ_{j=1}^{n} w_j [ (1/3)( (1/#t_ij) Σ_{γ∈t_ij} (1 − γ)^α + (1/#i_ij) Σ_{δ∈i_ij} δ^α + (1/#f_ij) Σ_{η∈f_ij} η^α ) ]^{1/α},    (10)
d_i⁻ = Σ_{j=1}^{n} d(n_ij, n_j⁻) w_j = Σ_{j=1}^{n} d(n_ij, { {0}, {1}, {1} }) w_j    (11)
     = Σ_{j=1}^{n} w_j [ (1/3)( (1/#t_ij) Σ_{γ∈t_ij} γ^α + (1/#i_ij) Σ_{δ∈i_ij} (1 − δ)^α + (1/#f_ij) Σ_{η∈f_ij} (1 − η)^α ) ]^{1/α},    (12)
where i = 1 , 2 , , m .
The relative closeness coefficient of an alternative A i with respect to the single-valued neutrosophic hesitant fuzzy PIS A + can be defined as follows:
RC(A_i) = d_i⁻ / (d_i⁺ + d_i⁻),    (13)
where 0 ≤ RC(A_i) ≤ 1, i = 1, 2, …, m. The ranking order of all alternatives can be determined according to the closeness coefficient RC(A_i), and the best one(s) can be selected from the set of alternatives.
The scheme of the proposed MADM technique is given in Figure 1. The detailed algorithm is constructed as follows:
Step 1.
Construct the decision matrix N = [ n i j ] m × n for the MADM problem, where the entries n i j ( i = 1 , 2 , , m ; j = 1 , 2 , , n ) are SVNHFEs, given by the decision makers, for the alternative A i according to the attribute P j .
Step 2.
If the attribute weight information is completely unknown, determine the attribute weights w = (w_1, w_2, …, w_n)^T on the basis of Equation (4), and turn to Step 4. Otherwise, go to Step 3.
Step 3.
If the information about the attribute weights is partially known, use model (M-2) to determine the attribute weights w = (w_1, w_2, …, w_n)^T.
Step 4.
Based on Equations (6) and (8), we determine the corresponding single-valued neutrosophic hesitant fuzzy PIS A + and the single-valued neutrosophic hesitant fuzzy NIS A , respectively.
Step 5.
Based on Equations (10) and (12), we compute the separation measures d i + and d i of each alternative A i from the single-valued neutrosophic hesitant fuzzy PIS A + and the single-valued neutrosophic hesitant fuzzy NIS A , respectively.
Step 6.
Based on Equation (13), we determine the relative closeness coefficient R C ( A i ) ( i = 1 , 2 , , m ) of each alternative A i to the single-valued neutrosophic hesitant fuzzy PIS A + .
Step 7.
Rank the alternatives A i ( i = 1 , 2 , , m ) based on the relative closeness coefficients R C ( A i ) ( i = 1 , 2 , , m ) and select the optimal one(s).
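Steps 4-7 can be sketched as follows, using the closed forms of Theorems 1 and 2 for the separations from the PIS { {1}, {0}, {0} } and the NIS { {0}, {1}, {1} }, and Equation (13) for the closeness coefficient. The data layout (each entry a tuple of lists) and the function names are illustrative assumptions.

```python
def separations(n, alpha=1.0):
    """Return (d+, d-) of one normalized SVNHFE n = (t, i, f)."""
    t, i, f = n
    d_pos = ((sum((1 - g) ** alpha for g in t) / len(t)
              + sum(d ** alpha for d in i) / len(i)
              + sum(e ** alpha for e in f) / len(f)) / 3) ** (1 / alpha)
    d_neg = ((sum(g ** alpha for g in t) / len(t)
              + sum((1 - d) ** alpha for d in i) / len(i)
              + sum((1 - e) ** alpha for e in f) / len(f)) / 3) ** (1 / alpha)
    return d_pos, d_neg

def rank(matrix, weights, alpha=1.0):
    """Return (closeness coefficients, alternative indices in ranking order)."""
    rc = []
    for row in matrix:
        dp = dn = 0.0
        for n, w in zip(row, weights):
            p, q = separations(n, alpha)
            dp += w * p                   # weighted separation from the PIS
            dn += w * q                   # weighted separation from the NIS
        rc.append(dn / (dp + dn))
    order = sorted(range(len(rc)), key=lambda i: rc[i], reverse=True)
    return rc, order
```

An alternative equal to the PIS on every attribute gets RC = 1, and one equal to the NIS gets RC = 0, so larger closeness coefficients indicate better alternatives.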

2.2. TOPSIS and Maximizing Deviation Method for Interval Neutrosophic Hesitant Fuzzy Multi-Attribute Decision-Making

In this subsection, we restrict SNHFSs to INHFSs and put forward a novel decision-making approach, combining INHFSs with the maximizing deviation method, to solve MADM problems in an interval neutrosophic hesitant fuzzy environment.
Definition 4
([20]). Let Z be a fixed set, an INHFS n ˜ on Z is defined as:
ñ = { ⟨ z, t̃(z), ĩ(z), f̃(z) ⟩ | z ∈ Z },
where t̃(z), ĩ(z) and f̃(z) are sets of some interval values in [0, 1], indicating the possible truth-membership hesitant degree, indeterminacy-membership hesitant degree and falsity-membership hesitant degree of the element z to ñ, respectively; t̃(z) = { γ̃_1, γ̃_2, …, γ̃_l }, where γ̃_1, γ̃_2, …, γ̃_l are the elements of t̃(z); ĩ(z) = { δ̃_1, δ̃_2, …, δ̃_p }, where δ̃_1, δ̃_2, …, δ̃_p are the elements of ĩ(z); f̃(z) = { η̃_1, η̃_2, …, η̃_q }, where η̃_1, η̃_2, …, η̃_q are the elements of f̃(z), for every z ∈ Z; and l, p, q denote, respectively, the numbers of interval-valued hesitant fuzzy elements in t̃, ĩ, f̃.
For convenience, the expression n ˜ ( z ) = { t ˜ ( z ) , i ˜ ( z ) , f ˜ ( z ) } is called an interval neutrosophic hesitant fuzzy element (INHFE), which we represent by simplified symbol n ˜ = { t ˜ , i ˜ , f ˜ } .
Similar to Section 2.1, we consider a MADM problem, where A = { A_1, A_2, …, A_m } is a discrete set of m alternatives and P = { P_1, P_2, …, P_n } is a set of n attributes. The evaluation information of the ith alternative with respect to the jth attribute is an INHFE ñ_ij = { t̃_ij, ĩ_ij, f̃_ij }, where t̃_ij, ĩ_ij and f̃_ij indicate the interval-valued preference degree, the interval-valued uncertainty degree, and the interval-valued falsity degree, respectively, of the expert's judgment that the ith alternative satisfies the jth attribute. Then the interval neutrosophic hesitant fuzzy decision matrix (INHFDM) Ñ can be constructed as follows:
Ñ =
⎡ ñ_11  ñ_12  ⋯  ñ_1n ⎤
⎢ ñ_21  ñ_22  ⋯  ñ_2n ⎥
⎢   ⋮     ⋮    ⋱    ⋮  ⎥
⎣ ñ_m1  ñ_m2  ⋯  ñ_mn ⎦
In the comparison of INHFEs, the numbers of their corresponding elements may be unequal. To handle this situation, we normalize the INHFEs as follows:
Suppose that ñ = { t̃, ĩ, f̃ } is an INHFE; then γ̃̄ = ϖγ̃⁺ + (1 − ϖ)γ̃⁻, δ̃̄ = ϖδ̃⁺ + (1 − ϖ)δ̃⁻ and η̃̄ = ϖη̃⁺ + (1 − ϖ)η̃⁻ are the added truth-membership, indeterminacy-membership and falsity-membership degrees, respectively, where γ̃⁻ and γ̃⁺, δ̃⁻ and δ̃⁺, and η̃⁻ and η̃⁺ are the minimum and maximum elements of t̃, ĩ and f̃, respectively, and ϖ ∈ [0, 1] is a parameter assigned by the expert according to his risk preference.
For the normalization of INHFE, different values of ϖ produce different results for the added truth-membership, the indeterminacy-membership and the falsity-membership degree. Usually, there are three cases of the preference of the expert:
  • If ϖ = 0, the pessimistic expert may add the minimum truth-membership degree γ̃⁻, the minimum indeterminacy-membership degree δ̃⁻ and the minimum falsity-membership degree η̃⁻.
  • If ϖ = 0.5, the neutral expert may add the truth-membership degree (γ̃⁻ + γ̃⁺)/2, the indeterminacy-membership degree (δ̃⁻ + δ̃⁺)/2 and the falsity-membership degree (η̃⁻ + η̃⁺)/2.
  • If ϖ = 1, the optimistic expert may add the maximum truth-membership degree γ̃⁺, the maximum indeterminacy-membership degree δ̃⁺ and the maximum falsity-membership degree η̃⁺.
The algorithm for the normalization of INHFEs is given in Algorithm 2.
Algorithm 2 The algorithm for the normalization of INHFEs.
INPUT: Two INHFEs n ˜ 1 = ( t ˜ 1 , i ˜ 1 , f ˜ 1 ) and n ˜ 2 = ( t ˜ 2 , i ˜ 2 , f ˜ 2 ) and the value of ϖ .
OUTPUT: The normalization of n ˜ 1 = ( t ˜ 1 , i ˜ 1 , f ˜ 1 ) and n ˜ 2 = ( t ˜ 2 , i ˜ 2 , f ˜ 2 ) .
  1:  Count the number of elements of n ˜ 1 and n ˜ 2 , i.e.,  # t ˜ 1 , # i ˜ 1 , # f ˜ 1 , # t ˜ 2 , # i ˜ 2 , # f ˜ 2 ;
  2:  Determine the minimum and the maximum of the elements of n ˜ 1 and n ˜ 2 ;
  3:   t ˜ = min i = 1 , 2 # t ˜ i , i ˜ = min i = 1 , 2 # i ˜ i , f ˜ = min i = 1 , 2 # f ˜ i
  4:  if # t ˜ 1 = # t ˜ 2 then break;
  5:  else if t ˜ = # t ˜ 1 then
  6:   n = # t ˜ 2 − # t ˜ 1 ;
  7:  Determine the value of γ ˜ ¯ for t ˜ 1 ;
  8:  for i = 1:1:n do
  9:     t ˜ 1 = t ˜ 1 ∪ { γ ˜ ¯ } ;
10:  end for
11:  else
12:   n = # t ˜ 1 − # t ˜ 2 ;
13:  Determine the value of γ ˜ ¯ for t ˜ 2 ;
14:  for i = 1:1:n do
15:     t ˜ 2 = t ˜ 2 ∪ { γ ˜ ¯ } ;
16:  end for
17:  end if
18:  if # i ˜ 1 = # i ˜ 2 then break;
19:  else if i ˜ = # i ˜ 1 then
20:   n = # i ˜ 2 − # i ˜ 1 ;
21:  Determine the value of δ ˜ ¯ for i ˜ 1 ;
22:  for i = 1:1:n do
23:     i ˜ 1 = i ˜ 1 ∪ { δ ˜ ¯ } ;
24:  end for
25:  else
26:   n = # i ˜ 1 − # i ˜ 2 ;
27:  Determine the value of δ ˜ ¯ for i ˜ 2 ;
28:  for i = 1:1:n do
29:     i ˜ 2 = i ˜ 2 ∪ { δ ˜ ¯ } ;
30:  end for
31:  end if
32:  if # f ˜ 1 = # f ˜ 2 then break;
33:  else if f ˜ = # f ˜ 1 then
34:   n = # f ˜ 2 − # f ˜ 1 ;
35:  Determine the value of η ˜ ¯ for f ˜ 1 ;
36:  for i = 1:1:n do
37:     f ˜ 1 = f ˜ 1 ∪ { η ˜ ¯ } ;
38:  end for
39:  else
40:   n = # f ˜ 1 − # f ˜ 2 ;
41:  Determine the value of η ˜ ¯ for f ˜ 2 ;
42:  for i = 1:1:n do
43:     f ˜ 2 = f ˜ 2 ∪ { η ˜ ¯ } ;
44:  end for
45:  end if
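As a concrete illustration, the padding step of Algorithm 2 can be sketched in Python. This is a minimal sketch under stated assumptions, not the authors' code: intervals are modeled as (low, high) tuples, the added element follows the rule ϖ γ ˜ + + ( 1 − ϖ ) γ ˜ − given above, and the order of intervals is approximated by their midpoints (an assumption, since no interval ordering is fixed here).

```python
def normalize_part(a, b, varpi=0.5):
    """Pad the shorter of two hesitant interval lists to the longer length,
    using the added element varpi*max + (1 - varpi)*min on each bound."""
    a, b = list(a), list(b)
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    if len(short) < len(long_):
        mid = lambda iv: (iv[0] + iv[1]) / 2   # midpoint orders the intervals
        g_min = min(short, key=mid)            # minimum element of the part
        g_max = max(short, key=mid)            # maximum element of the part
        added = (varpi * g_max[0] + (1 - varpi) * g_min[0],
                 varpi * g_max[1] + (1 - varpi) * g_min[1])
        short.extend([added] * (len(long_) - len(short)))
    return a, b

def normalize_inhfe(n1, n2, varpi=0.5):
    """Normalize two INHFEs n = (t, i, f); each part is a list of intervals."""
    pairs = [normalize_part(p1, p2, varpi) for p1, p2 in zip(n1, n2)]
    return tuple(p[0] for p in pairs), tuple(p[1] for p in pairs)

# Demo: the truth part of n1 and the falsity part of n2 get padded.
n1 = ([(0.3, 0.4)], [(0.1, 0.2)], [(0.2, 0.3), (0.4, 0.5)])
n2 = ([(0.5, 0.6), (0.6, 0.7)], [(0.2, 0.3)], [(0.1, 0.2)])
m1, m2 = normalize_inhfe(n1, n2)   # every part now has a matching length
```

With ϖ = 0.5 and a single-element part, the added interval is simply that element repeated, which matches the neutral-expert case above.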

2.2.1. The Distance Measures for INHFSs

Definition 5.
Let n ˜ 1 = { t ˜ 1 , i ˜ 1 , f ˜ 1 } and n ˜ 2 = { t ˜ 2 , i ˜ 2 , f ˜ 2 } be two normalized INHFEs, then we define the interval neutrosophic hesitant fuzzy Hamming distance between n ˜ 1 and n ˜ 2 as follows:
d ˜ 1 ( n ˜ 1 , n ˜ 2 ) = 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ 1 σ ( ς ) L γ ˜ 2 σ ( ς ) L + γ ˜ 1 σ ( ς ) U γ ˜ 2 σ ( ς ) U + 1 # i ˜ ς = 1 # i ˜ δ ˜ 1 σ ( ς ) L δ ˜ 2 σ ( ς ) L + δ ˜ 1 σ ( ς ) U δ ˜ 2 σ ( ς ) U + 1 # f ˜ ς = 1 # f ˜ η ˜ 1 σ ( ς ) L η ˜ 2 σ ( ς ) L + η ˜ 1 σ ( ς ) U η ˜ 2 σ ( ς ) U ,
where # t ˜ = # t ˜ 1 = # t ˜ 2 , # i ˜ = # i ˜ 1 = # i ˜ 2 and # f ˜ = # f ˜ 1 = # f ˜ 2 . γ ˜ i σ ( ς ) , δ ˜ i σ ( ς ) and η ˜ i σ ( ς ) are the ςth largest values in t ˜ i , i ˜ i and f ˜ i , respectively ( i = 1 , 2 ) .
In addition, the interval neutrosophic hesitant fuzzy Euclidean distance is defined as:
d ˜ 2 ( n ˜ 1 , n ˜ 2 ) = 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ 1 σ ( ς ) L γ ˜ 2 σ ( ς ) L 2 + γ ˜ 1 σ ( ς ) U γ ˜ 2 σ ( ς ) U 2 + 1 # i ˜ ς = 1 # i ˜ δ ˜ 1 σ ( ς ) L δ ˜ 2 σ ( ς ) L 2 + δ ˜ 1 σ ( ς ) U δ ˜ 2 σ ( ς ) U 2 + 1 # f ˜ ς = 1 # f ˜ η ˜ 1 σ ( ς ) L η ˜ 2 σ ( ς ) L 2 + η ˜ 1 σ ( ς ) U η ˜ 2 σ ( ς ) U 2 1 2 .
By using the geometric distance model of [26], the above distances can be generalized as follows:
d ˜ ( n ˜ 1 , n ˜ 2 ) = 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ 1 σ ( ς ) L γ ˜ 2 σ ( ς ) L α + γ ˜ 1 σ ( ς ) U γ ˜ 2 σ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ 1 σ ( ς ) L δ ˜ 2 σ ( ς ) L α + δ ˜ 1 σ ( ς ) U δ ˜ 2 σ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ 1 σ ( ς ) L η ˜ 2 σ ( ς ) L α + η ˜ 1 σ ( ς ) U η ˜ 2 σ ( ς ) U α 1 α ,
where α is a constant with α > 0 . Based on the value of α, the relationships among d ˜ ( n ˜ 1 , n ˜ 2 ) , d ˜ 1 ( n ˜ 1 , n ˜ 2 ) and d ˜ 2 ( n ˜ 1 , n ˜ 2 ) can be deduced as:
  • If α = 1 , then the distance d ˜ ( n ˜ 1 , n ˜ 2 ) = d ˜ 1 ( n ˜ 1 , n ˜ 2 ) .
  • If α = 2 , then the distance d ˜ ( n ˜ 1 , n ˜ 2 ) = d ˜ 2 ( n ˜ 1 , n ˜ 2 ) .
Therefore, the distance d ˜ ( n ˜ 1 , n ˜ 2 ) is a generalization of the interval neutrosophic hesitant fuzzy Hamming distance d ˜ 1 ( n ˜ 1 , n ˜ 2 ) and the interval neutrosophic hesitant fuzzy Euclidean distance d ˜ 2 ( n ˜ 1 , n ˜ 2 ) .
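Under the same assumptions as before (intervals as (low, high) tuples, sorting standing in for the σ-permutation that pairs the ςth largest values), the generalized distance above can be sketched as follows. This is an illustrative sketch; the two elements must already be normalized to equal part lengths, e.g., with Algorithm 2.

```python
def gen_distance(n1, n2, alpha=2.0):
    """Generalized distance between two normalized INHFEs n = (t, i, f).
    alpha = 1 gives the Hamming distance d~1; alpha = 2 the Euclidean d~2."""
    total = 0.0
    for part1, part2 in zip(n1, n2):   # truth, indeterminacy, falsity parts
        s = sum(abs(a[0] - b[0]) ** alpha + abs(a[1] - b[1]) ** alpha
                for a, b in zip(sorted(part1), sorted(part2)))
        total += s / len(part1)        # average over the part's length
    return (total / 6.0) ** (1.0 / alpha)
```

For example, the distance from { { [ 0.5 , 0.5 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } to the ideal element { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } is 1/6 for α = 1.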
Theorem 3.
Let n ˜ 1 = { t ˜ 1 , i ˜ 1 , f ˜ 1 } and n ˜ 2 = { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } be two INHFEs, then the generalized distance d ˜ ( n ˜ 1 , n ˜ 2 ) can be calculated as:
d ˜ ( n ˜ 1 , n ˜ 2 ) = 1 6 1 # t ˜ 1 γ ˜ t 1 ˜ ( 1 γ ˜ L α + 1 γ ˜ U α ) + 1 # i ˜ 1 δ ˜ i 1 ˜ ( ( δ ˜ L ) α + ( δ ˜ U ) α ) + 1 # f ˜ 1 η ˜ f 1 ˜ ( ( η ˜ L ) α + ( η ˜ U ) α ) 1 α .
where n ˜ 2 is replaced by its normalization outcome, obtained by comparing n ˜ 1 and n ˜ 2 .
Theorem 4.
Let n ˜ 1 = { t ˜ 1 , i ˜ 1 , f ˜ 1 } and n ˜ 2 = { { [ 0 , 0 ] } , { [ 1 , 1 ] } , { [ 1 , 1 ] } } be two INHFEs, then the generalized distance d ˜ ( n ˜ 1 , n ˜ 2 ) can be calculated as:
d ˜ ( n ˜ 1 , n ˜ 2 ) = 1 6 1 # t ˜ 1 γ ˜ t ˜ 1 ( ( γ ˜ L ) α + ( γ ˜ U ) α ) + 1 # i ˜ 1 δ ˜ i ˜ 1 1 δ ˜ L α + 1 δ ˜ U α + 1 # f ˜ 1 η f ˜ 1 1 η ˜ L α + 1 η ˜ U α 1 α .
where n ˜ 2 is replaced by its normalization outcome, obtained by comparing n ˜ 1 and n ˜ 2 .

2.2.2. Computation of Optimal Weights Using Maximizing Deviation Method

Case I: Completely unknown information on attribute weights
Using the maximizing deviation method, we construct an optimization model to determine the optimal relative weights of the attributes in the interval neutrosophic hesitant fuzzy setting. For the attribute P j ∈ P , the deviation of the alternative A i from all the other alternatives can be represented as:
D ˜ i j ( w ) = ∑ k = 1 m d ˜ ( n ˜ i j , n ˜ k j ) w j , i = 1 , 2 , … , m , j = 1 , 2 , … , n
where
d ˜ ( n ˜ i j , n ˜ k j ) = 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α .
Let
D ˜ j ( w ) = i = 1 m D ˜ i j ( w ) = i = 1 m k = 1 m w j 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α ,
j = 1 , 2 , … , n . Then D ˜ j ( w ) represents the deviation value of all alternatives from the other alternatives for the attribute P j ∈ P .
On the basis of the analysis above, to select the weight vector w which maximizes all deviation values for all the attributes, a non-linear programming model is constructed as follows:
( M 3 ) { max D ˜ ( w ) = j = 1 n i = 1 m k = 1 m w j 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α s . t . w j 0 , j = 1 , 2 , , n , j = 1 n w j 2 = 1
To solve the above model, we construct the Lagrange function:
L ( w , ξ ) = j = 1 n i = 1 m k = 1 m 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α w j + ξ 2 j = 1 n w j 2 1
where ξ is a real number, representing the Lagrange multiplier variable. Then we compute the partial derivatives of L and let:
L w j = i = 1 m k = 1 m 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α + ξ w j = 0 L ξ = 1 2 j = 1 n w j 2 1 = 0
By solving the above equations, we obtain an exact and simple formula for determining the attribute weights:
w j * = i = 1 m k = 1 m ( 1 6 ( 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α ) ) 1 α j = 1 n [ i = 1 m k = 1 m ( 1 6 ( 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α ) ) 1 α ] 2
As the weights of the attributes should satisfy the normalization condition, so we obtain the normalized attribute weights:
w j = i = 1 m k = 1 m ( 1 6 ( 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α ) ) 1 α j = 1 n i = 1 m k = 1 m ( 1 6 ( 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α ) ) 1 α
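The closed-form weight above is simply each attribute's share of the total pairwise deviation between alternatives. This can be sketched as follows; the distance function is supplied by the caller (any of the measures defined earlier), and the demo uses plain scalar scores purely for readability. Names are illustrative, not from any published code.

```python
def deviation_weights(matrix, dist):
    """matrix[i][j]: normalized evaluation of alternative i on attribute j.
    Returns the normalized maximizing-deviation weight vector."""
    m, n = len(matrix), len(matrix[0])
    # total pairwise deviation per attribute; summing over both (i, k) and
    # (k, i) doubles every term, which cancels in the normalization below
    dev = [sum(dist(matrix[i][j], matrix[k][j])
               for i in range(m) for k in range(m))
           for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]

# Toy check with scalar scores: an attribute on which all alternatives agree
# contributes no deviation, so it receives zero weight.
w = deviation_weights([[0.2, 0.9], [0.8, 0.9]], lambda a, b: abs(a - b))
```

This mirrors the intuition of the maximizing deviation method: attributes that discriminate more strongly between the alternatives receive larger weights.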
Case II: Partly known information on attribute weights
In some situations, however, the information about the weight vector is only partially known. For such situations, using the set ℑ of known weight information, the constrained optimization model can be designed as:
( M 4 ) { max D ˜ ( w ) = j = 1 n i = 1 m k = 1 m w j 1 6 1 # t ˜ ς = 1 # t ˜ γ ˜ i j σ ˜ ( ς ) L γ ˜ k j σ ˜ ( ς ) L α + γ ˜ i j σ ˜ ( ς ) U γ ˜ k j σ ˜ ( ς ) U α + 1 # i ˜ ς = 1 # i ˜ δ ˜ i j σ ˜ ( ς ) L δ ˜ k j σ ˜ ( ς ) L α + δ ˜ i j σ ˜ ( ς ) U δ ˜ k j σ ˜ ( ς ) U α + 1 # f ˜ ς = 1 # f ˜ η ˜ i j σ ˜ ( ς ) L η ˜ k j σ ˜ ( ς ) L α + η ˜ i j σ ˜ ( ς ) U η ˜ k j σ ˜ ( ς ) U α 1 α s . t . w , w j 0 , j = 1 , 2 , , n , j = 1 n w j = 1
where ℑ is the set of constraint conditions that the weight values w j should satisfy according to the requirements of real situations. By solving the linear programming model ( M 4 ) , we obtain the optimal solution w = ( w 1 , w 2 , … , w n ) T , which is used as the attribute weight vector.
In the interval neutrosophic hesitant fuzzy environment, the PIS A ˜ + and the NIS A ˜ − can be defined as follows:
A ˜ + = { n ˜ 1 + , n ˜ 2 + , … , n ˜ n + } = { { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } , { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } , … , { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } } .
A ˜ − = { n ˜ 1 − , n ˜ 2 − , … , n ˜ n − } = { { { [ 0 , 0 ] } , { [ 1 , 1 ] } , { [ 1 , 1 ] } } , { { [ 0 , 0 ] } , { [ 1 , 1 ] } , { [ 1 , 1 ] } } , … , { { [ 0 , 0 ] } , { [ 1 , 1 ] } , { [ 1 , 1 ] } } } .
On the basis of Equation (14) and Theorems 3 and 4, the separation measures d ˜ i + and d ˜ i − of each alternative from the interval neutrosophic hesitant fuzzy PIS A ˜ + and the interval neutrosophic hesitant fuzzy NIS A ˜ − , respectively, are determined as:
(15) d ˜ i + = j = 1 n d ˜ ( n ˜ i j , n ˜ j + ) w j = j = 1 n d ˜ ( n ˜ i j , { { [ 1 , 1 ] } , { [ 0 , 0 ] } , { [ 0 , 0 ] } } ) w j (16) = j = 1 n w j 1 6 1 # t ˜ ij γ ˜ t ˜ ij 1 γ ˜ L α + 1 γ ˜ U α + 1 # i ˜ ij δ ˜ i ˜ ij ( δ ˜ L ) α + ( δ ˜ U ) α + 1 # f ˜ ij η ˜ f ˜ ij ( ( η ˜ L ) α + ( η ˜ U ) α ) 1 α ,
(17) d ˜ i − = j = 1 n d ˜ ( n ˜ i j , n ˜ j − ) w j = j = 1 n d ˜ ( n ˜ i j , { { [ 0 , 0 ] } , { [ 1 , 1 ] } , { [ 1 , 1 ] } } ) w j (18) = j = 1 n w j 1 6 1 # t ˜ ij γ ˜ t ˜ ij ( γ ˜ L ) α + ( γ ˜ U ) α + 1 # i ˜ ij δ ˜ i ˜ ij 1 δ ˜ L α + 1 δ ˜ U α + 1 # f ˜ ij η ˜ f ˜ ij 1 η ˜ L α + 1 η ˜ U α 1 α ,
where i = 1 , 2 , , m . The relative closeness coefficient of an alternative A ˜ i with respect to the PIS A ˜ + is defined as:
R C ( A ˜ i ) = d ˜ i − d ˜ i + + d ˜ i −
where 0 ≤ R C ( A ˜ i ) ≤ 1 , i = 1 , 2 , … , m . The ranking order of all alternatives can be determined according to the closeness coefficients R C ( A ˜ i ) , and the optimal one(s) can be selected from the set of alternatives.
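The separation and closeness step above can be sketched compactly; here d_plus_rows[i][j] and d_minus_rows[i][j] are assumed to hold the per-attribute distances of alternative i to the PIS and NIS (all names are illustrative, and the distances themselves come from Theorems 3 and 4):

```python
def relative_closeness(d_plus_rows, d_minus_rows, w):
    """Weighted separations and relative closeness RC = d- / (d+ + d-)."""
    rc = []
    for dp, dm in zip(d_plus_rows, d_minus_rows):
        sep_plus = sum(wj * d for wj, d in zip(w, dp))    # distance to PIS
        sep_minus = sum(wj * d for wj, d in zip(w, dm))   # distance to NIS
        rc.append(sep_minus / (sep_plus + sep_minus))
    return rc
```

An alternative lying closer to the NIS than to the PIS gets RC below 0.5, and vice versa; for instance, per-attribute distances of 0.2 to the PIS and 0.8 to the NIS under equal weights give RC = 0.8.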

3. An Illustrative Example

To examine the validity and feasibility of the developed decision-making approach, in this section we consider a smartphone accessories supplier selection problem in a realistic scenario. In the smartphone field, the Chinese market is the largest in the world, and the competition is so fierce that several companies have been unable to avoid bankruptcy. In this market, a firm that does not want to be defeated must choose excellent accessories suppliers that fit its supply requirements and technology strategies. A new smartphone design firm, the "Hua Xin" incorporated company, wants to choose a few accessories suppliers to guarantee its productive throughput. For simplicity, we assume only one kind of accessory, the Central Processing Unit (CPU), which is an essential part of a smartphone. The firm identifies five CPU suppliers (alternatives) A i ( i = 1 , 2 , … , 5 ) through the analysis of their planned level of effort and a market investigation. The evaluation criteria are (1) P 1 : cost; (2) P 2 : technical ability; (3) P 3 : product performance; (4) P 4 : service performance. Because of the uncertainty of the information, the evaluation information given by the three experts is expressed as SVNHFEs. The SVNHFDM is given in Table 1. The hierarchical structure of the constructed decision-making problem is depicted in Figure 2.
Taking ϖ = 0.5 and α = 2 , we normalize the SVNHFDM using Algorithm 1. The normalized SVNHFDM is given in Table 2.
Now to obtain the optimal accessory supplier, we use the developed method, which contains the following two cases:
Case 1: The information about the attribute weights is completely unknown. The MADM approach for accessory supplier selection then includes the following steps:
Step 1:
On the basis of Equation (4), we get the optimal weight vector:
w = ( 0.2994 , 0.2367 , 0.2521 , 0.2118 ) T
Step 2:
Based on the decision matrix of Table 2, we get the normalization of the reference points A + and A − as follows:
A + = { n 1 + , n 2 + , n 3 + , n 4 + } = { { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } } ,
A − = { n 1 − , n 2 − , n 3 − , n 4 − } = { { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } } .
Step 3:
On the basis of Equations (10) and (12), we determine the geometric distances d i + = d ( A i , A + ) and d i − = d ( A i , A − ) for the alternatives A i ( i = 1 , 2 , … , 5 ) , as shown in Table 3.
Step 4:
Use Equation (13) to determine the relative closeness of each alternative A i with respect to the single-valued neutrosophic hesitant fuzzy PIS A + :
R C ( A 1 ) = 0.5251 , R C ( A 2 ) = 0.4896 , R C ( A 3 ) = 0.5394 , R C ( A 4 ) = 0.5600 , R C ( A 5 ) = 0.5927 .
Step 5:
On the basis of the relative closeness coefficients R C ( A i ) , we rank the alternatives A i ( i = 1 , 2 , … , 5 ) as A 5 ≻ A 4 ≻ A 3 ≻ A 1 ≻ A 2 . Thus, the optimal alternative (CPU supplier) is A 5 .
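As a quick check on Steps 4 and 5, sorting the reported closeness coefficients in decreasing order reproduces the stated Case 1 ordering. This minimal snippet uses only the labels and values given in Step 4:

```python
# Relative closeness coefficients from Step 4 (Case 1).
rc = {"A1": 0.5251, "A2": 0.4896, "A3": 0.5394, "A4": 0.5600, "A5": 0.5927}

# Rank alternatives by decreasing RC.
ranking = sorted(rc, key=rc.get, reverse=True)
print(" > ".join(ranking))  # A5 > A4 > A3 > A1 > A2
```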
Case 2: The information of the attribute weights is partly known, and the known weight information is as follows:
ℑ = { 0.15 ≤ w 1 ≤ 0.2 , 0.16 ≤ w 2 ≤ 0.18 , 0.3 ≤ w 3 ≤ 0.35 , 0.3 ≤ w 4 ≤ 0.45 , ∑ j = 1 4 w j = 1 }
Step 1:
Use the model (M-2) to establish the single-objective programming model as follows:
( M 2 ) max D ( w ) = 5.6368 w 1 + 4.4554 w 2 + 4.7465 w 3 + 3.9864 w 4 s . t . w ∈ ℑ , w j ≥ 0 , j = 1 , 2 , 3 , 4 , ∑ j = 1 4 w j = 1
By solving this model, we obtain the attributes weight vector:
w = ( 0.2000 , 0.1600 , 0.3400 , 0.3000 ) T
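Model (M-2) is a linear program with box constraints from ℑ and a sum-to-one constraint, so a greedy continuous-knapsack pass solves it exactly: start every weight at its lower bound, then spend the remaining budget on the attributes in decreasing order of their objective coefficients. The sketch below (illustrative function name, not from the paper) reproduces the weight vector above:

```python
def solve_box_lp(coeffs, lowers, uppers):
    """Maximize sum(c_j * w_j) s.t. l_j <= w_j <= u_j and sum(w_j) = 1.
    Greedy continuous-knapsack pass; exact because the objective is linear."""
    w = list(lowers)
    slack = 1.0 - sum(lowers)
    # raise the most valuable coordinates first
    for j in sorted(range(len(coeffs)), key=lambda j: -coeffs[j]):
        step = min(uppers[j] - w[j], slack)
        w[j] += step
        slack -= step
    return w

# Coefficients from D(w) in model (M-2); bounds from the set ℑ above.
w = solve_box_lp([5.6368, 4.4554, 4.7465, 3.9864],
                 [0.15, 0.16, 0.30, 0.30],
                 [0.20, 0.18, 0.35, 0.45])
print([round(x, 4) for x in w])  # [0.2, 0.16, 0.34, 0.3]
```

The same routine applied to the coefficients of model (M-4) in the interval case yields the weight vector reported there.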
Step 2:
According to the decision matrix of Table 2, the normalization of the reference points A + and A − can be obtained as follows:
A + = { n 1 + , n 2 + , n 3 + , n 4 + } = { { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } , { { 1 , 1 } , { 0 , 0 } , { 0 , 0 , 0 } } } ,
A − = { n 1 − , n 2 − , n 3 − , n 4 − } = { { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } , { { 0 , 0 } , { 1 , 1 } , { 1 , 1 , 1 } } } .
Step 3:
Based on Equations (10) and (12), we determine the geometric distances d ( A i , A + ) and d ( A i , A − ) for the alternatives A i ( i = 1 , 2 , … , 5 ) , as shown in Table 4.
Step 4:
Use Equation (13) to determine the relative closeness of each alternative A i with respect to the single-valued neutrosophic hesitant fuzzy PIS A + :
R C ( A 1 ) = 0.4972 , R C ( A 2 ) = 0.5052 , R C ( A 3 ) = 0.5199 , R C ( A 4 ) = 0.5808 , R C ( A 5 ) = 0.5883 .
Step 5:
Based on the relative closeness coefficients R C ( A i ) , we rank the alternatives A i ( i = 1 , 2 , … , 5 ) as A 5 ≻ A 4 ≻ A 3 ≻ A 2 ≻ A 1 . Thus, the optimal alternative (CPU supplier) is A 5 .
Taking ϖ = 0.5 , we normalize the single-valued neutrosophic hesitant fuzzy decision matrix and compute the closeness coefficients of the alternatives for different values of α . The comparison results are given in Figure 3.
The analysis process under interval neutrosophic hesitant fuzzy circumstances is as follows:
In the above smartphone accessories supplier selection problem, suppose the information provided by the experts is expressed as INHFEs, as in Table 5. Then, to choose the optimal CPU supplier, we proceed with the developed approach.
Taking ϖ = 0.5 and α = 2 , we normalize the INHFDM using Algorithm 2. The normalized INHFDM is given in Table 6.
Case 1: The information about the attribute weights is completely unknown. The MADM method for accessory supplier selection then consists of the following steps:
Step 1:
On the basis of Equation (14), we get the optimal weight vector:
w = ( 0.2963 , 0.2562 , 0.2388 , 0.2087 ) T
Step 2:
According to the decision matrix of Table 6, the normalization of the reference points A ˜ + and A ˜ − can be obtained as follows:
A ˜ + = { n ˜ 1 + , n ˜ 2 + , n ˜ 3 + , n ˜ 4 + } = { { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } } ,
A ˜ − = { n ˜ 1 − , n ˜ 2 − , n ˜ 3 − , n ˜ 4 − } = { { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } } .
Step 3:
Based on Equations (15) and (17), we determine the geometric distances d ˜ ( A i , A ˜ + ) and d ˜ ( A i , A ˜ − ) for the alternatives A i ( i = 1 , 2 , … , 5 ) , as shown in Table 7.
Step 4:
Use Equation (19) to determine the relative closeness of each alternative A ˜ i with respect to the interval neutrosophic hesitant fuzzy PIS A ˜ + :
R C ( A ˜ 1 ) = 0.5169 , R C ( A ˜ 2 ) = 0.4592 , R C ( A ˜ 3 ) = 0.4969 , R C ( A ˜ 4 ) = 0.5368 , R C ( A ˜ 5 ) = 0.5643 .
Step 5:
Based on the relative closeness coefficients R C ( A ˜ i ) , we rank the alternatives A ˜ i ( i = 1 , 2 , … , 5 ) as A 5 ≻ A 4 ≻ A 1 ≻ A 3 ≻ A 2 . Thus, the optimal alternative (CPU supplier) is A 5 .
Case 2: The information of the attribute weights is partly known, and the known weight information is given as follows:
ℑ = { 0.15 ≤ w 1 ≤ 0.2 , 0.16 ≤ w 2 ≤ 0.18 , 0.3 ≤ w 3 ≤ 0.35 , 0.3 ≤ w 4 ≤ 0.45 , ∑ j = 1 4 w j = 1 }
Step 1:
Use the model (M-4) to establish the single-objective programming model as follows:
( M 4 ) max D ( w ) = 4.5556 w 1 + 4.2000 w 2 + 3.3222 w 3 + 3.3111 w 4 s . t . w ∈ ℑ , w j ≥ 0 , j = 1 , 2 , 3 , 4 , ∑ j = 1 4 w j = 1
By solving this model, we obtain the weight vector of attributes:
w = ( 0.2000 , 0.1800 , 0.3200 , 0.3000 ) T
Step 2:
According to the decision matrix of Table 6, we can obtain the normalization of the reference points A ˜ + and A ˜ − as follows:
A ˜ + = { n ˜ 1 + , n ˜ 2 + , n ˜ 3 + , n ˜ 4 + } = { { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } , { { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 0 , 0 ] , [ 0 , 0 ] , [ 0 , 0 ] } } } ,
A ˜ − = { n ˜ 1 − , n ˜ 2 − , n ˜ 3 − , n ˜ 4 − } = { { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } , { { [ 0 , 0 ] , [ 0 , 0 ] } , { [ 1 , 1 ] , [ 1 , 1 ] } , { [ 1 , 1 ] , [ 1 , 1 ] , [ 1 , 1 ] } } } .
Step 3:
Use Equations (15) and (17) to determine the geometric distances d ˜ ( A i , A ˜ + ) and d ˜ ( A i , A ˜ − ) for the alternatives A i ( i = 1 , 2 , … , 5 ) , as shown in Table 8.
Step 4:
Use Equation (19) to determine the relative closeness of each alternative A ˜ i with respect to the interval neutrosophic hesitant fuzzy PIS A ˜ + :
R C ( A ˜ 1 ) = 0.4955 , R C ( A ˜ 2 ) = 0.4729 , R C ( A ˜ 3 ) = 0.4803 , R C ( A ˜ 4 ) = 0.5536 , R C ( A ˜ 5 ) = 0.5607 .
Step 5:
According to the relative closeness coefficients R C ( A ˜ i ) , we rank the alternatives A ˜ i ( i = 1 , 2 , … , 5 ) as A 5 ≻ A 4 ≻ A 1 ≻ A 3 ≻ A 2 . Thus, the optimal alternative (CPU supplier) is A 5 .
Taking ϖ = 0.5 , we normalize the interval neutrosophic hesitant fuzzy decision matrix and compute the closeness coefficients of the alternatives for different values of α . The comparison results are given in Figure 4.

Comparative Analysis

Zhao et al. [31] generalized the minimum deviation method to accommodate hesitant fuzzy values for solving decision-making problems. We have applied this approach to the above illustrative example and compared the decision results with those of the proposed approach of this paper for SNHFSs. In the approach of Zhao et al., assume that the subjective preference values of the alternatives A j ( j = 1 , 2 , 3 , 4 , 5 ) assigned by the experts are: s 1 = { { 0.3 , 0.4 } , { 0.2 , 0.5 } , { 0.1 , 0.3 , 0.7 } } , s 2 = { { 0.2 , 0.7 } , { 0.1 , 0.9 } , { 0.3 , 0.6 } } , s 3 = { { 0.8 } , { 0.5 , 0.8 } , { 0.4 , 0.7 , 0.9 } } , s 4 = { { 0.1 , 0.4 } , { 0.6 } , { 0.5 , 0.7 , 0.8 } } and s 5 = { { 0.3 } , { 0.4 , 0.6 } , { 0.2 , 0.4 } } . Also s ˜ 1 = { { [ 0.3 , 0.5 ] , [ 0.4 , 0.6 ] } , { [ 0.2 , 0.3 ] , [ 0.5 , 0.7 ] } , { [ 0.1 , 0.2 ] , [ 0.3 , 0.4 ] , [ 0.7 , 0.9 ] } } , s ˜ 2 = { { [ 0.2 , 0.3 ] , [ 0.7 , 0.9 ] } , { [ 0.1 , 0.4 ] , [ 0.7 , 0.9 ] } , { [ 0.3 , 0.4 ] , [ 0.6 , 0.8 ] } } , s ˜ 3 = { { [ 0.8 , 0.9 ] } , { [ 0.5 , 0.6 ] , [ 0.8 , 0.9 ] } , { [ 0.4 , 0.6 ] , [ 0.7 , 0.9 ] , [ 0.6 , 0.7 ] } } , s ˜ 4 = { { [ 0.1 , 0.4 ] , [ 0.4 , 0.5 ] } , { [ 0.6 , 0.7 ] } , { [ 0.5 , 0.7 ] , [ 0.7 , 0.8 ] , [ 0.8 , 0.9 ] } } and s ˜ 5 = { { [ 0.3 , 0.5 ] } , { [ 0.4 , 0.5 ] , [ 0.6 , 0.8 ] } , { [ 0.2 , 0.3 ] , [ 0.4 , 0.7 ] } } .
The results corresponding to these approaches are summarized in Table 9.
From this comparative study, the results obtained by the approach of [31] coincide with those of the proposed approach, which validates the proposed approach. The main reason is that the approach of [31] takes the subjective preferences into account as decision information, which has a positive effect on the final decision results. Hence, the proposed approach can be suitably used to solve MADM problems. The advantages of our proposed method are as follows: (1) The developed approach has good flexibility and extensibility. (2) The SNHFSs of the developed approach effectively depict increasingly general decision-making situations. (3) With the aid of the maximizing deviation method and TOPSIS, the developed approach uses the satisfaction level of each alternative relative to the ideal solutions to make the decision.

4. Conclusions

SNHFS is a suitable tool for dealing with the obscurity of an expert's judgments over alternatives with respect to attributes. SNHFSs are useful for representing the hesitant assessments of experts; as a combination of SNS and HFS, SNHFS retains the advantages of both and accommodates increasingly complex MADM situations. In this paper, we first developed the normalization method and the distance measures of SNHFSs and, to obtain the optimal relative weights of the attributes, proposed a decision-making approach based on the maximizing deviation method for SNHFSs, including SVNHFSs and INHFSs. Second, we developed a new approach based on TOPSIS to solve MADM problems in the SNHFS environment (SVNHFS and INHFS). Finally, we illustrated the applicability and effectiveness of the developed method with a smartphone accessories supplier selection problem. In future work, we will extend the proposed approach to other areas, such as pattern recognition, medical diagnosis, clustering analysis, and image processing.

Author Contributions

M.A. and S.N. developed the theory and performed the computations. F.S. verified the analytical methods.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this research article.

References

  1. Smarandache, F. A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Rehoboth, DE, USA, 1999. [Google Scholar]
  2. Smarandache, F. Neutrosophy. Neutrosophic Probability, Set, and Logic, ProQuest Information & Learning; American Research Press: Ann Arbor, MI, USA, 1998; Volume 105, pp. 118–123. [Google Scholar]
  3. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single-valued neutrosophic sets. Multispace Multistruct. 2010, 4, 410–413. [Google Scholar]
  4. Ye, J. A multi-criteria decision making method using aggregation operators for simplified neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 2459–2466. [Google Scholar]
  5. Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 18th IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, 20–24 August 2009; pp. 1378–1382. [Google Scholar]
  6. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  7. Ye, J. Multiple-attribute decision making method under a single valued neutrosophic hesitant fuzzy environment. J. Intell. Syst. 2015, 24, 23–36. [Google Scholar] [CrossRef]
  8. Liu, C.F.; Luo, Y.S. New aggregation operators of single-valued neutrosophic hesitant fuzzy set and their application in multi-attribute decision making. Pattern Anal. Appl. 2019, 22, 417–427. [Google Scholar] [CrossRef]
  9. Sahin, R.; Liu, P. Correlation coefficient of single-valued neutrosophic hesitant fuzzy sets and its applications in decision making. Neural Comput. Appl. 2017, 28, 1387–1395. [Google Scholar] [CrossRef]
  10. Li, X.; Zhang, X. Single-valued neutrosophic hesitant fuzzy Choquet aggregation operators for multi-attribute decision making. Symmetry 2018, 10, 50. [Google Scholar] [CrossRef]
  11. Peng, J.-J.; Wang, J.-Q.; Hu, J.-H. Multi-criteria decision making approach based on single-valued neutrosophic hesitant fuzzy geometric weighted Choquet integral Heronian mean operator. J. Intell. Fuzzy Syst. 2018, 1–14. [Google Scholar] [CrossRef]
  12. Wang, R.; Li, Y. Generalized single-valued neutrosophic hesitant fuzzy prioritized aggregation operators and their applications to multiple criteria decision making. Information 2018, 9, 10. [Google Scholar] [CrossRef]
  13. Akram, M.; Adeel, A.; Alcantud, J.C.R. Group decision making methods based on hesitant N-soft sets. Expert Syst. Appl. 2019, 115, 95–105. [Google Scholar] [CrossRef]
  14. Akram, M.; Adeel, A. TOPSIS approach for MAGDM based on interval-valued hesitant fuzzy N-soft environment. Int. J. Fuzzy Syst. 2019, 21, 993–1009. [Google Scholar] [CrossRef]
  15. Akram, M.; Adeel, A.; Alcantud, J.C.R. Hesitant Fuzzy N-Soft Sets: A New Model with Applications in Decision-Making. J. Intell. Fuzzy Syst. 2019. [Google Scholar] [CrossRef]
  16. Akram, M.; Naz, S. A Novel Decision-Making Approach under Complex Pythagorean Fuzzy Environment. Math. Comput. Appl. 2019, 24, 73. [Google Scholar] [CrossRef]
  17. Naz, S.; Ashraf, S.; Akram, M. A novel approach to decision making with Pythagorean fuzzy information. Mathematics 2018, 6, 95. [Google Scholar] [CrossRef]
  18. Naz, S.; Ashraf, S.; Karaaslan, F. Energy of a bipolar fuzzy graph and its application in decision making. Italian J. Pure Appl. Math. 2018, 40, 339–352. [Google Scholar]
  19. Naz, S.; Akram, M. Novel decision making approach based on hesitant fuzzy sets and graph theory. Comput. Appl. Math. 2018. [Google Scholar] [CrossRef]
  20. Liu, P.; Shi, L. The generalized hybrid weighted average operator based on interval neutrosophic hesitant set and its application to multiple attribute decision making. Neural Comput. Appl. 2015, 26, 457–471. [Google Scholar] [CrossRef]
  21. Ye, J. Correlation coefficients of interval neutrosophic hesitant fuzzy sets and its application in a multiple attribute decision making method. Informatica 2016, 27, 179–202. [Google Scholar] [CrossRef]
  22. Kakati, P.; Borkotokey, S.; Mesiar, R.; Rahman, S. Interval neutrosophic hesitant fuzzy Choquet integral in multi-criteria decision making. J. Intell. Fuzzy Syst. 2018, 1–19. [Google Scholar] [CrossRef]
  23. Mahmood, T.; Ye, J.; Khan, Q. Vector similarity measures for simplified neutrosophic hesitant fuzzy set and their applications. J. Inequal. Spec. Funct. 2016, 7, 176–194. [Google Scholar]
  24. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  25. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
  26. Xu, Z. Some similarity measures of intuitionistic fuzzy sets and their applications to multiple attribute decision making. Fuzzy Optim. Decis. Mak. 2007, 6, 109–121. [Google Scholar] [CrossRef]
  27. Xu, Z.; Zhang, X. Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information. Knowl.-Based Syst. 2013, 52, 53–64. [Google Scholar] [CrossRef]
  28. Wei, C.; Ren, Z.; Rodriguez, R.M. A hesitant fuzzy linguistic TODIM method based on a score function. Int. J. Comput. Intell. Syst. 2015, 8, 701–712. [Google Scholar] [CrossRef]
  29. Liao, H.; Xu, Z.; Zeng, X.J. Hesitant fuzzy linguistic VIKOR method and its application in qualitative multiple criteria decision making. IEEE Trans. Fuzzy Syst. 2015, 23, 1343–1355. [Google Scholar] [CrossRef]
  30. Gou, X.J.; Liao, H.C.; Xu, Z.S.; Herrera, F. Double hierarchy hesitant fuzzy linguistic term set and MULTIMOORA method: A case of study to evaluate the implementation status of haze controlling measures. Inf. Fusion 2017, 38, 22–34. [Google Scholar] [CrossRef]
  31. Zhao, H.; Xu, Z.; Wang, H.; Liu, S. Hesitant fuzzy multi-attribute decision making based on the minimum deviation method. Soft Comput. 2017, 21, 3439–3459. [Google Scholar] [CrossRef]
Figure 1. The scheme of the developed approach for MADM.
Figure 2. The smartphone accessories supplier selection hierarchical structure.
Figure 3. Comparison of the closeness coefficients of the alternatives.
Figure 4. Comparison of the closeness coefficients of the alternatives.
Table 1. Single-valued neutrosophic hesitant fuzzy decision matrix.

     | P1                               | P2
A1   | {{0.2},{0.3,0.5},{0.1,0.2,0.3}}  | {{0.6,0.7},{0.1,0.3},{0.2,0.4}}
A2   | {{0.1},{0.3},{0.5,0.6}}          | {{0.4},{0.3,0.5},{0.5,0.6}}
A3   | {{0.6,0.7},{0.2,0.3},{0.1,0.2}}  | {{0.1,0.2},{0.3},{0.6,0.7}}
A4   | {{0.2,0.3},{0.1,0.2},{0.5,0.6}}  | {{0.3,0.4},{0.2,0.3},{0.5,0.6,0.7}}
A5   | {{0.7},{0.4,0.5},{0.2,0.4,0.5}}  | {{0.6},{0.1,0.7},{0.3,0.5}}

     | P3                               | P4
A1   | {{0.2,0.3},{0.4},{0.7,0.8}}      | {{0.4},{0.1,0.3},{0.5,0.7,0.9}}
A2   | {{0.1,0.3},{0.4},{0.5,0.6,0.8}}  | {{0.6,0.8},{0.2},{0.3,0.5}}
A3   | {{0.2,0.3},{0.1,0.2},{0.6,0.7}}  | {{0.2,0.3},{0.4},{0.2,0.5,0.6}}
A4   | {{0.2,0.4},{0.3},{0.1,0.2}}      | {{0.6},{0.2},{0.3,0.5}}
A5   | {{0.3},{0.5},{0.1,0.4}}          | {{0.5},{0.1,0.2},{0.3,0.4}}
Table 2. Normalized single-valued neutrosophic hesitant fuzzy decision matrix.

     | P1                                         | P2
A1   | {{0.2,0.2},{0.3,0.5},{0.1,0.2,0.3}}        | {{0.6,0.7},{0.1,0.3},{0.2,0.3,0.4}}
A2   | {{0.1,0.1},{0.3,0.3},{0.5,0.55,0.6}}       | {{0.4,0.4},{0.3,0.5},{0.5,0.55,0.6}}
A3   | {{0.6,0.7},{0.2,0.3},{0.1,0.15,0.2}}       | {{0.1,0.2},{0.3,0.3},{0.6,0.65,0.7}}
A4   | {{0.2,0.3},{0.1,0.2},{0.5,0.55,0.6}}       | {{0.3,0.4},{0.2,0.3},{0.5,0.6,0.7}}
A5   | {{0.7,0.7},{0.4,0.5},{0.2,0.4,0.5}}        | {{0.6,0.6},{0.1,0.7},{0.3,0.4,0.5}}

     | P3                                         | P4
A1   | {{0.2,0.3},{0.4,0.4},{0.7,0.75,0.8}}       | {{0.4,0.4},{0.1,0.3},{0.5,0.7,0.9}}
A2   | {{0.1,0.3},{0.4,0.4},{0.5,0.6,0.8}}        | {{0.6,0.8},{0.2,0.2},{0.3,0.4,0.5}}
A3   | {{0.2,0.3},{0.1,0.2},{0.6,0.65,0.7}}       | {{0.2,0.3},{0.4,0.4},{0.2,0.5,0.6}}
A4   | {{0.2,0.4},{0.3,0.3},{0.1,0.15,0.2}}       | {{0.6,0.6},{0.2,0.2},{0.3,0.4,0.5}}
A5   | {{0.3,0.3},{0.5,0.5},{0.1,0.25,0.4}}       | {{0.5,0.5},{0.1,0.2},{0.3,0.35,0.4}}
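The normalization step itself is defined in the body of the article, not in this excerpt; on the data shown, every cell of Table 2 is consistent with extending each hesitant set of Table 1 to a common length by linear interpolation between its sorted values: a lone value is repeated ({0.2} becomes {0.2, 0.2}) and midpoints are inserted ({0.5, 0.6} becomes {0.5, 0.55, 0.6}). A minimal sketch under that assumption (`extend_hesitant` is a hypothetical helper, not from the paper):

```python
def extend_hesitant(values, target_len):
    """Extend a hesitant membership set to target_len entries by linear
    interpolation between its sorted values; a single value is repeated.
    Assumes target_len >= 2 and target_len >= len(values) >= 1."""
    v = sorted(values)
    if len(v) == 1:
        return [round(v[0], 2)] * target_len
    out = []
    for k in range(target_len):
        # Position of the k-th new entry on the index range [0, len(v) - 1].
        t = k * (len(v) - 1) / (target_len - 1)
        i = min(int(t), len(v) - 2)
        frac = t - i
        out.append(round(v[i] + frac * (v[i + 1] - v[i]), 2))
    return out

# Sample entries of Table 1 reproduce the corresponding cells of Table 2:
print(extend_hesitant([0.2], 2))        # [0.2, 0.2]
print(extend_hesitant([0.5, 0.6], 3))   # [0.5, 0.55, 0.6]
print(extend_hesitant([0.1, 0.2], 3))   # [0.1, 0.15, 0.2]
```

The same componentwise extension also reproduces the interval-valued cells of Table 6 from Table 5 when applied to the left and right endpoints separately.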
Table 3. The geometric distances for alternatives.

Geometric Distance     | A1     | A2     | A3     | A4     | A5
d_i^+ = d(A_i, A^+)    | 0.5142 | 0.5434 | 0.4974 | 0.4781 | 0.4279
d_i^- = d(A_i, A^-)    | 0.5685 | 0.5212 | 0.5824 | 0.6086 | 0.6226
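Given the separations in Table 3, TOPSIS assigns each alternative the relative closeness coefficient cc_i = d_i^- / (d_i^+ + d_i^-) and ranks by descending cc_i; the values computed below agree with the "Our proposed method for SVNHFS" score row of Table 9. A short check (distances transcribed from Table 3):

```python
# Separations of each alternative from the positive and negative ideal solutions.
d_plus  = [0.5142, 0.5434, 0.4974, 0.4781, 0.4279]  # d(A_i, A+)
d_minus = [0.5685, 0.5212, 0.5824, 0.6086, 0.6226]  # d(A_i, A-)

# Relative closeness coefficient: cc_i = d_i^- / (d_i^+ + d_i^-).
cc = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]

order = sorted(range(1, 6), key=lambda i: cc[i - 1], reverse=True)
print([round(c, 4) for c in cc])  # [0.5251, 0.4896, 0.5394, 0.56, 0.5927]
print(order)                      # [5, 4, 3, 1, 2], i.e. A5 > A4 > A3 > A1 > A2
```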
Table 4. The geometric distances for alternatives.

Geometric Distance  | A1     | A2     | A3     | A4     | A5
d(A_i, A^+)         | 0.5446 | 0.5244 | 0.5220 | 0.4534 | 0.4341
d(A_i, A^-)         | 0.5385 | 0.5355 | 0.5652 | 0.6281 | 0.6202
Table 5. Interval neutrosophic hesitant fuzzy decision matrix.

     | P1                                                                        | P2
A1   | {{[0.2,0.3]},{[0.3,0.4],[0.5,0.7]},{[0.1,0.3],[0.2,0.5],[0.3,0.6]}}       | {{[0.6,0.8],[0.7,0.9]},{[0.1,0.2],[0.3,0.5]},{[0.2,0.3],[0.4,0.5]}}
A2   | {{[0.1,0.3]},{[0.3,0.5]},{[0.5,0.7],[0.6,0.8]}}                           | {{[0.4,0.6]},{[0.3,0.4],[0.5,0.6]},{[0.5,0.7],[0.6,0.8]}}
A3   | {{[0.6,0.7],[0.7,0.8]},{[0.2,0.4],[0.3,0.5]},{[0.1,0.3],[0.2,0.4]}}       | {{[0.1,0.3],[0.2,0.4]},{[0.3,0.6]},{[0.6,0.8],[0.7,0.9]}}
A4   | {{[0.2,0.5],[0.3,0.4]},{[0.1,0.3],[0.2,0.3]},{[0.5,0.6],[0.6,0.7]}}       | {{[0.3,0.5],[0.4,0.6]},{[0.2,0.3],[0.3,0.4]},{[0.5,0.7],[0.6,0.8],[0.7,0.9]}}
A5   | {{[0.7,0.8]},{[0.4,0.6],[0.5,0.7]},{[0.2,0.3],[0.4,0.6],[0.5,0.7]}}       | {{[0.6,0.8]},{[0.1,0.3],[0.7,0.8]},{[0.3,0.4],[0.5,0.6]}}

     | P3                                                                        | P4
A1   | {{[0.2,0.4],[0.3,0.5]},{[0.4,0.5]},{[0.7,0.8],[0.8,0.9]}}                 | {{[0.4,0.6]},{[0.1,0.2],[0.3,0.4]},{[0.5,0.6],[0.7,0.8],[0.8,0.9]}}
A2   | {{[0.1,0.3],[0.3,0.5]},{[0.4,0.6]},{[0.5,0.6],[0.6,0.7],[0.8,0.9]}}       | {{[0.6,0.7],[0.8,0.9]},{[0.2,0.5]},{[0.3,0.5],[0.5,0.7]}}
A3   | {{[0.2,0.3],[0.3,0.4]},{[0.1,0.3],[0.2,0.4]},{[0.6,0.8],[0.7,0.9]}}       | {{[0.2,0.4],[0.3,0.5]},{[0.4,0.6]},{[0.2,0.3],[0.5,0.7],[0.6,0.8]}}
A4   | {{[0.2,0.3],[0.4,0.5]},{[0.3,0.6]},{[0.1,0.4],[0.2,0.5]}}                 | {{[0.6,0.8]},{[0.2,0.3]},{[0.3,0.4],[0.5,0.6]}}
A5   | {{[0.3,0.5]},{[0.5,0.6]},{[0.1,0.3],[0.4,0.5]}}                           | {{[0.5,0.7]},{[0.1,0.3],[0.2,0.5]},{[0.3,0.5],[0.4,0.8]}}
Table 6. Normalized interval neutrosophic hesitant fuzzy decision matrix.

     | P1                                                                                    | P2
A1   | {{[0.2,0.3],[0.2,0.3]},{[0.3,0.4],[0.5,0.7]},{[0.1,0.3],[0.2,0.5],[0.3,0.6]}}         | {{[0.6,0.8],[0.7,0.9]},{[0.1,0.2],[0.3,0.5]},{[0.2,0.3],[0.3,0.4],[0.4,0.5]}}
A2   | {{[0.1,0.3],[0.1,0.3]},{[0.3,0.5],[0.3,0.5]},{[0.5,0.7],[0.55,0.75],[0.6,0.8]}}       | {{[0.4,0.6],[0.4,0.6]},{[0.3,0.4],[0.5,0.6]},{[0.5,0.7],[0.55,0.75],[0.6,0.8]}}
A3   | {{[0.6,0.7],[0.7,0.8]},{[0.2,0.4],[0.3,0.5]},{[0.1,0.3],[0.15,0.35],[0.2,0.4]}}       | {{[0.1,0.3],[0.2,0.4]},{[0.3,0.6],[0.3,0.6]},{[0.6,0.8],[0.65,0.85],[0.7,0.9]}}
A4   | {{[0.2,0.5],[0.3,0.4]},{[0.1,0.3],[0.2,0.3]},{[0.5,0.6],[0.55,0.65],[0.6,0.7]}}       | {{[0.3,0.5],[0.4,0.6]},{[0.2,0.3],[0.3,0.4]},{[0.5,0.7],[0.6,0.8],[0.7,0.9]}}
A5   | {{[0.7,0.8],[0.7,0.8]},{[0.4,0.6],[0.5,0.7]},{[0.2,0.3],[0.4,0.6],[0.5,0.7]}}         | {{[0.6,0.8],[0.6,0.8]},{[0.1,0.3],[0.7,0.8]},{[0.3,0.4],[0.4,0.5],[0.5,0.6]}}

     | P3                                                                                    | P4
A1   | {{[0.2,0.4],[0.3,0.5]},{[0.4,0.5],[0.4,0.5]},{[0.7,0.8],[0.75,0.85],[0.8,0.9]}}       | {{[0.4,0.6],[0.4,0.6]},{[0.1,0.2],[0.3,0.4]},{[0.5,0.6],[0.7,0.8],[0.8,0.9]}}
A2   | {{[0.1,0.3],[0.3,0.5]},{[0.4,0.6],[0.4,0.6]},{[0.5,0.6],[0.6,0.7],[0.8,0.9]}}         | {{[0.6,0.7],[0.8,0.9]},{[0.2,0.5],[0.2,0.5]},{[0.3,0.5],[0.4,0.6],[0.5,0.7]}}
A3   | {{[0.2,0.3],[0.3,0.4]},{[0.1,0.3],[0.2,0.4]},{[0.6,0.8],[0.65,0.85],[0.7,0.9]}}       | {{[0.2,0.4],[0.3,0.5]},{[0.4,0.6],[0.4,0.6]},{[0.2,0.3],[0.5,0.7],[0.6,0.8]}}
A4   | {{[0.2,0.3],[0.4,0.5]},{[0.3,0.6],[0.3,0.6]},{[0.1,0.4],[0.15,0.45],[0.2,0.5]}}       | {{[0.6,0.8],[0.6,0.8]},{[0.2,0.3],[0.2,0.3]},{[0.3,0.4],[0.4,0.5],[0.5,0.6]}}
A5   | {{[0.3,0.5],[0.3,0.5]},{[0.5,0.6],[0.5,0.6]},{[0.1,0.3],[0.25,0.4],[0.4,0.5]}}        | {{[0.5,0.7],[0.5,0.7]},{[0.1,0.3],[0.2,0.5]},{[0.3,0.5],[0.35,0.65],[0.4,0.8]}}
Table 7. The geometric distances for alternatives.

Geometric Distance   | A1     | A2     | A3     | A4     | A5
d̃(A_i, A^+)          | 0.5169 | 0.5711 | 0.5361 | 0.4952 | 0.4625
d̃(A_i, A^-)          | 0.5531 | 0.4849 | 0.5295 | 0.5740 | 0.5991
Table 8. The geometric distances for alternatives.

Geometric Distance   | A1     | A2     | A3     | A4     | A5
d̃(A_i, A^+)          | 0.5406 | 0.5562 | 0.5569 | 0.4752 | 0.4653
d̃(A_i, A^-)          | 0.5310 | 0.4990 | 0.5147 | 0.5894 | 0.5938
Table 9. Comparative analysis.

Method                           | Scores (A1, A2, A3, A4, A5)            | Ranking of Alternatives
Zhao et al. [31] for SVNHFS      | 0.4431, 0.4025, 0.4941, 0.5073, 0.5691 | A5 ≻ A4 ≻ A3 ≻ A1 ≻ A2
Our proposed method for SVNHFS   | 0.5251, 0.4896, 0.5394, 0.5600, 0.5927 | A5 ≻ A4 ≻ A3 ≻ A1 ≻ A2
Zhao et al. [31] for INHFS       | 0.4559, 0.4206, 0.4255, 0.5334, 0.5791 | A5 ≻ A4 ≻ A1 ≻ A3 ≻ A2
Our proposed method for INHFS    | 0.5169, 0.4592, 0.4969, 0.5368, 0.5643 | A5 ≻ A4 ≻ A1 ≻ A3 ≻ A2
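The rankings in the last column follow directly from sorting each score row in descending order; all four methods agree on the optimal alternative A5. A quick consistency check (scores transcribed from Table 9):

```python
# Score rows of Table 9, one per method, for alternatives A1..A5.
scores = {
    "Zhao et al. [31] for SVNHFS":    [0.4431, 0.4025, 0.4941, 0.5073, 0.5691],
    "Our proposed method for SVNHFS": [0.5251, 0.4896, 0.5394, 0.5600, 0.5927],
    "Zhao et al. [31] for INHFS":     [0.4559, 0.4206, 0.4255, 0.5334, 0.5791],
    "Our proposed method for INHFS":  [0.5169, 0.4592, 0.4969, 0.5368, 0.5643],
}

def ranking(row):
    """Alternatives A1..A5 sorted by descending score."""
    order = sorted(range(5), key=lambda i: row[i], reverse=True)
    return ["A%d" % (i + 1) for i in order]

for method, row in scores.items():
    print(method, "->", " > ".join(ranking(row)))
# Every row ranks A5 first, matching the paper's optimal alternative.
```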


Akram, M.; Naz, S.; Smarandache, F. Generalization of Maximizing Deviation and TOPSIS Method for MADM in Simplified Neutrosophic Hesitant Fuzzy Environment. Symmetry 2019, 11, 1058. https://doi.org/10.3390/sym11081058
