Article

New Similarity Measures of Single-Valued Neutrosophic Multisets Based on the Decomposition Theorem and Its Application in Medical Diagnosis

1 School of Arts and Sciences, Shaanxi University of Science & Technology, Xi’an 710021, China
2 College of Arts and Sciences, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(10), 466; https://doi.org/10.3390/sym10100466
Submission received: 16 September 2018 / Revised: 1 October 2018 / Accepted: 2 October 2018 / Published: 9 October 2018
(This article belongs to the Special Issue Fuzzy Techniques for Decision Making 2018)

Abstract
Cut sets, the decomposition theorem and the representation theorem play a central role in connecting fuzzy sets with classical sets. Since single-valued neutrosophic multisets (SVNMSs) generalize fuzzy sets, their cut sets, decomposition theorem and representation theorem have a similar importance and deserve an in-depth study. In this paper, the decomposition theorem, the representation theorem and an application of a new similarity measure of SVNMSs are investigated by theoretical analysis and calculation. The main results are as follows: (1) The notions, operations and operational properties of the cut sets and strong cut sets of SVNMSs are introduced and discussed. (2) The decomposition theorem and the representation theorem of SVNMSs are established and rigorously proved. These two theorems form a theoretical basis for the further development of SVNMSs: the decomposition theorem provides a new idea for solving problems about SVNMSs and points toward an extension principle for SVNMSs. (3) Based on the decomposition theorem and the representation theorem, a new similarity measure of SVNMSs is proposed by means of a triple integral, and this similarity measure is applied to a practical multicriteria decision-making problem, which illustrates the effectiveness and practicality of the decision-making method. The new similarity measure is not only a tool for solving multi-attribute decision-making problems, but also embodies an important mathematical idea, namely the idea of transformation.

1. Introduction

It is essential for medical experts to handle the incomplete and uncertain information contained in actual medical diagnosis questions. In order to use various kinds of uncertain diagnostic information effectively, Smarandache [1] proposed the neutrosophic set (NS), a generalization of the fuzzy set (FS) and the intuitionistic fuzzy set (IFS) [2]. The NS is more flexible and widely applicable than the FS and the IFS. Nevertheless, it is hard to apply the NS to practical problems because the values of its truth, indeterminacy and falsity functions lie in the nonstandard interval ]0⁻, 1⁺[. Thus, Wang and Smarandache [3] introduced the notion of the single-valued neutrosophic set (SVNS), whose values belong to [0, 1]. In actual decision-making problems, scholars have obtained many inspiring research results based on SVNS theory [4,5,6,7,8,9]. However, in multicriteria decision-making problems the application of the SVNS has certain limitations. Fortunately, Yager [10] first discussed fuzzy multisets (FMSs), in which every element may appear more than once with the same or different membership values. Even so, fuzzy multiset theory cannot cope with all types of uncertain and incomplete information. Therefore, Ye [11] introduced the notion of single-valued neutrosophic multisets (SVNMSs) by building on fuzzy multisets [12,13]. So far, a large number of scholars have studied similarity measures of SVNMSs from different angles and discussed their applications in decision-making problems [11,14,15,16,17,18], which is crucial for further in-depth analysis and research on SVNMSs.
As is well known, the decomposition theorem, the representation theorem and the extension principle are three theoretical pillars of fuzzy mathematics. The decomposition theorem and the representation theorem are the bridge between fuzzy set theory and classical set theory; that is, any problem about fuzzy sets can be turned into a problem about classical sets by taking cut sets and reconstructing the original set from them. The notion of λ-cut sets of FSs, some basic properties of λ-cut sets, and the decomposition and representation theorems of FSs have been established [1,19]. Moreover, the definitions of cut sets, some basic properties of cut sets, and the decomposition and representation theorems of the IFS, the interval intuitionistic fuzzy set (IIFS) and the interval-valued fuzzy set (IVFS), which are generalizations of FSs, have also been proposed [20,21,22,23,24,25,26,27,28]. After that, D. Singh, A. J. Alkali and A. I. Isah introduced the definition of α-cuts for FMSs, a generalization of the λ-cut sets of FSs, and proposed some properties of α-cuts and a decomposition theorem for FMSs [29]. However, the cut sets and their operational properties, the decomposition theorem and the representation theorem of SVNMSs have not yet been studied. Thus, it is necessary to discuss the cut sets, decomposition theorem and representation theorem of SVNMSs. We have already studied SVNMSs and obtained some new results in [30,31,32]. Moreover, this paper proposes a new similarity measure from the perspective of the decomposition theorem, which differs from the measures in [11,12,13,14,15,16]. This new method takes the decomposition theorem as its theoretical basis and the integral as its mathematical tool. The idea is simple, the calculation is convenient, and it embodies important mathematical ideas, which makes it rather practical [33,34,35,36].
The organization of this paper is as follows. In Section 2, some basic concepts of IFSs, FMSs and SVNMSs are reviewed. Section 3 discusses some new properties of SVNMSs. Section 4 proposes the (α, β, γ)-cut sets of SVNMSs and investigates the decomposition theorem and the representation theorem of SVNMSs. In Section 5, based on the established cut sets, a new method is proposed to calculate the similarity measure between SVNMSs. In Section 6, a practical medical-diagnosis example is offered to illustrate the approach proposed in this paper. Section 7 presents the conclusions and further research.

2. Preliminaries

2.1. Some Basic Concepts of IFS, FMS

Definition 1
([2]). Let X be a nonempty set. An IFS M in X is given by
M = {⟨x, μ_M(x), ν_M(x)⟩ | x ∈ X},
where μ_M : X → [0, 1] and ν_M : X → [0, 1] satisfy the condition 0 ≤ μ_M(x) + ν_M(x) ≤ 1 for all x ∈ X.
Here μ_M(x), ν_M(x) ∈ [0, 1] denote the membership and the non-membership functions of the intuitionistic fuzzy set M.
Definition 2
([10]). A fuzzy multiset M over the universe X is a generalized set characterized by pairs, where the first part of each pair is an element of X and the second part is a membership value of that element in M. Note that an element of X may occur more than once with the same or different membership values. For each x ∈ X, the membership sequence is defined as the decreasingly ordered sequence of these membership values, that is,
(μ_M^1(x), μ_M^2(x), …, μ_M^q(x)),
where μ_M^1(x) ≥ μ_M^2(x) ≥ … ≥ μ_M^q(x). Hence, the FMS M is given by
M = {⟨x, (μ_M^1(x), μ_M^2(x), …, μ_M^q(x))⟩ | x ∈ X}.

2.2. Some Concepts of SVNMS

Definition 3
([11]). Let X be a nonempty set with a generic element denoted by x. A SVNMS M in X is characterized by three functions: the count truth-membership CT_M, the count indeterminacy-membership CI_M and the count falsity-membership CF_M, such that CT_M(x) : X → R, CI_M(x) : X → R, CF_M(x) : X → R for every x ∈ X, where R denotes the set of all multisets of real numbers drawn from the real unit interval [0, 1]. Then, a SVNMS M is given by
M = {⟨x, (T_M^1(x), T_M^2(x), …, T_M^k(x)), (I_M^1(x), I_M^2(x), …, I_M^k(x)), (F_M^1(x), F_M^2(x), …, F_M^k(x))⟩ | x ∈ X},
where the truth-membership sequence (T_M^1(x), T_M^2(x), …, T_M^k(x)), the indeterminacy-membership sequence (I_M^1(x), I_M^2(x), …, I_M^k(x)) and the falsity-membership sequence (F_M^1(x), F_M^2(x), …, F_M^k(x)) may or may not be in decreasing order. In addition, T_M^j(x), I_M^j(x), F_M^j(x) satisfy the condition
0 ≤ T_M^j(x) + I_M^j(x) + F_M^j(x) ≤ 3, for all x ∈ X, j = 1, 2, …, k.
For conciseness, a SVNMS M over X can be written as
M = {⟨x, T_M^j(x), I_M^j(x), F_M^j(x)⟩ | x ∈ X, j = 1, 2, …, k}.
Furthermore, we denote the set of all SVNMSs on X by SVNMS(X).
Definition 4
([11]). Let M ∈ SVNMS(X). For every element x in M, the length of x is defined as the cardinality of CT_M(x) (equivalently, of CI_M(x) or CF_M(x)) and is denoted by l(x : M); that is, l(x : M) = |CT_M(x)| = |CI_M(x)| = |CF_M(x)|. For M, N ∈ SVNMS(X), we write l(x : M, N) = max{l(x : M), l(x : N)}.
Definition 5
([11]). An absolute SVNMS M̃ is a SVNMS with T_M̃^j(x) = 1, I_M̃^j(x) = 0 and F_M̃^j(x) = 0 for all x ∈ X and j = 1, 2, …, l(x : M̃).
Definition 6
([11]). A null SVNMS Φ̃ is a SVNMS with T_Φ̃^j(x) = 0, I_Φ̃^j(x) = 1 and F_Φ̃^j(x) = 1 for all x ∈ X and j = 1, 2, …, l(x : Φ̃).
Let M, N ∈ SVNMS(X). In order to study operations between M and N, we must ensure that l(x : M) = l(x : N) for every x ∈ X; if not, we append a sufficient number of zeroes to the truth-membership sequence and a sufficient number of ones to the indeterminacy-membership and falsity-membership sequences of the shorter element, so that the sequences have equal lengths and are convenient to compute with.
Definition 7
([11]). Let M = {⟨x, T_M^j(x), I_M^j(x), F_M^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M)} and N = {⟨x, T_N^j(x), I_N^j(x), F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : N)} be two SVNMSs in X. Then, we have:
• (1) Inclusion: M ⊆ N if and only if T_M^j(x) ≤ T_N^j(x), I_M^j(x) ≥ I_N^j(x), F_M^j(x) ≥ F_N^j(x) for j = 1, 2, …, l(x : M, N);
• (2) Equality: M = N if and only if M ⊆ N and N ⊆ M;
• (3) Complement: M^c̃ = {⟨x, F_M^j(x), 1 − I_M^j(x), T_M^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M)};
• (4) Union: M ∪ N = {⟨x, T_M^j(x) ∨ T_N^j(x), I_M^j(x) ∧ I_N^j(x), F_M^j(x) ∧ F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M, N)};
• (5) Intersection: M ∩ N = {⟨x, T_M^j(x) ∧ T_N^j(x), I_M^j(x) ∨ I_N^j(x), F_M^j(x) ∨ F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M, N)};
• (6) Addition: M ⊕ N = {⟨x, T_M^j(x) + T_N^j(x) − T_M^j(x)T_N^j(x), I_M^j(x)I_N^j(x), F_M^j(x)F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M, N)};
• (7) Multiplication: M ⊗ N = {⟨x, T_M^j(x)T_N^j(x), I_M^j(x) + I_N^j(x) − I_M^j(x)I_N^j(x), F_M^j(x) + F_N^j(x) − F_M^j(x)F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M, N)}.
Definition 8
([14]). Let M = {⟨x_i, T_M^j(x_i), I_M^j(x_i), F_M^j(x_i)⟩ | x_i ∈ X; i = 1, 2, …, n; j = 1, 2, …, l(x : M)} and N = {⟨x_i, T_N^j(x_i), I_N^j(x_i), F_N^j(x_i)⟩ | x_i ∈ X; i = 1, 2, …, n; j = 1, 2, …, l(x : N)} be two SVNMSs in X = {x_1, x_2, …, x_n}. The generalized distance measure between M and N is defined as follows:
D_P(M, N) = [ (1/n) Σ_{i=1}^{n} (1/(3 l_i)) Σ_{j=1}^{l_i} ( |T_M^j(x_i) − T_N^j(x_i)|^P + |I_M^j(x_i) − I_N^j(x_i)|^P + |F_M^j(x_i) − F_N^j(x_i)|^P ) ]^{1/P},
where l_i = l(x_i : M, N) = max{l(x_i : M), l(x_i : N)} for i = 1, 2, …, n.
For P = 1 and P = 2 it reduces to the Hamming distance and the Euclidean distance, respectively, which are widely applied in science and engineering.
Based on the relationship between distance measures and similarity measures, we can introduce two distance-based similarity measures between M and N:
S_1(M, N) = 1 − D_P(M, N),
S_2(M, N) = (1 − D_P(M, N)) / (1 + D_P(M, N)).
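To make these formulas concrete, the following Python sketch (our own illustration, not code from the paper) encodes a SVNMS as a dictionary mapping each element of X to its (T, I, F) value sequences, applies the padding convention described after Definition 6 (truth padded with zeroes, indeterminacy and falsity padded with ones), and computes D_P together with the two distance-based similarity measures; all function and variable names are our own.

```python
# Minimal sketch of Definition 8: an SVNMS over X is modelled as a dict
# mapping each element x to a triple (T, I, F) of equal-length lists of
# membership values in [0, 1].

def pad(M, N):
    """Equalize sequence lengths: pad T with 0, I and F with 1 (Section 2.2)."""
    out_M, out_N = {}, {}
    for x in set(M) | set(N):
        TM, IM, FM = M.get(x, ([], [], []))
        TN, IN, FN = N.get(x, ([], [], []))
        l = max(len(TM), len(TN))
        fill = lambda seq, v: list(seq) + [v] * (l - len(seq))
        out_M[x] = (fill(TM, 0.0), fill(IM, 1.0), fill(FM, 1.0))
        out_N[x] = (fill(TN, 0.0), fill(IN, 1.0), fill(FN, 1.0))
    return out_M, out_N

def distance(M, N, p=1):
    """Generalized distance D_P(M, N); p=1 gives the Hamming, p=2 the Euclidean distance."""
    M, N = pad(M, N)
    n = len(M)
    total = 0.0
    for x in M:
        (TM, IM, FM), (TN, IN, FN) = M[x], N[x]
        l = len(TM)
        total += sum(abs(TM[j] - TN[j]) ** p + abs(IM[j] - IN[j]) ** p
                     + abs(FM[j] - FN[j]) ** p for j in range(l)) / (3 * l)
    return (total / n) ** (1.0 / p)

def s1(M, N, p=1):
    return 1 - distance(M, N, p)

def s2(M, N, p=1):
    d = distance(M, N, p)
    return (1 - d) / (1 + d)
```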

3. Some New Properties of SVNMS

The operations of SVNMSs are discussed in depth and some theoretical results are obtained. On this basis, this section generalizes the union and intersection of two SVNMSs to the general case of an arbitrary index set. In addition, this section presents the operational properties of SVNMSs.
Remark 1.
The union and intersection of two SVNMSs can be extended to the general case; that is, for an arbitrary index set T, if M_t ∈ SVNMS(X) for every t ∈ T, we can define
⋃_{t∈T} M_t = {⟨x, ⋁_{t∈T} T_{M_t}^j(x), ⋀_{t∈T} I_{M_t}^j(x), ⋀_{t∈T} F_{M_t}^j(x)⟩ | x ∈ X, j = 1, 2, …, l_x},
and
⋂_{t∈T} M_t = {⟨x, ⋀_{t∈T} T_{M_t}^j(x), ⋁_{t∈T} I_{M_t}^j(x), ⋁_{t∈T} F_{M_t}^j(x)⟩ | x ∈ X, j = 1, 2, …, l_x},
where l_x = max{l(x : M_t) | t ∈ T}.
Proposition 1.
Let M, N and Q be three SVNMSs in X. Then the following operational properties hold:
• (1) Commutativity: M ∪ N = N ∪ M, M ∩ N = N ∩ M;
• (2) Associativity: M ∪ (N ∪ Q) = (M ∪ N) ∪ Q, M ∩ (N ∩ Q) = (M ∩ N) ∩ Q;
• (3) Idempotency: M ∪ M = M, M ∩ M = M;
• (4) Absorption: M ∪ (M ∩ N) = M, M ∩ (M ∪ N) = M;
• (5) Identity: M ∪ M̃ = M̃; M ∩ M̃ = M, M ∪ Φ̃ = M, M ∩ Φ̃ = Φ̃;
• (6) Distributivity: M ∪ (N ∩ Q) = (M ∪ N) ∩ (M ∪ Q), M ∩ (N ∪ Q) = (M ∩ N) ∪ (M ∩ Q);
• (7) Involution: (M^c̃)^c̃ = M, (M̃)^c̃ = Φ̃, (Φ̃)^c̃ = M̃;
• (8) De Morgan laws: (M ∪ N)^c̃ = M^c̃ ∩ N^c̃, (M ∩ N)^c̃ = M^c̃ ∪ N^c̃.
Remark 2.
As we know, the law of complementation holds in classical set theory; however, it does not hold for SVNMSs. For example, let X = {x_1, x_2, x_3} and M ∈ SVNMS(X) be given by
M = {⟨x_1, (0.5, 0.3), (0.1, 0.1), (0.7, 0.8)⟩, ⟨x_2, (0.7, 0.68, 0.62), (0.3, 0.45, 0.5), (0.34, 0.28, 0.49)⟩, ⟨x_3, (0.67, 0.5, 0.3), (0.2, 0.3, 0.4), (0.4, 0.5, 0.7)⟩}.
Obviously,
M ∪ M^c̃ = {⟨x_1, (0.7, 0.8), (0.1, 0.1), (0.5, 0.3)⟩, ⟨x_2, (0.7, 0.68, 0.62), (0.3, 0.45, 0.5), (0.34, 0.28, 0.49)⟩, ⟨x_3, (0.67, 0.5, 0.7), (0.2, 0.3, 0.4), (0.4, 0.5, 0.3)⟩} ≠ M̃;
M ∩ M^c̃ = {⟨x_1, (0.5, 0.3), (0.9, 0.9), (0.7, 0.8)⟩, ⟨x_2, (0.34, 0.28, 0.49), (0.7, 0.55, 0.5), (0.7, 0.68, 0.62)⟩, ⟨x_3, (0.4, 0.5, 0.3), (0.8, 0.7, 0.6), (0.67, 0.5, 0.7)⟩} ≠ Φ̃.

4. Decomposition Theorem and Representation Theorem of SVNMS

In this section, the notions of cut sets and strong cut sets of SVNMSs are defined and some properties of cut sets are proposed. We also investigate the decomposition theorem and the representation theorem of SVNMSs based on cut sets.

4.1. Decomposition Theorem

Definition 9.
Let X = {x_1, x_2, …, x_n}, A ∈ SVNMS(X) and α, β, γ ∈ [0, 1] with 0 ≤ α + β + γ ≤ 3. The α-cut set of the truth value function generated by A is defined as follows:
A_α = {x_i ∈ X | T_A^j(x_i) ≥ α; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
the strong α-cut set of the truth value function generated by A is defined as follows:
A_α+ = {x_i ∈ X | T_A^j(x_i) > α; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
the β-cut set of the indeterminacy value function generated by A is defined as follows:
A_β = {x_i ∈ X | I_A^j(x_i) ≤ β; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
the strong β-cut set of the indeterminacy value function generated by A is defined as follows:
A_β+ = {x_i ∈ X | I_A^j(x_i) < β; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
the γ-cut set of the falsity value function generated by A is defined as follows:
A_γ = {x_i ∈ X | F_A^j(x_i) ≤ γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
the strong γ-cut set of the falsity value function generated by A is defined as follows:
A_γ+ = {x_i ∈ X | F_A^j(x_i) < γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)}.
Next, we define the (α, β, γ)-cut sets as follows:
A_(α,β,γ) = {x_i ∈ X | T_A^j(x_i) ≥ α, I_A^j(x_i) ≤ β, F_A^j(x_i) ≤ γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α+,β,γ) = {x_i ∈ X | T_A^j(x_i) > α, I_A^j(x_i) ≤ β, F_A^j(x_i) ≤ γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α,β+,γ) = {x_i ∈ X | T_A^j(x_i) ≥ α, I_A^j(x_i) < β, F_A^j(x_i) ≤ γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α,β,γ+) = {x_i ∈ X | T_A^j(x_i) ≥ α, I_A^j(x_i) ≤ β, F_A^j(x_i) < γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α+,β+,γ) = {x_i ∈ X | T_A^j(x_i) > α, I_A^j(x_i) < β, F_A^j(x_i) ≤ γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α+,β,γ+) = {x_i ∈ X | T_A^j(x_i) > α, I_A^j(x_i) ≤ β, F_A^j(x_i) < γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α,β+,γ+) = {x_i ∈ X | T_A^j(x_i) ≥ α, I_A^j(x_i) < β, F_A^j(x_i) < γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)};
A_(α+,β+,γ+) = {x_i ∈ X | T_A^j(x_i) > α, I_A^j(x_i) < β, F_A^j(x_i) < γ; i = 1, 2, …, n; j = 1, 2, …, l(x_i : A)}.
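As a computational companion to Definition 9 (again our own sketch, using the same dictionary format as before), the function below records each cut position by position as a 0/1 sequence, which is exactly how the cuts are displayed in Remark 3 and Example 1 below; the strict flags switch the individual comparisons to the strong cuts.

```python
def cut(A, alpha, beta, gamma, strict=(False, False, False)):
    """(alpha, beta, gamma)-cut of the SVNMS A, written as 0/1 indicator sequences.

    Truth values are cut with T >= alpha, indeterminacy with I <= beta and
    falsity with F <= gamma; strict[k] = True turns the k-th comparison into
    the strong cut (T > alpha, I < beta, F < gamma respectively)."""
    sT, sI, sF = strict
    out = {}
    for x, (T, I, F) in A.items():
        out[x] = ([1 if (t > alpha if sT else t >= alpha) else 0 for t in T],
                  [1 if (i < beta if sI else i <= beta) else 0 for i in I],
                  [1 if (f < gamma if sF else f <= gamma) else 0 for f in F])
    return out
```

For instance, with the SVNMS A of Remark 3 and (α, β, γ) = (0.4, 0.3, 0.5), cut(A, 0.4, 0.3, 0.5) reproduces the displayed A_(α,β,γ).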
The α-cut sets, β-cut sets and γ-cut sets of a SVNMS satisfy the following properties:
Theorem 1.
Let A, B ∈ SVNMS(X) and α, β, γ ∈ [0, 1] with 0 ≤ α + β + γ ≤ 3. Then,
(1) A ⊆ B ⇒ A_α ⊆ B_α, A_β ⊆ B_β, A_γ ⊆ B_γ;
(2) (A ∩ B)_α = A_α ∩ B_α, (A ∩ B)_β = A_β ∩ B_β, (A ∩ B)_γ = A_γ ∩ B_γ;
(3) (A ∪ B)_α = A_α ∪ B_α, (A ∪ B)_β = A_β ∪ B_β, (A ∪ B)_γ = A_γ ∪ B_γ;
(4) (⋂_{t∈T} A_t)_α = ⋂_{t∈T} (A_t)_α, (⋂_{t∈T} A_t)_β = ⋂_{t∈T} (A_t)_β, (⋂_{t∈T} A_t)_γ = ⋂_{t∈T} (A_t)_γ;
(5) (⋃_{t∈T} A_t)_α = ⋃_{t∈T} (A_t)_α, (⋃_{t∈T} A_t)_β = ⋃_{t∈T} (A_t)_β, (⋃_{t∈T} A_t)_γ = ⋃_{t∈T} (A_t)_γ;
(6) α_1 ≤ α_2, β_1 ≥ β_2, γ_1 ≥ γ_2 ⇒ A_{α_1} ⊇ A_{α_2}, A_{β_1} ⊇ A_{β_2}, A_{γ_1} ⊇ A_{γ_2}.
Proof. 
 
(1) Let x ∈ A_α. Then T_A^j(x) ≥ α. From A ⊆ B it follows that T_A^j(x) ≤ T_B^j(x), so T_B^j(x) ≥ α and hence x ∈ B_α. Therefore A_α ⊆ B_α for j = 1, 2, …, l(x : A, B). Next, let x ∈ A_β. Then I_A^j(x) ≤ β. From A ⊆ B it follows that I_A^j(x) ≥ I_B^j(x), so I_B^j(x) ≤ β and hence x ∈ B_β. Therefore A_β ⊆ B_β for j = 1, 2, …, l(x : A, B); the inclusion A_γ ⊆ B_γ is proved in the same way.
(2) If x ∈ (A ∩ B)_α, then T_{A∩B}^j(x) ≥ α, i.e., min{T_A^j(x), T_B^j(x)} ≥ α, so T_A^j(x) ≥ α and T_B^j(x) ≥ α. Thus x ∈ A_α and x ∈ B_α, and hence x ∈ A_α ∩ B_α. Conversely, if x ∈ A_α ∩ B_α, then T_A^j(x) ≥ α and T_B^j(x) ≥ α, so min{T_A^j(x), T_B^j(x)} ≥ α, i.e., T_{A∩B}^j(x) ≥ α and x ∈ (A ∩ B)_α. Hence (A ∩ B)_α = A_α ∩ B_α for j = 1, 2, …, l(x : A, B).
If x ∈ (A ∩ B)_β, then I_{A∩B}^j(x) ≤ β, i.e., max{I_A^j(x), I_B^j(x)} ≤ β, so I_A^j(x) ≤ β and I_B^j(x) ≤ β; thus x ∈ A_β and x ∈ B_β, and hence x ∈ A_β ∩ B_β. Conversely, if x ∈ A_β ∩ B_β, then I_A^j(x) ≤ β and I_B^j(x) ≤ β, so max{I_A^j(x), I_B^j(x)} ≤ β, i.e., I_{A∩B}^j(x) ≤ β and x ∈ (A ∩ B)_β. Therefore (A ∩ B)_β = A_β ∩ B_β for j = 1, 2, …, l(x : A, B); the γ-cut case is analogous.
(3) If x ∈ (A ∪ B)_α, then T_{A∪B}^j(x) ≥ α, i.e., max{T_A^j(x), T_B^j(x)} ≥ α, so T_A^j(x) ≥ α or T_B^j(x) ≥ α; thus x ∈ A_α or x ∈ B_α, and hence x ∈ A_α ∪ B_α. Conversely, if x ∈ A_α ∪ B_α, then T_A^j(x) ≥ α or T_B^j(x) ≥ α, so max{T_A^j(x), T_B^j(x)} ≥ α, i.e., T_{A∪B}^j(x) ≥ α and x ∈ (A ∪ B)_α. Hence (A ∪ B)_α = A_α ∪ B_α for j = 1, 2, …, l(x : A, B).
If x ∈ (A ∪ B)_γ, then F_{A∪B}^j(x) ≤ γ, i.e., min{F_A^j(x), F_B^j(x)} ≤ γ, so F_A^j(x) ≤ γ or F_B^j(x) ≤ γ; thus x ∈ A_γ or x ∈ B_γ, and hence x ∈ A_γ ∪ B_γ. Conversely, if x ∈ A_γ ∪ B_γ, then F_A^j(x) ≤ γ or F_B^j(x) ≤ γ, so min{F_A^j(x), F_B^j(x)} ≤ γ, i.e., F_{A∪B}^j(x) ≤ γ and x ∈ (A ∪ B)_γ. Therefore (A ∪ B)_γ = A_γ ∪ B_γ for j = 1, 2, …, l(x : A, B).
(4) If x ∈ (⋂_{t∈T} A_t)_α, then T_{⋂_{t∈T} A_t}^j(x) ≥ α, i.e., inf_{t∈T}{T_{A_t}^j(x)} ≥ α, so T_{A_t}^j(x) ≥ α for all t ∈ T, i.e., x ∈ (A_t)_α for all t ∈ T, and hence x ∈ ⋂_{t∈T}(A_t)_α. Conversely, if x ∈ ⋂_{t∈T}(A_t)_α, then T_{A_t}^j(x) ≥ α for all t ∈ T, so inf_{t∈T}{T_{A_t}^j(x)} ≥ α; hence T_{⋂_{t∈T} A_t}^j(x) ≥ α and x ∈ (⋂_{t∈T} A_t)_α. Therefore (⋂_{t∈T} A_t)_α = ⋂_{t∈T}(A_t)_α for j = 1, 2, …, l, where l = max{l(x : A_t) | t ∈ T}.
If x ∈ (⋂_{t∈T} A_t)_β, then I_{⋂_{t∈T} A_t}^j(x) ≤ β, i.e., sup_{t∈T}{I_{A_t}^j(x)} ≤ β, so I_{A_t}^j(x) ≤ β for all t ∈ T, i.e., x ∈ (A_t)_β for all t ∈ T, and hence x ∈ ⋂_{t∈T}(A_t)_β. Conversely, if x ∈ ⋂_{t∈T}(A_t)_β, then I_{A_t}^j(x) ≤ β for all t ∈ T, so sup_{t∈T}{I_{A_t}^j(x)} ≤ β, i.e., I_{⋂_{t∈T} A_t}^j(x) ≤ β and x ∈ (⋂_{t∈T} A_t)_β. Therefore (⋂_{t∈T} A_t)_β = ⋂_{t∈T}(A_t)_β for j = 1, 2, …, l, where l = max{l(x : A_t) | t ∈ T}.
(5) The proof of (5) is similar to that of (4).
(6) (6) follows directly from Definition 9. □
The (α, β, γ)-cut sets of a SVNMS satisfy the following properties.
Theorem 2.
Let A, B ∈ SVNMS(X) and α, β, γ ∈ [0, 1] with 0 ≤ α + β + γ ≤ 3. Then,
(1) A_(α+,β+,γ+) ⊆ A_(α+,β+,γ) ⊆ A_(α+,β,γ) ⊆ A_(α,β,γ), A_(α+,β+,γ+) ⊆ A_(α+,β+,γ) ⊆ A_(α,β+,γ) ⊆ A_(α,β,γ),
A_(α+,β+,γ+) ⊆ A_(α+,β,γ+) ⊆ A_(α+,β,γ) ⊆ A_(α,β,γ), A_(α+,β+,γ+) ⊆ A_(α+,β,γ+) ⊆ A_(α,β,γ+) ⊆ A_(α,β,γ),
A_(α+,β+,γ+) ⊆ A_(α,β+,γ+) ⊆ A_(α,β+,γ) ⊆ A_(α,β,γ), A_(α+,β+,γ+) ⊆ A_(α,β+,γ+) ⊆ A_(α,β,γ+) ⊆ A_(α,β,γ);
(2) A ⊆ B ⇒ A_(α,β,γ) ⊆ B_(α,β,γ), A_(α+,β,γ) ⊆ B_(α+,β,γ), A_(α,β+,γ) ⊆ B_(α,β+,γ), A_(α,β,γ+) ⊆ B_(α,β,γ+),
A_(α+,β+,γ) ⊆ B_(α+,β+,γ), A_(α+,β,γ+) ⊆ B_(α+,β,γ+), A_(α,β+,γ+) ⊆ B_(α,β+,γ+), A_(α+,β+,γ+) ⊆ B_(α+,β+,γ+);
(3) α_1 < α_2, β_1 > β_2, γ_1 > γ_2 ⇒ A_(α_1,β_1,γ_1) ⊇ A_(α_1+,β_1+,γ_1+) ⊇ A_(α_2,β_2,γ_2) ⊇ A_(α_2+,β_2+,γ_2+);
(4) A_(α,β,γ) = A_α ∩ A_β ∩ A_γ;
(5) (A ∩ B)_(α,β,γ) = A_(α,β,γ) ∩ B_(α,β,γ), (A ∩ B)_(α+,β,γ) = A_(α+,β,γ) ∩ B_(α+,β,γ),
(A ∩ B)_(α,β+,γ) = A_(α,β+,γ) ∩ B_(α,β+,γ), (A ∩ B)_(α,β,γ+) = A_(α,β,γ+) ∩ B_(α,β,γ+),
(A ∩ B)_(α+,β+,γ) = A_(α+,β+,γ) ∩ B_(α+,β+,γ), (A ∩ B)_(α+,β,γ+) = A_(α+,β,γ+) ∩ B_(α+,β,γ+),
(A ∩ B)_(α,β+,γ+) = A_(α,β+,γ+) ∩ B_(α,β+,γ+), (A ∩ B)_(α+,β+,γ+) = A_(α+,β+,γ+) ∩ B_(α+,β+,γ+);
(6) (A ∪ B)_(α,β,γ) ⊇ A_(α,β,γ) ∪ B_(α,β,γ), (A ∪ B)_(α+,β,γ) ⊇ A_(α+,β,γ) ∪ B_(α+,β,γ),
(A ∪ B)_(α,β+,γ) ⊇ A_(α,β+,γ) ∪ B_(α,β+,γ), (A ∪ B)_(α,β,γ+) ⊇ A_(α,β,γ+) ∪ B_(α,β,γ+),
(A ∪ B)_(α+,β+,γ) ⊇ A_(α+,β+,γ) ∪ B_(α+,β+,γ), (A ∪ B)_(α+,β,γ+) ⊇ A_(α+,β,γ+) ∪ B_(α+,β,γ+),
(A ∪ B)_(α,β+,γ+) ⊇ A_(α,β+,γ+) ∪ B_(α,β+,γ+), (A ∪ B)_(α+,β+,γ+) ⊇ A_(α+,β+,γ+) ∪ B_(α+,β+,γ+);
(7) (⋂_{t∈T} A_t)_(α,β,γ) = ⋂_{t∈T} (A_t)_(α,β,γ), (⋂_{t∈T} A_t)_(α+,β,γ) = ⋂_{t∈T} (A_t)_(α+,β,γ), (⋂_{t∈T} A_t)_(α,β+,γ) = ⋂_{t∈T} (A_t)_(α,β+,γ),
(⋂_{t∈T} A_t)_(α,β,γ+) = ⋂_{t∈T} (A_t)_(α,β,γ+), (⋂_{t∈T} A_t)_(α+,β+,γ) = ⋂_{t∈T} (A_t)_(α+,β+,γ), (⋂_{t∈T} A_t)_(α+,β,γ+) = ⋂_{t∈T} (A_t)_(α+,β,γ+),
(⋂_{t∈T} A_t)_(α,β+,γ+) = ⋂_{t∈T} (A_t)_(α,β+,γ+), (⋂_{t∈T} A_t)_(α+,β+,γ+) = ⋂_{t∈T} (A_t)_(α+,β+,γ+);
(8) (⋃_{t∈T} A_t)_(α,β,γ) ⊇ ⋃_{t∈T} (A_t)_(α,β,γ), (⋃_{t∈T} A_t)_(α+,β,γ) ⊇ ⋃_{t∈T} (A_t)_(α+,β,γ), (⋃_{t∈T} A_t)_(α,β+,γ) ⊇ ⋃_{t∈T} (A_t)_(α,β+,γ),
(⋃_{t∈T} A_t)_(α,β,γ+) ⊇ ⋃_{t∈T} (A_t)_(α,β,γ+), (⋃_{t∈T} A_t)_(α+,β+,γ) ⊇ ⋃_{t∈T} (A_t)_(α+,β+,γ), (⋃_{t∈T} A_t)_(α+,β,γ+) ⊇ ⋃_{t∈T} (A_t)_(α+,β,γ+),
(⋃_{t∈T} A_t)_(α,β+,γ+) ⊇ ⋃_{t∈T} (A_t)_(α,β+,γ+), (⋃_{t∈T} A_t)_(α+,β+,γ+) ⊇ ⋃_{t∈T} (A_t)_(α+,β+,γ+);
(9) ⋂_{t∈T} A_(α_t,β_t,γ_t) = A_(α,β,γ), ⋂_{t∈T} A_(α_t+,β_t,γ_t) = A_(α+,β,γ), ⋂_{t∈T} A_(α_t,β_t+,γ_t) = A_(α,β+,γ), ⋂_{t∈T} A_(α_t,β_t,γ_t+) = A_(α,β,γ+),
⋂_{t∈T} A_(α_t+,β_t+,γ_t) = A_(α+,β+,γ), ⋂_{t∈T} A_(α_t+,β_t,γ_t+) = A_(α+,β,γ+), ⋂_{t∈T} A_(α_t,β_t+,γ_t+) = A_(α,β+,γ+), ⋂_{t∈T} A_(α_t+,β_t+,γ_t+) = A_(α+,β+,γ+),
where α = ⋁_{t∈T} α_t, β = ⋀_{t∈T} β_t, γ = ⋀_{t∈T} γ_t.
Proof. 
The proofs of (1)–(4) follow directly from Definition 9. For the remaining parts, recall that
A ∪ B = {⟨x, max{T_A^j(x), T_B^j(x)}, min{I_A^j(x), I_B^j(x)}, min{F_A^j(x), F_B^j(x)}⟩},
A ∩ B = {⟨x, min{T_A^j(x), T_B^j(x)}, max{I_A^j(x), I_B^j(x)}, max{F_A^j(x), F_B^j(x)}⟩},
where j = 1, 2, …, l(x : A, B), and
⋃_{t∈T} A_t = {⟨x, sup_{t∈T}{T_{A_t}^j(x)}, inf_{t∈T}{I_{A_t}^j(x)}, inf_{t∈T}{F_{A_t}^j(x)}⟩},
⋂_{t∈T} A_t = {⟨x, inf_{t∈T}{T_{A_t}^j(x)}, sup_{t∈T}{I_{A_t}^j(x)}, sup_{t∈T}{F_{A_t}^j(x)}⟩},
for j = 1, 2, …, l, where l = max{l(x : A_t) | t ∈ T}.
(5) If x ∈ (A ∩ B)_(α,β,γ), then min{T_A^j(x), T_B^j(x)} ≥ α, max{I_A^j(x), I_B^j(x)} ≤ β and max{F_A^j(x), F_B^j(x)} ≤ γ; that is, T_A^j(x) ≥ α and T_B^j(x) ≥ α, I_A^j(x) ≤ β and I_B^j(x) ≤ β, F_A^j(x) ≤ γ and F_B^j(x) ≤ γ. Thus x ∈ A_(α,β,γ) and x ∈ B_(α,β,γ), and hence x ∈ A_(α,β,γ) ∩ B_(α,β,γ). Conversely, if x ∈ A_(α,β,γ) ∩ B_(α,β,γ), then T_A^j(x) ≥ α, I_A^j(x) ≤ β, F_A^j(x) ≤ γ and T_B^j(x) ≥ α, I_B^j(x) ≤ β, F_B^j(x) ≤ γ, so min{T_A^j(x), T_B^j(x)} ≥ α, max{I_A^j(x), I_B^j(x)} ≤ β and max{F_A^j(x), F_B^j(x)} ≤ γ, i.e., x ∈ (A ∩ B)_(α,β,γ). Therefore (A ∩ B)_(α,β,γ) = A_(α,β,γ) ∩ B_(α,β,γ) for j = 1, 2, …, l(x : A, B); the other identities in (5) are proved in the same way.
(6) If x ∈ A_(α,β,γ) ∪ B_(α,β,γ), then x ∈ A_(α,β,γ) or x ∈ B_(α,β,γ); that is, T_A^j(x) ≥ α, I_A^j(x) ≤ β, F_A^j(x) ≤ γ, or T_B^j(x) ≥ α, I_B^j(x) ≤ β, F_B^j(x) ≤ γ. In either case max{T_A^j(x), T_B^j(x)} ≥ α, min{I_A^j(x), I_B^j(x)} ≤ β and min{F_A^j(x), F_B^j(x)} ≤ γ, so x ∈ (A ∪ B)_(α,β,γ). Therefore A_(α,β,γ) ∪ B_(α,β,γ) ⊆ (A ∪ B)_(α,β,γ) for j = 1, 2, …, l(x : A, B).
(7) If x ∈ (⋂_{t∈T} A_t)_(α,β,γ), then inf_{t∈T}{T_{A_t}^j(x)} ≥ α, sup_{t∈T}{I_{A_t}^j(x)} ≤ β and sup_{t∈T}{F_{A_t}^j(x)} ≤ γ; that is, T_{A_t}^j(x) ≥ α, I_{A_t}^j(x) ≤ β, F_{A_t}^j(x) ≤ γ for all t ∈ T. Thus x ∈ (A_t)_(α,β,γ) for all t ∈ T and hence x ∈ ⋂_{t∈T}(A_t)_(α,β,γ). Conversely, for any x ∈ ⋂_{t∈T}(A_t)_(α,β,γ), we have x ∈ (A_t)_(α,β,γ) for all t ∈ T, i.e., T_{A_t}^j(x) ≥ α, I_{A_t}^j(x) ≤ β, F_{A_t}^j(x) ≤ γ for all t ∈ T. Thus
inf_{t∈T}{T_{A_t}^j(x)} ≥ α, sup_{t∈T}{I_{A_t}^j(x)} ≤ β, sup_{t∈T}{F_{A_t}^j(x)} ≤ γ.
Hence x ∈ (⋂_{t∈T} A_t)_(α,β,γ). Therefore (⋂_{t∈T} A_t)_(α,β,γ) = ⋂_{t∈T}(A_t)_(α,β,γ) for j = 1, 2, …, l, where l = max{l(x : A_t) | t ∈ T}.
(8) The proof of (8) is similar to that of (6).
(9) If x ∈ ⋂_{t∈T} A_(α_t,β_t,γ_t), then x ∈ A_(α_t,β_t,γ_t) for all t ∈ T; that is,
T_A^j(x) ≥ α_t, I_A^j(x) ≤ β_t, F_A^j(x) ≤ γ_t for all t ∈ T.
Thus T_A^j(x) ≥ ⋁_{t∈T} α_t, I_A^j(x) ≤ ⋀_{t∈T} β_t, F_A^j(x) ≤ ⋀_{t∈T} γ_t; that is, T_A^j(x) ≥ α, I_A^j(x) ≤ β, F_A^j(x) ≤ γ, so x ∈ A_(α,β,γ). Conversely, if x ∈ A_(α,β,γ), then T_A^j(x) ≥ α = ⋁_{t∈T} α_t, I_A^j(x) ≤ β = ⋀_{t∈T} β_t, F_A^j(x) ≤ γ = ⋀_{t∈T} γ_t, so T_A^j(x) ≥ α_t, I_A^j(x) ≤ β_t, F_A^j(x) ≤ γ_t for all t ∈ T; thus x ∈ A_(α_t,β_t,γ_t) for all t ∈ T and hence x ∈ ⋂_{t∈T} A_(α_t,β_t,γ_t). Therefore ⋂_{t∈T} A_(α_t,β_t,γ_t) = A_(α,β,γ) for j = 1, 2, …, l(x : A); the remaining identities in (9) are proved analogously. □
Remark 3.
In property (6), the inclusion (A ∪ B)_(α,β,γ) ⊇ A_(α,β,γ) ∪ B_(α,β,γ) cannot be strengthened to “=”. For example, let X = {x_1, x_2, x_3} and A, B ∈ SVNMS(X) be given by
A = {⟨x_1, (0.5, 0.3), (0.1, 0.1), (0.7, 0.8)⟩, ⟨x_2, (0.7, 0.68, 0.62), (0.3, 0.45, 0.5), (0.34, 0.28, 0.49)⟩, ⟨x_3, (0.67, 0.5, 0.3), (0.2, 0.3, 0.4), (0.4, 0.5, 0.7)⟩},
B = {⟨x_1, 0.75, 0.2, 0.15⟩, ⟨x_2, (0.43, 0.37, 0.28), (0.5, 0.2, 0.3), (0.7, 0.8, 0.9)⟩, ⟨x_3, (1.0, 0.86, 0.79), (0.01, 0.1, 0.2), (0.0, 0.3, 0.2)⟩}.
If we choose α = 0.4, β = 0.3, γ = 0.5, then
A_(α,β,γ) = {⟨x_1, (1, 0), (1, 1), (0, 0)⟩, ⟨x_2, (1, 1, 1), (1, 0, 0), (1, 1, 1)⟩, ⟨x_3, (1, 1, 0), (1, 1, 0), (1, 1, 0)⟩},
B_(α,β,γ) = {⟨x_1, 1, 1, 1⟩, ⟨x_2, (1, 0, 0), (0, 1, 1), (0, 0, 0)⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 1, 1)⟩},
(A ∪ B)_(α,β,γ) = {⟨x_1, (1, 0), (1, 1), (1, 0)⟩, ⟨x_2, (1, 1, 1), (1, 1, 1), (1, 1, 1)⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 1, 1)⟩},
A_(α,β,γ) ∪ B_(α,β,γ) = {⟨x_1, (1, 0), (1, 1), (0, 0)⟩, ⟨x_2, (1, 1, 1), (0, 0, 0), (0, 0, 0)⟩, ⟨x_3, (1, 1, 1), (1, 1, 0), (1, 1, 0)⟩}.
Obviously, (A ∪ B)_(α,β,γ) ≠ A_(α,β,γ) ∪ B_(α,β,γ).
In order to get the decomposition theorem of SVNMS, we also need to introduce the following important concepts.
Definition 10.
Let L = {(α, β, γ) | α, β, γ ∈ [0, 1], 0 ≤ α + β + γ ≤ 3}, and define (α_1, β_1, γ_1) ≤ (α_2, β_2, γ_2) ⇔ α_1 ≤ α_2, β_1 ≥ β_2, γ_1 ≥ γ_2. Then L is a complete lattice, with greatest element (1, 0, 0) and smallest element (0, 1, 1).
Definition 11.
Let (α, β, γ) ∈ L and B ∈ 2^X, and write A = (α, β, γ)B, where for any x ∈ X,
T_A^j(x) = α if x ∈ B and T_A^j(x) = 0 if x ∉ B; I_A^j(x) = β if x ∈ B and I_A^j(x) = 1 if x ∉ B; F_A^j(x) = γ if x ∈ B and F_A^j(x) = 1 if x ∉ B.
Then A = {⟨x, T_A^j(x), I_A^j(x), F_A^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : A)} is a SVNMS on the universe X, and we give the following definition:
Definition 12.
Suppose A ∈ SVNMS(X) and (α, β, γ) ∈ L. The dot product (truncated product) of (α, β, γ) and A is defined as
((α, β, γ)·A)(x) = {⟨x, α ∧ T_A^j(x), β ∨ I_A^j(x), γ ∨ F_A^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : A)}.
That is, (α, β, γ)·A ∈ SVNMS(X).
Now, we can discuss the decomposition theorem of SVNMS based on the definitions and operational properties above.
Theorem 3.
Let A be a SVNMS. Then for any (α, β, γ) ∈ L, we have
A = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ) = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α+,β,γ) = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β+,γ)
= ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ+) = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α+,β+,γ) = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α+,β,γ+)
= ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β+,γ+) = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α+,β+,γ+).
Proof. 
To prove A = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ), we only need to show that A(x) = (⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ))(x) for all x ∈ X. By the definition of the union over L, the j-th truth component of the right-hand side is ⋁_{α∈[0,1]} (α ∧ (A_α)^j(x)), where (A_α)^j(x) ∈ {0, 1} is the characteristic value of the α-cut, while the j-th indeterminacy and falsity components are the corresponding infima over β and γ. Since T_A^j(x) ∈ [0, 1], we have ⋁_{α∈[0,1]} (α ∧ (A_α)^j(x)) = [⋁_{α∈[0,T_A^j(x)]} (α ∧ (A_α)^j(x))] ∨ [⋁_{α∈[T_A^j(x),1]} (α ∧ (A_α)^j(x))]. Indeed, for α ≤ T_A^j(x) we have (A_α)^j(x) = 1, and otherwise (A_α)^j(x) = 0; thus ⋁_{α∈[0,1]} (α ∧ (A_α)^j(x)) = ⋁_{α∈[0,T_A^j(x)]} α = T_A^j(x). Similarly, the j-th indeterminacy component of (α, β, γ)A_(α,β,γ) at x equals β when β ≥ I_A^j(x) and equals 1 otherwise, so its infimum over β ∈ [0, 1] is ⋀_{β∈[I_A^j(x),1]} β = I_A^j(x); in the same way the infimum over γ is ⋀_{γ∈[F_A^j(x),1]} γ = F_A^j(x).
Therefore, (⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ))(x) = (T_A^j(x), I_A^j(x), F_A^j(x)) = A(x) for j = 1, 2, …, l(x : A). □
Next, we use an example to illustrate the idea of the decomposition theorem of SVNMS.
Example 1.
Let X = {x_1, x_2, x_3} and A ∈ SVNMS(X) be given by
A = {⟨x_1, (0.6, 0.4), (0.5, 0.3), (0.2, 0.3)⟩, ⟨x_2, 0.2, 0.4, 0.7⟩, ⟨x_3, (0.8, 0.6, 0.5), (0.2, 0.2, 0.3), (0.1, 0.3, 0.4)⟩}.
We show how A can be represented by 180 special SVNMSs using (α, β, γ)-cut sets. According to Definitions 9, 11 and 12, we have:
A_(α,β,γ) = {⟨x_1, (1, 1), (0, 0), (0, 0)⟩, ⟨x_2, 1, 0, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 0), (1, 0, 0)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0.2, 0.2), (1, 1), (1, 1)⟩, ⟨x_2, 0.2, 1, 1⟩, ⟨x_3, (0.2, 0.2, 0.2), (0.2, 0.2, 1), (0.1, 1, 1)⟩},
where 0 ≤ α ≤ 0.2, 0 ≤ β ≤ 0.2, 0 ≤ γ ≤ 0.1;
A_(α,β,γ) = {⟨x_1, (1, 1), (0, 1), (1, 0)⟩, ⟨x_2, 0, 0, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 0, 0)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0.4, 0.4), (1, 0.3), (0.2, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (0.4, 0.4, 0.4), (0.3, 0.3, 0.3), (0.2, 1, 1)⟩},
where 0.2 < α ≤ 0.4, 0.2 < β ≤ 0.3, 0.1 < γ ≤ 0.2;
A_(α,β,γ) = {⟨x_1, (1, 0), (0, 1), (1, 1)⟩, ⟨x_2, 0, 1, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 1, 0)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0.5, 0), (1, 0.4), (0.3, 0.3)⟩, ⟨x_2, 0, 0.4, 1⟩, ⟨x_3, (0.5, 0.5, 0.5), (0.4, 0.4, 0.4), (0.3, 0.3, 1)⟩},
where 0.4 < α ≤ 0.5, 0.3 < β ≤ 0.4, 0.2 < γ ≤ 0.3;
A_(α,β,γ) = {⟨x_1, (1, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 0⟩, ⟨x_3, (1, 1, 0), (1, 1, 1), (1, 1, 1)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0.6, 0), (0.5, 0.5), (0.4, 0.4)⟩, ⟨x_2, 0, 0.5, 1⟩, ⟨x_3, (0.6, 0.6, 0), (0.5, 0.5, 0.5), (0.4, 0.4, 0.4)⟩},
where 0.5 < α ≤ 0.6, 0.4 < β ≤ 0.5, 0.3 < γ ≤ 0.4;
A_(α,β,γ) = {⟨x_1, (0, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (1, 0, 0), (1, 1, 1), (1, 1, 1)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0, 0), (1, 1), (0.7, 0.7)⟩, ⟨x_2, 0, 1, 0.7⟩, ⟨x_3, (0.8, 0, 0), (1, 1, 1), (0.7, 0.7, 0.7)⟩},
where 0.6 < α ≤ 0.8, 0.5 < β ≤ 1, 0.4 < γ ≤ 0.7;
A_(α,β,γ) = {⟨x_1, (0, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (0, 0, 0), (1, 1, 1), (1, 1, 1)⟩},
(α, β, γ)A_(α,β,γ) = {⟨x_1, (0, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (0, 0, 0), (1, 1, 1), (1, 1, 1)⟩},
where 0.8 < α ≤ 1, 0.5 < β ≤ 1, 0.7 < γ ≤ 1.
Similarly, we can obtain the remaining special SVNMSs. It is then easy to see that
A = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ).
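The following sketch is our own numerical check of Theorem 3 on the SVNMS A of Example 1 (not part of the paper): scaled_cut combines Definition 9 with Definitions 11 and 12 to build the special SVNMS (α, β, γ)A_(α,β,γ) position by position, and the recombination over a finite grid of parameter values, which suffices here because every membership value of A occurs in the grid, recovers A exactly.

```python
# Numerical check of the decomposition theorem on Example 1.
A = {
    "x1": ([0.6, 0.4], [0.5, 0.3], [0.2, 0.3]),
    "x2": ([0.2], [0.4], [0.7]),
    "x3": ([0.8, 0.6, 0.5], [0.2, 0.2, 0.3], [0.1, 0.3, 0.4]),
}

def scaled_cut(A, a, b, c):
    """(a, b, c)A_(a,b,c): members get (a, b, c), non-members get (0, 1, 1)."""
    return {x: ([a if t >= a else 0.0 for t in T],
                [b if i <= b else 1.0 for i in I],
                [c if f <= c else 1.0 for f in F])
            for x, (T, I, F) in A.items()}

# Recombine: supremum of the truth parts, infimum of the indeterminacy and
# falsity parts, over all (a, b, c) in the grid of values occurring in A.
grid = sorted({v for T, I, F in A.values() for v in T + I + F})
rec = {x: ([0.0] * len(T), [1.0] * len(T), [1.0] * len(T)) for x, (T, I, F) in A.items()}
for a in grid:
    for b in grid:
        for c in grid:
            piece = scaled_cut(A, a, b, c)
            for x in A:
                for j in range(len(A[x][0])):
                    rec[x][0][j] = max(rec[x][0][j], piece[x][0][j])
                    rec[x][1][j] = min(rec[x][1][j], piece[x][1][j])
                    rec[x][2][j] = min(rec[x][2][j], piece[x][2][j])

print(rec == A)  # True: the recombined membership values coincide with A
```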
Definition 13.
Suppose H : L → 2^X, (λ, μ, ω) ↦ H(λ, μ, ω), is a mapping. H is called a neutrosophic nested set on X if it satisfies the following conditions:
(1) (λ_1, μ_1, ω_1) ≤ (λ_2, μ_2, ω_2) ⇒ H(λ_1, μ_1, ω_1) ⊇ H(λ_2, μ_2, ω_2);
(2) ⋂_{t∈T} H(λ_t, μ_t, ω_t) ⊆ ⋂{H(λ, μ, ω) | λ < ⋁_{t∈T} λ_t, μ > ⋀_{t∈T} μ_t, ω > ⋀_{t∈T} ω_t}.
Remark 4.
Let SVNL(X) denote the set of all neutrosophic nested sets on X. If A ∈ SVNMS(X), then the family of all (α, β, γ)-cut sets of A is a neutrosophic nested set.
Theorem 4.
Let A ∈ SVNMS(X) and let H : L → 2^X, (α, β, γ) ↦ H(α, β, γ), satisfy
A_(α+,β+,γ+) ⊆ H(α, β, γ) ⊆ A_(α,β,γ) for any (α, β, γ) ∈ L. Then:
(1) A = ⋃_{(α,β,γ)∈L} (α, β, γ)H(α, β, γ);
(2) α_1 < α_2, β_1 > β_2, γ_1 > γ_2 ⇒ H(α_1, β_1, γ_1) ⊇ H(α_2, β_2, γ_2), where α_1, α_2, β_1, β_2, γ_1, γ_2 ∈ [0, 1], 0 ≤ α_1 + β_1 + γ_1 ≤ 3 and 0 ≤ α_2 + β_2 + γ_2 ≤ 3;
(3) (I) A_(α,β,γ) = ⋂{H(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3},
(II) A_(α+,β+,γ+) = ⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3};
(4) ⋂_{t∈T} H(α_t, β_t, γ_t) ⊆ ⋂{H(α, β, γ) | α < ⋁_{t∈T} α_t, β > ⋀_{t∈T} β_t, γ > ⋀_{t∈T} γ_t}.
Proof. 
 
(1) Since A_(α+,β+,γ+) ⊆ H(α, β, γ) ⊆ A_(α,β,γ) for all (α, β, γ) ∈ L, we have
A = ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α+,β+,γ+) ⊆ ⋃_{(α,β,γ)∈L} (α, β, γ)H(α, β, γ) ⊆ ⋃_{(α,β,γ)∈L} (α, β, γ)A_(α,β,γ) = A.
Thus A = ⋃_{(α,β,γ)∈L} (α, β, γ)H(α, β, γ).
(2) From α_1 < α_2, β_1 > β_2, γ_1 > γ_2 we obtain
H(α_1, β_1, γ_1) ⊇ A_(α_1+,β_1+,γ_1+) ⊇ A_(α_2,β_2,γ_2) ⊇ H(α_2, β_2, γ_2).
(3) (I) Let ℱ = {(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3}. Then
⋁_{(λ,μ,ω)∈ℱ} (λ, μ, ω) = (α, β, γ) in the lattice L.
So ⋂{H(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3} ⊆ ⋂{A_(λ,μ,ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3} = A_(α,β,γ). On the other hand, if x ∈ A_(α,β,γ), then T_A^j(x) ≥ α, I_A^j(x) ≤ β, F_A^j(x) ≤ γ. Thus T_A^j(x) ≥ α > λ, I_A^j(x) ≤ β < μ, F_A^j(x) ≤ γ < ω; that is, T_A^j(x) > λ, I_A^j(x) < μ, F_A^j(x) < ω. Thus x ∈ A_(λ+,μ+,ω+) and hence x ∈ H(λ, μ, ω) for every (λ, μ, ω) ∈ ℱ. Therefore x ∈ ⋂{H(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3}. Combining both inclusions, we obtain A_(α,β,γ) = ⋂{H(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3}.
(II) Since H(λ, μ, ω) ⊆ A_(λ,μ,ω) ⊆ A_(α+,β+,γ+) for any λ > α, μ < β, ω < γ with 0 ≤ λ + μ + ω ≤ 3, we have
⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3} ⊆ A_(α+,β+,γ+). On the other hand, if x ∈ A_(α+,β+,γ+), then T_A^j(x) > α, I_A^j(x) < β, F_A^j(x) < γ. It follows that there exist λ > α, μ < β, ω < γ with 0 ≤ λ + μ + ω ≤ 3 such that T_A^j(x) > λ > α, I_A^j(x) < μ < β, F_A^j(x) < ω < γ; that is, x ∈ A_(λ+,μ+,ω+). Since A_(λ+,μ+,ω+) ⊆ H(λ, μ, ω), we get x ∈ H(λ, μ, ω), and thus x ∈ ⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3}. Hence
A_(α+,β+,γ+) ⊆ ⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3}. Therefore
A_(α+,β+,γ+) = ⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3}.
(4)
From A_(α+,β+,γ+) ⊆ H(α, β, γ) ⊆ A_(α,β,γ), we have ⋂_{t∈T} H(α_t, β_t, γ_t) ⊆ ⋂_{t∈T} A_(α_t,β_t,γ_t) = A_(α,β,γ) for α = ⋁_{t∈T} α_t, β = ⋀_{t∈T} β_t, γ = ⋀_{t∈T} γ_t. Applying (3)(I), we get
A_(α,β,γ) = ⋂{H(α′, β′, γ′) | α′ < α, β′ > β, γ′ > γ, 0 ≤ α′ + β′ + γ′ ≤ 3}.
Therefore,
⋂_{t∈T} H(α_t, β_t, γ_t) ⊆ ⋂{H(α′, β′, γ′) | α′ < ⋁_{t∈T} α_t, β′ > ⋀_{t∈T} β_t, γ′ > ⋀_{t∈T} γ_t}. □
Remark 5.
(1) The significance of Theorem 3 (the decomposition theorem): a SVNMS can be recovered from the neutrosophic nested set formed by its own cut sets or strong cut sets. (2) The significance of Theorem 4 (the generalized decomposition theorem): any family of sets sandwiched between the strong cut sets and the cut sets of a SVNMS is a neutrosophic nested set, and such a nested family also recovers the original SVNMS.

4.2. Representation Theorem of SVNMS

According to the relationship between the decomposition theorem and the representation theorem, every neutrosophic nested set can be combined into a single-valued neutrosophic multiset, and the cut sets or strong cut sets of this multiset can in turn be sandwiched by the original neutrosophic nested set. In other words, a family of special single-valued neutrosophic multisets can completely describe and represent a single-valued neutrosophic multiset.
In this section, the representation theorem of SVNMSs is established on the basis of the decomposition theorem.
Theorem 5.
Let H ∈ SVNL(X), let A = ⋃_{(α,β,γ)∈L} (α, β, γ)H(α, β, γ) ∈ SVNMS(X) (see the proof below), and let (α, β, γ) ∈ L. Then:
• (I) A_(α,β,γ) = ⋂{H(λ, μ, ω) | λ < α, μ > β, ω > γ, 0 ≤ λ + μ + ω ≤ 3};
• (II) A_(α+,β+,γ+) = ⋃{H(λ, μ, ω) | λ > α, μ < β, ω < γ, 0 ≤ λ + μ + ω ≤ 3}.
Proof. 
Since H(α, β, γ) ∈ 2^X for all (α, β, γ) ∈ L and (α, β, γ)H(α, β, γ) ∈ SVNMS(X), we have ⋃_{(α,β,γ)∈L} (α, β, γ)H(α, β, γ) ∈ SVNMS(X); denote this SVNMS by A. By Theorem 4, it suffices to prove that
H : L → 2^X satisfies A_(α+,β+,γ+) ⊆ H(α, β, γ) ⊆ A_(α,β,γ).
Let x ∈ A_(α+,β+,γ+). Then T_A^j(x) > α, I_A^j(x) < β and F_A^j(x) < γ; that is, ⋁_{(λ,μ,ω)∈L}[λ ∧ (H(λ, μ, ω))^j(x)] > α, while the corresponding infima that yield I_A^j(x) and F_A^j(x) are smaller than β and γ, respectively. It follows that there exists (λ_0, μ_0, ω_0) ∈ L with λ_0 ∧ (H(λ_0, μ_0, ω_0))^j(x) > α and whose indeterminacy and falsity contributions are smaller than β and γ; hence λ_0 > α, μ_0 < β, ω_0 < γ and (H(λ_0, μ_0, ω_0))^j(x) = 1, i.e., (λ_0, μ_0, ω_0) > (α, β, γ) in L and x ∈ H(λ_0, μ_0, ω_0). By the nestedness of H, H(λ_0, μ_0, ω_0) ⊆ H(α, β, γ), so x ∈ H(α, β, γ). Conversely, let x ∈ H(α, β, γ), so that (H(α, β, γ))^j(x) = 1. Then
T_A^j(x) = ⋁_{(λ,μ,ω)∈L}[λ ∧ (H(λ, μ, ω))^j(x)] ≥ α ∧ (H(α, β, γ))^j(x) = α, and in the same way I_A^j(x) ≤ β and F_A^j(x) ≤ γ; that is, x ∈ A_(α,β,γ). Therefore A_(α+,β+,γ+) ⊆ H(α, β, γ) ⊆ A_(α,β,γ) for j = 1, 2, …, l(x : A). □
Theorem 5 (Representation Theorem) provides an effective method for constructing a SVNMS:
Given H ∈ SVNL(X), we can construct a SVNMS with the following membership function:
A : X → L, A(x) = ⋁{(α, β, γ) ∈ L | x ∈ H(α, β, γ)}, x ∈ X,
where the supremum is taken in the lattice L (componentwise, ⋁ for the α-component and ⋀ for the β- and γ-components).
Example 2.
Suppose X = {x_1, x_2, x_3}. The neutrosophic nested sets on X are given as follows:
H(α, β, γ) = {⟨x_1, (1, 1), (0, 0), (0, 0)⟩, ⟨x_2, 1, 0, 0⟩, ⟨x_3, (1, 1, 1), (0, 0, 0), (0, 0, 0)⟩},
where α = β = γ = 0;
H(α, β, γ) = {⟨x_1, (1, 1), (0, 0), (0, 0)⟩, ⟨x_2, 1, 0, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 0), (1, 0, 0)⟩},
where 0 < α ≤ 0.2, 0 < β ≤ 0.2, 0 < γ ≤ 0.1;
H(α, β, γ) = {⟨x_1, (1, 1), (0, 1), (1, 0)⟩, ⟨x_2, 0, 0, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 0, 0)⟩},
where 0.2 < α ≤ 0.4, 0.2 < β ≤ 0.3, 0.1 < γ ≤ 0.2;
H(α, β, γ) = {⟨x_1, (1, 0), (0, 1), (1, 1)⟩, ⟨x_2, 0, 1, 0⟩, ⟨x_3, (1, 1, 1), (1, 1, 1), (1, 1, 0)⟩},
where 0.4 < α ≤ 0.5, 0.3 < β ≤ 0.4, 0.2 < γ ≤ 0.3;
H(α, β, γ) = {⟨x_1, (1, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 0⟩, ⟨x_3, (1, 1, 0), (1, 1, 1), (1, 1, 1)⟩},
where 0.5 < α ≤ 0.6, 0.4 < β ≤ 0.5, 0.3 < γ ≤ 0.4;
H(α, β, γ) = {⟨x_1, (0, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (1, 0, 0), (1, 1, 1), (1, 1, 1)⟩},
where 0.6 < α ≤ 0.8, 0.5 < β ≤ 1, 0.4 < γ ≤ 0.7;
H(α, β, γ) = {⟨x_1, (0, 0), (1, 1), (1, 1)⟩, ⟨x_2, 0, 1, 1⟩, ⟨x_3, (0, 0, 0), (1, 1, 1), (1, 1, 1)⟩},
where 0.8 < α ≤ 1, 0.5 < β ≤ 1, 0.7 < γ ≤ 1.
Similarly, we can give the remaining neutrosophic nested sets.
Then, the SVNMS A determined by H has the following membership function:
((T_A^1(x_1), T_A^2(x_1)), (I_A^1(x_1), I_A^2(x_1)), (F_A^1(x_1), F_A^2(x_1))) = (⋁{α ∈ [0, 1] | x_1 ∈ H(α, β, γ)}, ⋀{β ∈ [0, 1] | x_1 ∈ H(α, β, γ)}, ⋀{γ ∈ [0, 1] | x_1 ∈ H(α, β, γ)}) = ((0.6, 0.4), (0.5, 0.3), (0.2, 0.3));
(T_A^1(x_2), I_A^1(x_2), F_A^1(x_2)) = (⋁{α ∈ [0, 1] | x_2 ∈ H(α, β, γ)}, ⋀{β ∈ [0, 1] | x_2 ∈ H(α, β, γ)}, ⋀{γ ∈ [0, 1] | x_2 ∈ H(α, β, γ)}) = (0.2, 0.4, 0.7);
((T_A^1(x_3), T_A^2(x_3), T_A^3(x_3)), (I_A^1(x_3), I_A^2(x_3), I_A^3(x_3)), (F_A^1(x_3), F_A^2(x_3), F_A^3(x_3))) = (⋁{α ∈ [0, 1] | x_3 ∈ H(α, β, γ)}, ⋀{β ∈ [0, 1] | x_3 ∈ H(α, β, γ)}, ⋀{γ ∈ [0, 1] | x_3 ∈ H(α, β, γ)}) = ((0.8, 0.6, 0.5), (0.2, 0.2, 0.3), (0.1, 0.3, 0.4)).
Therefore,
A = {⟨x_1, (0.6, 0.4), (0.5, 0.3), (0.2, 0.3)⟩, ⟨x_2, 0.2, 0.4, 0.7⟩, ⟨x_3, (0.8, 0.6, 0.5), (0.2, 0.2, 0.3), (0.1, 0.3, 0.4)⟩}.

5. New Similarity Measure between SVNMSs

On the basis of the decomposition theorem of SVNMSs, this section presents a new similarity measure between SVNMSs. We then discuss the properties of this new similarity measure and illustrate the concrete algorithm with an example.
Definition 14.
Let M = {⟨x, T_M^j(x), I_M^j(x), F_M^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : M)} and N = {⟨x, T_N^j(x), I_N^j(x), F_N^j(x)⟩ | x ∈ X, j = 1, 2, …, l(x : N)} be two SVNMSs in X, and let V = [0, 1] × [0, 1] × [0, 1]. Then we define a new distance measure between M and N as follows:
D_C(M, N) = ∭_V f(α, β, γ) dV,
where f(α, β, γ) = D_P((α, β, γ)M_(α,β,γ), (α, β, γ)N_(α,β,γ)) and α, β, γ ∈ [0, 1].
Proposition 2.
Let M, N be two SVNMSs in X. Then the following properties (DC1)–(DC4) hold:
• (DC1) 0 ≤ D_C(M, N) ≤ 1;
• (DC2) D_C(M, N) = 0 if and only if M = N;
• (DC3) D_C(M, N) = D_C(N, M);
• (DC4) If Q is a SVNMS in X and M ⊆ N ⊆ Q, then D_C(M, Q) ≤ D_C(M, N) + D_C(N, Q) for P > 0.
According to the relationship between distance measures and similarity measures, we can introduce two distance-based similarity measures between M and N:
S_C1(M, N) = 1 − D_C(M, N),
S_C2(M, N) = (1 − D_C(M, N)) / (1 + D_C(M, N)).
Proposition 3.
Let M, N ∈ SVNMS(X). The distance-based similarity measures S_Cf(M, N) (f = 1, 2) satisfy the following properties (SC1)–(SC4):
• (SC1) 0 ≤ S_Cf(M, N) ≤ 1;
• (SC2) S_Cf(M, N) = 1 if and only if M = N;
• (SC3) S_Cf(M, N) = S_Cf(N, M);
• (SC4) If Q is a SVNMS in X and M ⊆ N ⊆ Q, then S_Cf(M, Q) ≤ S_Cf(M, N) + S_Cf(N, Q).
Proof. 
The proofs of Propositions 2 and 3 are straightforward. □
This method is based on the cut sets and uses the idea of the decomposition theorem to convert the similarity measure between two SVNMSs into similarity measures between the corresponding special SVNMSs. We now use a concrete example to illustrate the algorithm.
Example 3.
Let X = {x_1, x_2} and M, N ∈ SVNMS(X) be given by M = {⟨x_1, (0.7, 0.8), (0.1, 0.2), (0.2, 0.3)⟩, ⟨x_2, (0.5, 0.6), (0.2, 0.3), (0.4, 0.5)⟩} and N = {⟨x_1, (0.5, 0.6), (0.1, 0.2), (0.4, 0.5)⟩, ⟨x_2, (0.6, 0.7), (0.1, 0.2), (0.7, 0.8)⟩}.
According to the values of T_M^j(x_i) and T_N^j(x_i) (i = 1, 2; j = 1, 2), we divide the interval [0, 1] of α into 5 subintervals: [0, 0.5], (0.5, 0.6], (0.6, 0.7], (0.7, 0.8], (0.8, 1]. Similarly, we obtain 4 subintervals of β: [0, 0.1], (0.1, 0.2], (0.2, 0.3], (0.3, 1], and 7 subintervals of γ: [0, 0.2], (0.2, 0.3], (0.3, 0.4], (0.4, 0.5], (0.5, 0.7], (0.7, 0.8], (0.8, 1]. Thus we have 140 interval combinations of α, β and γ; take 0 ≤ α ≤ 0.5, 0.2 < β ≤ 0.3, 0.7 < γ ≤ 0.8 as an example. In this way, for each interval combination we can obtain the corresponding M_(α,β,γ), N_(α,β,γ) and (α, β, γ)M_(α,β,γ), (α, β, γ)N_(α,β,γ). Based on these results, the procedure is as follows:
Step 1. Calculate f(α, β, γ) = D_P((α, β, γ)M_(α,β,γ), (α, β, γ)N_(α,β,γ)) on every interval combination.
Step 2. Use Equation (24) to integrate f(α, β, γ) over V = [0, 1] × [0, 1] × [0, 1], which gives D_C(M, N) = 0.2206.
Step 3. Using Equations (25) and (26), we obtain S_C1(M, N) = 0.7794 and S_C2(M, N) = 0.6385.
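A rough numerical sketch of Steps 1–3 is given below (our own illustration; distance and scaled_cut are the hypothetical helper functions from the earlier sketches, and a midpoint-rule grid over V replaces the exact piecewise integration of Example 3, so for a fine enough grid the result approximates D_C).

```python
def d_c(M, N, p=1, steps=20):
    """Approximate D_C(M, N): midpoint-rule integration of f(a, b, c) over [0, 1]^3."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        for j in range(steps):
            for k in range(steps):
                a, b, c = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                # f(a, b, c) = D_P of the two truncated cuts (Definition 14)
                total += distance(scaled_cut(M, a, b, c), scaled_cut(N, a, b, c), p)
    return total * h ** 3

def s_c1(M, N, p=1):
    return 1 - d_c(M, N, p)

def s_c2(M, N, p=1):
    d = d_c(M, N, p)
    return (1 - d) / (1 + d)
```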

6. Application of New Similarity Measures in Multicriteria Decision-Making Problems

In this section, the new similarity measure is applied to a medical diagnosis problem. Next, we use the typical examples in [14] to verify the feasibility and effectiveness of the new similarity measure proposed in Section 5. Furthermore, we analyze the uniqueness of the new similarity measure by comparing the results with other similarity measures.
Assume that I = {I_1, I_2, I_3, I_4} represents four patients, R = {R_1, R_2, R_3, R_4} = {viral fever, tuberculosis, typhoid, throat disease} represents four diseases, and S = {S_1, S_2, S_3, S_4, S_5} = {temperature, cough, sore throat, headache, body pain} represents five symptoms. In medical diagnosis, in order to obtain a more accurate diagnosis, the doctor collects symptom information for the same patient at different times of the day. Therefore, we use the following SVNMSs to represent the relation between the patients and the symptoms:
I_1 = {⟨S_1, (0.8, 0.6, 0.5), (0.3, 0.2, 0.1), (0.4, 0.2, 0.1)⟩, ⟨S_2, (0.5, 0.4, 0.3), (0.4, 0.4, 0.3), (0.6, 0.3, 0.4)⟩, ⟨S_3, (0.2, 0.1, 0.0), (0.3, 0.2, 0.2), (0.8, 0.7, 0.7)⟩, ⟨S_4, (0.7, 0.6, 0.5), (0.3, 0.2, 0.1), (0.4, 0.3, 0.2)⟩, ⟨S_5, (0.4, 0.3, 0.2), (0.6, 0.5, 0.5), (0.6, 0.4, 0.4)⟩};
I_2 = {⟨S_1, (0.5, 0.4, 0.3), (0.3, 0.3, 0.2), (0.5, 0.4, 0.4)⟩, ⟨S_2, (0.9, 0.8, 0.7), (0.2, 0.1, 0.1), (0.2, 0.1, 0.0)⟩, ⟨S_3, (0.6, 0.5, 0.4), (0.3, 0.2, 0.2), (0.4, 0.3, 0.3)⟩, ⟨S_4, (0.6, 0.4, 0.3), (0.3, 0.1, 0.1), (0.7, 0.7, 0.3)⟩, ⟨S_5, (0.8, 0.7, 0.5), (0.4, 0.3, 0.1), (0.3, 0.2, 0.1)⟩};
I_3 = {⟨S_1, (0.2, 0.1, 0.1), (0.3, 0.2, 0.2), (0.8, 0.7, 0.6)⟩, ⟨S_2, (0.3, 0.2, 0.2), (0.4, 0.2, 0.2), (0.7, 0.6, 0.5)⟩, ⟨S_3, (0.8, 0.8, 0.7), (0.2, 0.2, 0.2), (0.1, 0.1, 0.0)⟩, ⟨S_4, (0.3, 0.2, 0.2), (0.3, 0.3, 0.3), (0.7, 0.6, 0.6)⟩, ⟨S_5, (0.4, 0.4, 0.3), (0.4, 0.3, 0.2), (0.7, 0.7, 0.5)⟩};
I_4 = {⟨S_1, (0.5, 0.5, 0.4), (0.3, 0.2, 0.2), (0.4, 0.4, 0.3)⟩, ⟨S_2, (0.4, 0.3, 0.1), (0.4, 0.3, 0.2), (0.7, 0.5, 0.3)⟩, ⟨S_3, (0.7, 0.1, 0.0), (0.4, 0.3, 0.3), (0.7, 0.7, 0.6)⟩, ⟨S_4, (0.6, 0.5, 0.3), (0.6, 0.2, 0.1), (0.6, 0.4, 0.3)⟩, ⟨S_5, (0.5, 0.1, 0.1), (0.3, 0.3, 0.2), (0.6, 0.5, 0.4)⟩}.
Then, the affiliation between the symptoms and the disease is represented by the following SVNMSs:
R_1 = {⟨S_1, 0.8, 0.1, 0.1⟩, ⟨S_2, 0.2, 0.7, 0.1⟩, ⟨S_3, 0.3, 0.5, 0.2⟩, ⟨S_4, 0.5, 0.3, 0.2⟩, ⟨S_5, 0.5, 0.4, 0.1⟩};
R_2 = {⟨S_1, 0.2, 0.7, 0.1⟩, ⟨S_2, 0.9, 0.0, 0.1⟩, ⟨S_3, 0.7, 0.2, 0.1⟩, ⟨S_4, 0.6, 0.3, 0.1⟩, ⟨S_5, 0.7, 0.2, 0.1⟩};
R_3 = {⟨S_1, 0.5, 0.3, 0.2⟩, ⟨S_2, 0.3, 0.5, 0.2⟩, ⟨S_3, 0.2, 0.7, 0.1⟩, ⟨S_4, 0.2, 0.6, 0.2⟩, ⟨S_5, 0.4, 0.4, 0.2⟩};
R_4 = {⟨S_1, 0.1, 0.7, 0.2⟩, ⟨S_2, 0.3, 0.6, 0.1⟩, ⟨S_3, 0.8, 0.1, 0.1⟩, ⟨S_4, 0.1, 0.8, 0.1⟩, ⟨S_5, 0.1, 0.8, 0.1⟩}.
Then, by Definition 14, we use Equations (24) and (25) to get the similarity S C 1 ( I i , R j ) between each patient I i ( i = 1 , 2 , 3 , 4 ) and disease R j ( j = 1 , 2 , 3 , 4 ) , which are shown in Table 1. Similarly, we use Equations (24) and (26) to get the similarity S C 2 ( I i , R j ) between each patient I i ( i = 1 , 2 , 3 , 4 ) and disease R j ( j = 1 , 2 , 3 , 4 ) , which are shown in Table 2.
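For completeness, the following short sketch (again ours) shows how the similarity tables could be assembled from the helper functions above; patients and diseases are assumed to be dictionaries holding the SVNMS data of I_1–I_4 and R_1–R_4 in the format used in the earlier sketches.

```python
def diagnose(patients, diseases, p=1):
    """For every patient, return the disease with the largest similarity S_C1."""
    result = {}
    for pid, P in patients.items():
        scores = {did: s_c1(P, D, p) for did, D in diseases.items()}
        result[pid] = max(scores, key=scores.get)
    return result
```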
It is well known that the closeness of the relationship between two SVNMSs can be described by their similarity: the greater the similarity, the closer the relationship. As can be seen from Table 1 and Table 2, by comparing the similarities of the four diseases we can determine the disease most similar to each patient and obtain the most realistic diagnosis: patient I_1 suffers from typhoid, patient I_2 suffers from tuberculosis, patient I_3 suffers from throat disease, and patient I_4 also suffers from typhoid.
The Dice similarity measure proposed in [11], applied to this decision-making example, gives the diagnosis that patient I_1 suffers from typhoid, patient I_2 suffers from viral fever, patient I_3 suffers from typhoid, and patient I_4 suffers from tuberculosis. The distance-based similarity measures proposed in [14], applied to the same example, give the diagnosis that patient I_1 suffers from viral fever, patient I_2 suffers from tuberculosis, patient I_3 suffers from typhoid, and patient I_4 suffers from typhoid.
By analyzing and comparing the diagnostic results obtained by the three methods, we find that, with the new similarity measure, the diagnosis of patient I_1 is consistent with [11], while the diagnoses of patients I_2 and I_4 are consistent with [14], which suggests that this method is effective and that its results are close to the actual situation.
According to the above comparative analysis, the method proposed in this paper has the following advantages: (1) The new similarity measure in the SVNMS environment can deal with the indeterminate and inconsistent information that exists in decision-making problems, so it can be used effectively in many practical applications. (2) The new similarity measure is based on cut sets, with the decomposition theorem and the representation theorem as its main ideas and the integral as its main mathematical tool; it therefore has a solid mathematical foundation. (3) This method can make full use of all the information of SVNMSs, and uses the idea of splitting and summing to simplify complex problems, providing a simple and effective way to solve practical problems.

7. Conclusions

This paper first systematically discussed eight properties of the union, intersection and complement of single-valued neutrosophic multisets (SVNMSs), and showed by a counterexample that the law of complementation no longer holds for SVNMSs. Secondly, this paper proposed the notions of cut sets and strong cut sets of SVNMSs and presented their related properties. On the basis of cut sets and strong cut sets, the decomposition theorem and representation theorem of SVNMSs were established and proved. The decomposition theorem realizes the transformation between SVNMSs and special SVNMSs. Thirdly, based on the decomposition theorem, we transformed the similarity between SVNMSs into the similarity between special SVNMSs, and therefore used the integral to give a new method for calculating the similarity between SVNMSs. The notion of the new similarity measures was introduced, and their feasibility and effectiveness in multi-attribute decision-making were verified by a typical example. Furthermore, the uniqueness of the new similarity measure was analyzed by comparing its results with those of other similarity measures. The results obtained have significant meaning for further theoretical research on SVNMSs. As the next research topic, we will explore the fuzzy measure and fuzzy integral of SVNMSs. In the future, we will also discuss the integration of related topics, such as neutrosophic sets (multisets), fuzzy sets (multisets), rough sets, soft sets and algebraic systems (see [30,31,32,37,38,39]).

Author Contributions

All authors contributed equally to this paper. The original idea of the study was proposed by X.Z., who also completed the preparation of the paper; X.Z. and Q.H. conceived and designed the experiment; Q.H. analyzed the experimental data and wrote the paper; the revision and submission of the paper were completed by X.Z. and Q.H.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61573240.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis and Synthetic Analysis; American Research Press: Santa Fe, NM, USA, 1998. [Google Scholar]
  2. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  3. Wang, H.; Smarandache, F.; Sunderraman, R.; Zhang, Y.Q. Interval neutrosophic sets and logic: Theory and applications in computing. Comput. Sci. 2012, 65, 87. [Google Scholar]
  4. Biswas, P.; Pramanik, S.; Giri, B.C. TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Comput. Appl. 2016, 27, 727–737.
  5. Liu, P.; Wang, Y. Multiple attribute decision-making method based on single-valued neutrosophic normalized weighted Bonferroni mean. Neural Comput. Appl. 2014, 25, 2001–2010.
  6. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued neutrosophic environment. Int. J. Gen. Syst. 2013, 42, 386–394.
  7. Ye, J. Single valued neutrosophic cross-entropy for multicriteria decision making problems. Appl. Math. Model. 2014, 38, 1170–1175.
  8. Pramanik, S.; Dalapati, S.; Alam, S.; Roy, T.K. NS-cross entropy-based MAGDM under single-valued neutrosophic set environment. Information 2018, 9, 37.
  9. Ye, J. Single-valued neutrosophic minimum spanning tree and its clustering method. J. Intell. Syst. 2014, 23, 311–324.
  10. Yager, R. On the theory of bags. Int. J. Gen. Syst. 1986, 13, 23–27.
  11. Ye, S.; Ye, J. Dice similarity measure between single valued neutrosophic multisets and its application in medical diagnosis. Neutrosophic Sets Syst. 2014, 6, 48–53.
  12. Miyamoto, S. Fuzzy multisets and their generalizations. Workshop Multisets Process 2000, 353, 225–236.
  13. Miyamoto, S. Multisets and fuzzy multisets as a framework of information systems. Model. Decis. Artif. Intell. 2004, 3131, 27–40.
  14. Ye, S.; Fu, J.; Ye, J. Medical diagnosis using distance-based similarity measures of single valued neutrosophic multisets. Neutrosophic Sets Syst. 2015, 7, 47–52.
  15. Pramanik, S.; Dey, P.P.; Giri, B.C. Hybrid vector similarity measure of single valued refined neutrosophic sets to multi-attribute decision making problems. Neural Comput. Appl. 2015, 28, 1–14.
  16. Fan, C.; Ye, J. The cosine measure of refined-single valued neutrosophic sets and refined-interval neutrosophic sets for multiple attribute decision-making. J. Intell. Fuzzy Syst. 2017, 33, 2281–2289.
  17. Garg, H.; Nancy. Some new biparametric distance measures on single-valued neutrosophic sets with applications to pattern recognition and medical diagnosis. Information 2017, 8, 162.
  18. Ye, J. Single-valued neutrosophic similarity measures based on cotangent function and their application in the fault diagnosis of steam turbine. Soft Comput. 2017, 21, 817–825.
  19. Zhang, X.; Pei, D.; Dai, J. Fuzzy Mathematics and Rough Set Theory; Tsinghua University Press: Beijing, China, 2013; pp. 20–59, 100–204.
  20. Szmidt, E.; Kacprzyk, J. Distances between intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114, 505–518.
  21. Li, M. Cut sets of intuitionistic fuzzy sets. J. Liaoning Norm. Univ. 2002, 1, 28–33.
  22. Yuan, X.; Li, H.; Sun, K. The cut sets, decomposition theorems and representation theorems on intuitionistic fuzzy sets and interval valued fuzzy sets. Sci. China (Inf. Sci.) 2011, 54, 91–110.
  23. Zhang, C.; Guo, J.; Zhao, Z.; Pei, B. Decomposition theorem of interval-valued fuzzy sets and calculation of similarity measure. J. Liaoning Tech. Univ. (Nat. Sci.) 2015, 34, 1312–1315.
  24. Yuan, X.; Li, H.; Lee, E.S. Three new cut sets of fuzzy sets and new theories of fuzzy sets. Comput. Math. Appl. 2009, 57, 691–701.
  25. Grubišić, L.; Kostrykin, V.; Makarov, K.A.; Veselić, K. Representation theorems for indefinite quadratic forms revisited. Mathematika 2013, 59, 169–189.
  26. Alcantud, J.C.R.; Torra, V. Decomposition theorems and extension principles for hesitant fuzzy sets. Inf. Fusion 2018, 41, 48–56.
  27. Li, J.; Li, H. The cut sets, decomposition theorems and representation theorems on R¯-fuzzy sets. Int. J. Inf. Syst. Sci. 2010, 6, 61–71.
  28. Wang, F.; Zhang, C.; Xia, X. Equivalence of the cut sets-based decomposition theorems and representation theorems on intuitionistic fuzzy sets and interval-valued fuzzy sets. Math. Comput. Model. 2013, 57, 1364–1370.
  29. Singh, D.; Alkali, A.; Isah, A. Some application of α-cuts in fuzzy multisets theory. J. Emerg. Trends Comput. Inf. Sci. 2014, 5, 328–335.
  30. Zhang, X.H.; Bo, C.X.; Smarandache, F.; Dai, J.H. New inclusion relation of neutrosophic sets with applications and related lattice structure. Int. J. Mach. Learn. Cybern. 2018, 9, 1753–1763.
  31. Zhang, X.H.; Bo, C.X.; Smarandache, F.; Park, C. New operations of totally dependent-neutrosophic sets and totally dependent-neutrosophic soft sets. Symmetry 2018, 10, 187.
  32. Zhang, X.H.; Liang, X.; Smarandache, F. Neutrosophic duplet semi-group and cancellable neutrosophic triplet groups. Symmetry 2017, 9.
  33. Medina, J.; Ojeda-Aciego, M. Multi-adjoint t-concept lattices. Inf. Sci. 2010, 180, 712–725.
  34. Pozna, C.; Minculete, N.; Precup, R.E.; Kóczy, L.T.; Ballagi, A. Signatures: Definitions, operators and applications to fuzzy modelling. Fuzzy Sets Syst. 2012, 201, 86–104.
  35. Nowaková, J.; Prílepok, M.; Snášel, V. Correction to: Medical image retrieval using vector quantization and fuzzy s-tree. J. Med. Syst. 2017, 41, 18.
  36. Kumar, A.; Kumar, D.; Jarial, S.K. A hybrid clustering method based on improved artificial bee colony and fuzzy C-means algorithm. Int. J. Artif. Intell. 2017, 15, 40–60.
  37. Zhang, X.H. Fuzzy anti-grouped filters and fuzzy normal filters in pseudo-BCI algebras. J. Intell. Fuzzy Syst. 2017, 33, 1767–1774.
  38. Zhang, X.H.; Park, C.; Wu, S.P. Soft set theoretical approach to pseudo-BCI algebras. J. Intell. Fuzzy Syst. 2018, 34, 559–568.
  39. Ma, X.L.; Zhan, J.M.; Ali, M.I. A survey of decision making methods based on two classes of hybrid soft set models. Artif. Intell. Rev. 2018, 49, 511–529.
Table 1. Similarity values of S_C1(Ii, Rj).

        R1 (Viral fever)   R2 (Tuberculosis)   R3 (Typhoid)   R4 (Throat disease)
I1      0.6927             0.6616              0.6934         0.6694
I2      0.6417             0.6632              0.6458         0.6414
I3      0.6896             0.6820              0.6881         0.7011
I4      0.6966             0.6850              0.7156         0.6923
Table 2. Similarity values of S_C2(Ii, Rj).

        R1 (Viral fever)   R2 (Tuberculosis)   R3 (Typhoid)   R4 (Throat disease)
I1      0.5299             0.4943              0.5307         0.5031
I2      0.4724             0.4961              0.4769         0.4721
I3      0.5263             0.5175              0.5245         0.5398
I4      0.5344             0.5209              0.5571         0.5294
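To make the decision step concrete, the short Python sketch below applies the maximum-similarity rule to the values reported in Table 1: each patient Ii is assigned the diagnosis Rj with the largest similarity value. This is an illustrative sketch only; the names diagnoses, s_c1 and diagnose are ours, the numbers are simply copied from Table 1 rather than recomputed from the SVNMS definitions, and the same rule can be applied to the Table 2 values.

```python
# Hypothetical illustration of the maximum-similarity diagnosis rule,
# using the S_C1 values copied from Table 1 of the paper.

diagnoses = ["Viral fever", "Tuberculosis", "Typhoid", "Throat disease"]

s_c1 = {
    "I1": [0.6927, 0.6616, 0.6934, 0.6694],
    "I2": [0.6417, 0.6632, 0.6458, 0.6414],
    "I3": [0.6896, 0.6820, 0.6881, 0.7011],
    "I4": [0.6966, 0.6850, 0.7156, 0.6923],
}

def diagnose(similarities):
    """Return the diagnosis whose similarity value is largest for a patient."""
    best_index = max(range(len(similarities)), key=lambda j: similarities[j])
    return diagnoses[best_index]

for patient, values in s_c1.items():
    print(patient, "->", diagnose(values))
# Output:
# I1 -> Typhoid
# I2 -> Tuberculosis
# I3 -> Throat disease
# I4 -> Typhoid
```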
