Article

(T, S)-Based Single-Valued Neutrosophic Number Equivalence Matrix and Clustering Method

School of Mathematics and Statistics, Minnan Normal University, Zhangzhou 363000, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 36; https://doi.org/10.3390/math7010036
Submission received: 10 November 2018 / Revised: 23 December 2018 / Accepted: 28 December 2018 / Published: 2 January 2019

Abstract:
Fuzzy clustering is widely used in business, biology, geography, coding for the internet and more. A single-valued neutrosophic set is a generalized fuzzy set, and its clustering algorithms have attracted more and more attention. An equivalence matrix is a common tool in clustering algorithms. At present, no results exist on constructing a single-valued neutrosophic number equivalence matrix using a t-norm and t-conorm. First, the concept of a (T, S)-based composition matrix is defined in this paper, where (T, S) is a dual pair of triangular modules. Then, a (T, S)-based single-valued neutrosophic number equivalence matrix is given. A λ-cutting matrix of a single-valued neutrosophic number matrix is also introduced. Moreover, their related properties are studied. Finally, an example and a comparison experiment are given to illustrate the effectiveness and superiority of the proposed clustering algorithm.

1. Introduction

In 1965, Zadeh [1] proposed the concept of a fuzzy set (FS) for dealing with uncertain information. Intuitionistic fuzzy sets (IFS) were introduced by Atanassov [2] in 1986. Different from a FS, an IFS considers membership, non-membership and hesitancy functions, which makes it more suitable for expressing fuzzy information, such as a voting system. To capture more information, Atanassov and Gargov [3] extended IFSs to interval-valued intuitionistic fuzzy sets (IVIFS). However, many situations in applications involve indeterminate and inconsistent information, which cannot be handled by FS, IFS or IVIFS. For example, for a phenomenon in which quantum particles are neither existent nor non-existent, a better description can be obtained by using the "neutrosophic" attribute instead of the "fuzzy" attribute. Therefore, the concept of a neutrosophic set (NS) was developed by Smarandache [4]. Wang et al. [5] proposed the single-valued neutrosophic set (SVNS), which can be applied more flexibly. In recent years, FS, IFS, IVIFS and SVNS have been widely applied in medical diagnosis [6,7] and decision making [8,9,10,11,12].
Clustering is the process of grouping similar objects into the same category [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. For FS, Seising [17] presented the interwoven historical developments of the two mathematical theories, which opened up into fuzzy pattern classification and fuzzy clustering. Chen et al. [18] summarized the FS equivalence matrix clustering algorithm. For IFS, Zhang et al. [19] proposed an intuitionistic fuzzy equivalence matrix clustering algorithm. Guan et al. [20] proposed the intuitionistic fuzzy agglomerative hierarchical clustering algorithm for recommendation using social tagging. Combining the metaheuristic with the kernel intuitionistic fuzzy c-means algorithm, Kuo et al. [21] introduced an evolutionary-based clustering algorithm. Xu et al. [22] developed an orthogonal clustering algorithm for intuitionistic fuzzy information. Zhao et al. [23] presented an intuitionistic fuzzy minimum spanning tree clustering algorithm. For IVIFS, Sahu et al. [24] developed a procedure for constructing a hierarchical clustering from IVIFS max-min similarity relations. Zhang [25] introduced a c-means clustering algorithm for IVIFS. For SVNS, Ye [26] developed a fuzzy equivalence matrix clustering algorithm for SVNS. Guo et al. [27] presented a novel image segmentation algorithm based on neutrosophic c-means clustering and the indeterminacy filtering method. Ali et al. [28] proposed a SVNS orthogonal algorithm for segmentation of dental X-ray images. Ye [29] introduced a minimum spanning tree clustering algorithm for SVNS.
The triangular module [30,31,32] is a useful integration tool and is widely used in information integration and clustering. Based on the triangular module, the IFS operation principle [33], the specific intuitionistic fuzzy aggregation operator [34], the generalized intuitionistic fuzzy interactive geometric interaction operator [35], the intuitionistic fuzzy Heronian mean operator [36], the single-valued neutrosophic number weighted averaging operator and the weighted geometric operator [37] have been given by many researchers. Huang [38] introduced the (T, S)-based IVIF composition matrix, the (T, S)-based IVIF equivalence matrix and its application for clustering, where (T, S) is a dual triangular module.
At present, no one has studied the construction of an equivalence matrix based on the dual triangular module in an SVNS environment. Therefore, the main work of this paper is to propose the operations of the (T, S)-based composition matrix and the (T, S)-based single-valued neutrosophic number equivalence matrix. Compared with the methods in the literature [18,19,28], our method has the following advantages: (1) The classification results are stable when λ is within a certain range. (2) The methods in [18,19] can only divide objects into three classes, and the method in [28] can only divide objects into four classes, while our clustering algorithm can divide objects into five classes. (3) Many existing composition matrices are fuzzy matrices, while our proposed composition matrix is a single-valued neutrosophic number matrix, which better preserves the primitiveness of the data.
The structure of the paper is as follows: In Section 2, some basic notions, operations and relations are reviewed. In Section 3, in the SVNS environment, the concepts of the (T, S)-based composition matrix and the (T, S)-based single-valued neutrosophic number equivalence matrix are defined, where (T, S) is a dual pair of triangular modules. Their properties are investigated. In Section 4, a clustering algorithm with the (T, S)-based single-valued neutrosophic number equivalence matrix is proposed. In Section 5, we give a numerical example to illustrate the effectiveness of the proposed method. Compared with other existing methods, our method has better classification ability and more reasonable classification results. In Section 6, we conclude this paper.

2. Preliminaries

In this section, we briefly introduce some of the basic definitions to be used in this paper. To facilitate reading, some concepts are abbreviated in this paper.
  • FS: fuzzy set
  • FEM: fuzzy equivalence matrix
  • IFS: intuitionistic fuzzy set
  • IVIFS: interval-valued intuitionistic fuzzy set
  • IFEM: intuitionistic fuzzy equivalence matrix
  • NS: neutrosophic set
  • SVNS: single-valued neutrosophic set
  • SVNN: single-valued neutrosophic number
  • SVNNM: single-valued neutrosophic number matrix
  • SVNNSM: single-valued neutrosophic number similarity matrix
  • ( T , S ) -SVNNEM: ( T , S ) -based single-valued neutrosophic number equivalence matrix
The NS was proposed in 1995 by Smarandache.
Definition 1
([4]). Let X be a universe of discourse, with a generic element in X denoted by x. A NS A in X is
A = { ⟨x, (t_A(x), i_A(x), f_A(x))⟩ | x ∈ X },
where t_A(x) denotes the truth-membership function, i_A(x) denotes the indeterminacy-membership function, and f_A(x) denotes the falsity-membership function. t_A(x), i_A(x) and f_A(x) are real standard or nonstandard subsets of ]⁻0, 1⁺[. There is no limitation on the sum of t_A(x), i_A(x) and f_A(x), so ⁻0 ≤ t_A(x) + i_A(x) + f_A(x) ≤ 3⁺.
To apply the NS, Wang et al. developed the concept of SVNS. Some relationships of SVNSs were discussed.
Definition 2
([5]). Let X be a universe of discourse, with a generic element in X denoted by x. A SVNS A in X is depicted by the following:
A = { ⟨x, (t_A(x), i_A(x), f_A(x))⟩ | x ∈ X },
where t_A(x) denotes the truth-membership function, i_A(x) denotes the indeterminacy-membership function, and f_A(x) denotes the falsity-membership function. For each point x in X, we have t_A(x), i_A(x), f_A(x) ∈ [0, 1] and 0 ≤ t_A(x) + i_A(x) + f_A(x) ≤ 3. For convenience, we can use α = (t, i, f) to represent a SVNN.
Definition 3
([5]). Let A = { ⟨x, t_A(x), i_A(x), f_A(x)⟩ | x ∈ X } and B = { ⟨x, t_B(x), i_B(x), f_B(x)⟩ | x ∈ X } be two SVNSs. The following relations hold:
(1) 
Inclusion: A ⊆ B if and only if t_A(x) ≤ t_B(x), i_A(x) ≥ i_B(x) and f_A(x) ≥ f_B(x);
(2) 
Complement: A^c = { ⟨x, f_A(x), 1 − i_A(x), t_A(x)⟩ | x ∈ X };
(3) 
Union: A ∪ B = { ⟨x, t_A(x) ∨ t_B(x), i_A(x) ∧ i_B(x), f_A(x) ∧ f_B(x)⟩ | x ∈ X };
(4) 
Intersection: A ∩ B = { ⟨x, t_A(x) ∧ t_B(x), i_A(x) ∨ i_B(x), f_A(x) ∨ f_B(x)⟩ | x ∈ X }.
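As a small illustration (our own sketch, not code from the paper), the complement, union and intersection of Definition 3, applied to single SVNN triples with the min/max connectives, can be written as:

```python
# Sketch of Definition 3 applied to single SVNN triples (t, i, f).
# Function names are ours, not the paper's.

def svnn_complement(a):
    t, i, f = a
    return (f, 1 - i, t)

def svnn_union(a, b):
    # truth takes the max; indeterminacy and falsity take the min
    return (max(a[0], b[0]), min(a[1], b[1]), min(a[2], b[2]))

def svnn_intersection(a, b):
    # truth takes the min; indeterminacy and falsity take the max
    return (min(a[0], b[0]), max(a[1], b[1]), max(a[2], b[2]))

a, b = (0.7, 0.4, 0.1), (0.5, 0.3, 0.3)
print(svnn_union(a, b))         # (0.7, 0.3, 0.1)
print(svnn_intersection(a, b))  # (0.5, 0.4, 0.3)
print(svnn_complement(a))       # (0.1, 0.6, 0.7)
```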
In order to develop SVNNSM, the neutrosophic number similarity relation and similarity matrix are reviewed.
Definition 4
([28]). Let A and B be two NSs. The following
R = { ⟨(a, b), t_R(a, b), i_R(a, b), f_R(a, b)⟩ | a ∈ A, b ∈ B }
is called a neutrosophic relation, where t_R : A × B → [0, 1], i_R : A × B → [0, 1], f_R : A × B → [0, 1] and 0 ≤ t_R(a, b) + i_R(a, b) + f_R(a, b) ≤ 3 for any (a, b) ∈ A × B.
Definition 5
([28]). Let A and B be two NSs and R be a neutrosophic relation. If
(a) 
(Reflexivity) t_R(a, a) = 1, i_R(a, a) = 0, f_R(a, a) = 0, for any a ∈ A;
(b) 
(Symmetry) t_R(a, b) = t_R(b, a), i_R(a, b) = i_R(b, a), f_R(a, b) = f_R(b, a), for any (a, b) ∈ A × B.
Then R is called a neutrosophic similarity relation.
Definition 6
([28]). Let A and B be two NSs on a universe U and let R be a neutrosophic similarity relation. A matrix M = (r_pq)_{n×n} is called a neutrosophic similarity matrix if r_pq = R(y_p, y_q), where p, q = 1, 2, …, n and t_pq, i_pq, f_pq are the truth, indeterminacy and falsity membership values of the element (y_p, y_q) ∈ A × B, respectively. Equivalently, R(y_p, y_q) = (t_pq, i_pq, f_pq), which implies r_pq = (t_pq, i_pq, f_pq).
The above introduction can be extended to the following conclusion.
If p_ij = (t_ij, i_ij, f_ij) (i, j = 1, 2, …, n) are SVNNs, then P = (p_ij)_{n×n} is called a SVNNM. Further, if P satisfies the following conditions:
(1)
Reflexivity: p i i = ( 1 , 0 , 0 ) ( i = 1 , 2 , , n ) ;
(2)
Symmetry: p i j = p j i , i.e., ( t i j , i i j , f i j ) = ( t j i , i j i , f j i ) ( i , j = 1 , 2 , , n ) .
Then P is a SVNNSM.
Klir and Yuan proposed the t-norm and t-conorm in 1995.
Definition 7
([30]). A function T : [ 0 , 1 ] × [ 0 , 1 ] [ 0 , 1 ] is called a triangular norm, if it satisfies the following conditions:
(1) 
T ( 0 , 0 ) = 0 , T ( 1 , 1 ) = 1 ;
(2) 
T ( x , y ) = T ( y , x ) , for any x and y;
(3) 
T ( x , T ( y , z ) ) = T ( T ( x , y ) , z ) , for any x , y and z;
(4) 
if x_1 ≤ x_2 and y_1 ≤ y_2, then T(x_1, y_1) ≤ T(x_2, y_2).
Furthermore, for any a ∈ [0, 1], T is a t-norm if T(a, 1) = a, and T is a t-conorm if T(0, a) = a. In this paper, we denote by T and S a t-norm and a t-conorm, respectively.
Lemma 1
([31]). For any x, y ∈ [0, 1], we have
(1) 
0 ≤ T(x, y) ≤ x ∧ y ≤ 1, that is, T ≤ ∧, and T(0, y) = 0;
(2) 
0 ≤ x ∨ y ≤ S(x, y) ≤ 1, that is, S ≥ ∨, and S(1, y) = 1.
Definition 8
([31]). Let x ∈ [0, 1]; x̄ = 1 − x is the complement of x. If T and S satisfy T(x, y) = 1 − S(1 − x, 1 − y) for any x, y ∈ [0, 1], then T and S are called a dual pair of triangular modules.
In the following, we call a dual pair of triangular modules a dual triangular module for short, denoted by (T, S).
Further, based on a dual triangular module, Smarandache proposed the concepts of generalized union and intersection.
Definition 9
([32]). Let α i = ( t i , i i , f i ) ( i = 1 , 2 ) be any two SVNNs, then the generalized union and intersection are defined as follows:
(1) 
α_1 ∪ α_2 = (S(t_1, t_2), T(i_1, i_2), T(f_1, f_2));
(2) 
α_1 ∩ α_2 = (T(t_1, t_2), S(i_1, i_2), S(f_1, f_2)).
Several commonly used dual triangular modules are listed below:
(1)
Type I (min and max t-norm and t-conorm): T(a, b) = min(a, b), S(a, b) = max(a, b);
(2)
Type II (algebraic t-norm and t-conorm): T(a, b) = ab, S(a, b) = a + b − ab;
(3)
Type III (Einstein t-norm and t-conorm): T(a, b) = ab / (1 + (1 − a)(1 − b)), S(a, b) = (a + b) / (1 + ab);
(4)
Type IV (Hamacher t-norm and t-conorm): T(a, b) = ab / (γ + (1 − γ)(a + b − ab)), S(a, b) = (a + b − ab − (1 − γ)ab) / (1 − (1 − γ)ab), with γ ∈ (0, +∞).
Based on Definition 9 and the dual triangular modules of types I–IV, the corresponding generalized unions and intersections are expressed as α_1 ∪_w α_2 and α_1 ∩_w α_2 (w = I, II, III, IV).
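The four pairs can be checked numerically against the duality condition of Definition 8, T(a, b) = 1 − S(1 − a, 1 − b). The sketch below is our own illustration; the Hamacher parameter γ = 4 matches the value used in the examples later in the paper:

```python
# The four dual triangular modules (types I-IV) and a spot-check of duality.

def t_min(a, b): return min(a, b)                         # type I
def s_max(a, b): return max(a, b)

def t_alg(a, b): return a * b                             # type II (algebraic)
def s_alg(a, b): return a + b - a * b

def t_ein(a, b): return a * b / (1 + (1 - a) * (1 - b))   # type III (Einstein)
def s_ein(a, b): return (a + b) / (1 + a * b)

def t_ham(a, b, g=4):                                     # type IV (Hamacher)
    return a * b / (g + (1 - g) * (a + b - a * b))
def s_ham(a, b, g=4):
    return (a + b - a * b - (1 - g) * a * b) / (1 - (1 - g) * a * b)

for T, S in [(t_min, s_max), (t_alg, s_alg), (t_ein, s_ein), (t_ham, s_ham)]:
    a, b = 0.7, 0.4
    # duality of Definition 8: T(a, b) = 1 - S(1 - a, 1 - b)
    assert abs(T(a, b) - (1 - S(1 - a, 1 - b))) < 1e-9
```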

3. Main Results

In this section, we first study some properties of generalized unions and intersections. Moreover, we introduce the concept of a ( T , S ) -based SVNN composition matrix and investigate its properties. Finally, we develop the λ -cutting matrix and the ( T , S ) -SVNNEM. Their relationship is also studied.

3.1. Some Properties of Generalized Unions and Intersections

The properties of generalized unions and intersections have not been studied in the literature [32], so we first investigate some of them.
Some operational properties of generalized unions and intersections are proposed in Theorem 1.
Theorem 1.
Let α i = ( t i , i i , f i ) ( i = 1 , 2 , 3 ) be any three SVNNs, then
(1) 
α_1 ∪ α_2 = α_2 ∪ α_1;
(2) 
α_1 ∩ α_2 = α_2 ∩ α_1;
(3) 
(α_1 ∪ α_2) ∪ α_3 = α_1 ∪ (α_2 ∪ α_3);
(4) 
(α_1 ∩ α_2) ∩ α_3 = α_1 ∩ (α_2 ∩ α_3);
(5) 
(α_1 ∪ α_2)^c = α_1^c ∩ α_2^c;
(6) 
(α_1 ∩ α_2)^c = α_1^c ∪ α_2^c.
Proof. 
(1) and (2) are obvious. We only prove (3) and (5). The proofs of (4) and (6) are analogous.
(3) (α_1 ∪ α_2) ∪ α_3 = (S(t_1, t_2), T(i_1, i_2), T(f_1, f_2)) ∪ (t_3, i_3, f_3) = (S(S(t_1, t_2), t_3), T(T(i_1, i_2), i_3), T(T(f_1, f_2), f_3)) = (S(t_1, S(t_2, t_3)), T(i_1, T(i_2, i_3)), T(f_1, T(f_2, f_3))) = α_1 ∪ (α_2 ∪ α_3).
(5) (α_1 ∪ α_2)^c = (S(t_1, t_2), T(i_1, i_2), T(f_1, f_2))^c = (T(f_1, f_2), 1 − T(i_1, i_2), S(t_1, t_2)). On the other hand, α_1^c ∩ α_2^c = (f_1, 1 − i_1, t_1) ∩ (f_2, 1 − i_2, t_2) = (T(f_1, f_2), S(1 − i_1, 1 − i_2), S(t_1, t_2)) = (T(f_1, f_2), 1 − T(i_1, i_2), S(t_1, t_2)) = (α_1 ∪ α_2)^c.
Therefore, we complete the proof. □
Based on (5) and (6) in Theorem 1, we have Corollary 1.
Corollary 1.
If { α_τ : τ ∈ Λ } (Λ = {1, 2, …, n}) is a set of SVNNs, then
(1) 
(∪_{τ=1}^n α_τ)^c = ∩_{τ=1}^n α_τ^c;
(2) 
(∩_{τ=1}^n α_τ)^c = ∪_{τ=1}^n α_τ^c.
Theorem 2.
Let α_1, α_2, α_3 be any three SVNNs. For the type I dual triangular module, that is, T(a, b) = min(a, b) and S(a, b) = max(a, b), the following equations hold.
(1) 
(α_1 ∪_I α_2) ∩_I α_3 = (α_1 ∩_I α_3) ∪_I (α_2 ∩_I α_3);
(2) 
(α_1 ∩_I α_2) ∪_I α_3 = (α_1 ∪_I α_3) ∩_I (α_2 ∪_I α_3).
Proof. 
According to Definition 9, when T(a, b) = min(a, b) and S(a, b) = max(a, b), we obtain
(1) (α_1 ∪_I α_2) ∩_I α_3 = (min{max{t_1, t_2}, t_3}, max{min{i_1, i_2}, i_3}, max{min{f_1, f_2}, f_3}) = (max{min{t_1, t_3}, min{t_2, t_3}}, min{max{i_1, i_3}, max{i_2, i_3}}, min{max{f_1, f_3}, max{f_2, f_3}}) = (α_1 ∩_I α_3) ∪_I (α_2 ∩_I α_3),
and
(2) (α_1 ∩_I α_2) ∪_I α_3 = (max{min{t_1, t_2}, t_3}, min{max{i_1, i_2}, i_3}, min{max{f_1, f_2}, f_3}) = (min{max{t_1, t_3}, max{t_2, t_3}}, max{min{i_1, i_3}, min{i_2, i_3}}, max{min{f_1, f_3}, min{f_2, f_3}}) = (α_1 ∪_I α_3) ∩_I (α_2 ∪_I α_3).
Therefore, we complete the proof. □
For the type II–IV dual triangular modules, (α_1 ∪ α_2) ∩ α_3 = (α_1 ∩ α_3) ∪ (α_2 ∩ α_3) and (α_1 ∩ α_2) ∪ α_3 = (α_1 ∪ α_3) ∩ (α_2 ∪ α_3) do not hold. See Example 1.
Example 1.
Let α 1 = ( 0.7 , 0.4 , 0.1 ) , α 2 = ( 0.5 , 0.3 , 0.3 ) , α 3 = ( 0.4 , 0.8 , 0.2 ) be three SVNNs.
If T(a, b) = ab and S(a, b) = a + b − ab, then
(α_1 ∪_II α_2) ∩_II α_3 = (0.34, 0.824, 0.224) ≠ (0.424, 0.7568, 0.1232) = (α_1 ∩_II α_3) ∪_II (α_2 ∩_II α_3);
(α_1 ∩_II α_2) ∪_II α_3 = (0.61, 0.464, 0.074) ≠ (0.574, 0.4832, 0.0788) = (α_1 ∪_II α_3) ∩_II (α_2 ∪_II α_3).
If T(a, b) = ab / (1 + (1 − a)(1 − b)) and S(a, b) = (a + b) / (1 + ab), then
(α_1 ∪_III α_2) ∩_III α_3 = (0.3333, 0.4687, 0.2351) ≠ (0.3773, 0.7983, 0.101) = (α_1 ∩_III α_3) ∪_III (α_2 ∩_III α_3);
(α_1 ∩_III α_2) ∪_III α_3 = (0.6279, 0.4651, 0.0521) ≠ (0.6227, 0.4681, 0.0501) = (α_1 ∪_III α_3) ∩_III (α_2 ∪_III α_3).
If T(a, b) = ab / (γ + (1 − γ)(a + b − ab)), S(a, b) = (a + b − ab − (1 − γ)ab) / (1 − (1 − γ)ab) and γ = 4, then
(α_1 ∪_IV α_2) ∩_IV α_3 = (0.3276, 0.832, 0.2131) ≠ (0.3077, 0.8497, 0.0857) = (α_1 ∩_IV α_3) ∪_IV (α_2 ∩_IV α_3);
(α_1 ∩_IV α_2) ∪_IV α_3 = (0.6471, 0.4665, 0.0354) ≠ (0.6948, 0.4323, 0.029) = (α_1 ∪_IV α_3) ∩_IV (α_2 ∪_IV α_3).

3.2. A ( T , S ) -Based Single-Valued Neutrosophic Number Composition Matrix and Its Properties

We introduce the new concept of a ( T , S ) -based composition matrix and its related properties below.
Definition 10.
Let P l = ( p i j l ) n × n ( l = 1 , 2 ) be two SVNNMs, where p i j l = ( t i j l , i i j l , f i j l ) ( i , j = 1 , 2 , , n ; l = 1 , 2 ) . Then P = P 1 × P 2 = ( p i j ) n × n is called a ( T , S ) -based composition matrix of P 1 and P 2 , where
p_ij = ∨_{k=1}^n (p_ik^1 ∩ p_kj^2) = ( ∨_{k=1}^n T(t_ik^1, t_kj^2), ∧_{k=1}^n S(i_ik^1, i_kj^2), ∧_{k=1}^n S(f_ik^1, f_kj^2) ).
Based on Definition 10 and the dual triangular modules of types I–IV, the corresponding (T, S)-based composition matrices are expressed as P = P_1 ×_w P_2 (w = I, II, III, IV).
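Definition 10 can be sketched in code. We assume here, consistently with the worked examples later in the paper, that the truth parts T(t_ik, t_kj) are aggregated over k by max and the indeterminacy/falsity parts S(·, ·) by min; the algebraic pair (type II) is used as one concrete choice:

```python
# (T, S)-based composition of two SVNN matrices (entries are (t, i, f) tuples).

def t_alg(a, b): return a * b
def s_alg(a, b): return a + b - a * b

def compose(P1, P2, T=t_alg, S=s_alg):
    n = len(P1)
    return [[(max(T(P1[i][k][0], P2[k][j][0]) for k in range(n)),
              min(S(P1[i][k][1], P2[k][j][1]) for k in range(n)),
              min(S(P1[i][k][2], P2[k][j][2]) for k in range(n)))
             for j in range(n)] for i in range(n)]

# The matrices of Example 2 below:
P1 = [[(1, 0, 0), (0.3, 0.4, 0.2), (0.5, 0.4, 0.7)],
      [(0.3, 0.4, 0.2), (1, 0, 0), (0.8, 0.2, 0.1)],
      [(0.5, 0.4, 0.7), (0.8, 0.2, 0.1), (1, 0, 0)]]
P2 = [[(1, 0, 0), (0.1, 0.2, 0.2), (0.4, 0.5, 0.2)],
      [(0.1, 0.2, 0.2), (1, 0, 0), (0.3, 0.3, 0.1)],
      [(0.4, 0.5, 0.2), (0.3, 0.3, 0.1), (1, 0, 0)]]

P = compose(P1, P2)
print(P[0][1])  # (0.3, 0.2, 0.2)
```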
Based on Definition 10, some properties of the (T, S)-based composition matrix are studied.
Theorem 3.
Let P l = ( p i j l ) n × n ( l = 1 , 2 ) be two SVNNMs, where p i j l = ( t i j l , i i j l , f i j l ) ( i , j = 1 , 2 , , n ; l = 1 , 2 ) . Then the ( T , S ) -based composition matrix of P 1 and P 2 is still a SVNNM.
Proof. 
By
0 ≤ T(t_ik, t_kj) ≤ 1,  0 ≤ S(i_ik, i_kj) ≤ 1,  0 ≤ S(f_ik, f_kj) ≤ 1  (k = 1, 2, …, n),
we have
0 ≤ ∨_{k=1}^n T(t_ik, t_kj) ≤ 1,  0 ≤ ∧_{k=1}^n S(i_ik, i_kj) ≤ 1,  0 ≤ ∧_{k=1}^n S(f_ik, f_kj) ≤ 1.
That means
0 ≤ ∨_{k=1}^n T(t_ik, t_kj) + ∧_{k=1}^n S(i_ik, i_kj) + ∧_{k=1}^n S(f_ik, f_kj) ≤ 3,
which completes the proof. □
According to Theorem 3, we get Corollary 2.
Corollary 2.
Let P l ( l = 1 , 2 ) be any two SVNNSMs. Then the ( T , S ) -based composition matrix of P 1 and P 2 is also a SVNNM.
However, the ( T , S ) -based composition matrix of two SVNNSMs may not be a SVNNSM. See Example 2.
Example 2.
Let
P_1 =
| (1, 0, 0)        (0.3, 0.4, 0.2)  (0.5, 0.4, 0.7) |
| (0.3, 0.4, 0.2)  (1, 0, 0)        (0.8, 0.2, 0.1) |
| (0.5, 0.4, 0.7)  (0.8, 0.2, 0.1)  (1, 0, 0) |,
P_2 =
| (1, 0, 0)        (0.1, 0.2, 0.2)  (0.4, 0.5, 0.2) |
| (0.1, 0.2, 0.2)  (1, 0, 0)        (0.3, 0.3, 0.1) |
| (0.4, 0.5, 0.2)  (0.3, 0.3, 0.1)  (1, 0, 0) |.
Then both P 1 and P 2 are SVNNSMs. When we choose type I and II dual triangular modules, their ( T , S ) -based composition matrix is
P = P_1 ×_I P_2 = P_1 ×_II P_2 =
| (1, 0, 0)         (0.3, 0.2, 0.2)  (0.5, 0.4, 0.2) |
| (0.3, 0.2, 0.2)   (1, 0, 0)        (0.8, 0.2, 0.1) |
| (0.5, 0.36, 0.2)  (0.8, 0.2, 0.1)  (1, 0, 0) |.
Here p_13 ≠ p_31, which means P is not a SVNNSM.
Fortunately, if P is a SVNNSM, then the (T, S)-based composition matrix of P with itself is still a SVNNSM.
Theorem 4.
Let P = ( p i j ) n × n ( i , j = 1 , 2 , , n ) be a SVNNSM. Then the ( T , S ) -based composition matrix P 2 = P × P is also a SVNNSM.
Proof. 
Let P 2 = ( q i j ) n × n and p i j = ( t i j , i i j , f i j ) ( i , j = 1 , 2 , , n ) .
(1)
By Corollary 2, we know that P 2 is a SVNNM.
(2)
By Definition 10, for i = 1 , 2 , , n , we have
q_ii = ( ∨_{k=1}^n T(t_ik, t_ki), ∧_{k=1}^n S(i_ik, i_ki), ∧_{k=1}^n S(f_ik, f_ki) ).
When k = i , we have p i i = ( t i i , i i i , f i i ) = ( 1 , 0 , 0 ) . Since T ( 1 , 1 ) = 1 and S ( 0 , 0 ) = 0 , we can get q i i = ( 1 , 0 , 0 ) ( i = 1 , 2 , , n ) .
(3)
Since P is a SVNNSM, p i k = p k i , i.e., ( t i k , i i k , f i k ) = ( t k i , i k i , f k i ) ( i , k = 1 , 2 , , n ) . Then we have
q_ij = ( ∨_{k=1}^n T(t_ik, t_kj), ∧_{k=1}^n S(i_ik, i_kj), ∧_{k=1}^n S(f_ik, f_kj) ) = ( ∨_{k=1}^n T(t_kj, t_ik), ∧_{k=1}^n S(i_kj, i_ik), ∧_{k=1}^n S(f_kj, f_ik) ) = ( ∨_{k=1}^n T(t_jk, t_ki), ∧_{k=1}^n S(i_jk, i_ki), ∧_{k=1}^n S(f_jk, f_ki) ) = q_ji.
Thus, P 2 is a SVNNSM. □
From Definition 10 and Theorem 3, we can get Property 1.
Property 1.
Let P l = ( p i j l ) n × n ( l = 1 , 2 ) be any two SVNNMs. Then P 1 × P 2 = P 2 × P 1 .
The relationship of SVNNMs is defined in the following:
Definition 11.
Let P_l = (p_ij^l)_{n×n} (l = 1, 2) be any two SVNNMs, where p_ij^l = (t_ij^l, i_ij^l, f_ij^l) (i, j = 1, 2, …, n; l = 1, 2). If p_ij^1 ≤ p_ij^2, i.e., t_ij^1 ≤ t_ij^2, i_ij^1 ≥ i_ij^2 and f_ij^1 ≥ f_ij^2, then P_1 ≤ P_2.
According to Definition 11, Theorem 3 and Lemma 1, we have the following property:
Property 2.
Let P_l (l = 1, 2, 3) be any three SVNNMs with P_1 ≤ P_2. Then P_1 × P_3 ≤ P_2 × P_3 ≤ P_2 ×_I P_3.

3.3. ( T , S ) -SVNNEM and λ -Cutting Matrix

In order to cluster, we develop the (T, S)-SVNNEM and the λ-cutting matrix, and investigate their related properties.
Definition 12.
Let P = ( p i j ) n × n be a SVNNM, where p i j = ( t i j , i i j , f i j ) ( i , j = 1 , 2 , , n ) . If P satisfies the following conditions:
(1) 
Reflexivity: p i i = ( 1 , 0 , 0 ) ( i = 1 , 2 , , n ) ;
(2) 
Symmetry: p i j = p j i , i.e. ( t i j , i i j , f i j ) = ( t j i , i j i , f j i ) ( i , j = 1 , 2 , , n ) ;
(3) 
(T, S)-Transitivity: P² = P × P ≤ P, i.e., ∨_{k=1}^n T(t_ik, t_kj) ≤ t_ij, ∧_{k=1}^n S(i_ik, i_kj) ≥ i_ij and ∧_{k=1}^n S(f_ik, f_kj) ≥ f_ij.
Then P is called a ( T , S ) -SVNNEM.
In general, a SVNNSM may not be a (T, S)-SVNNEM.
Example 3.
Let
P =
| (1, 0, 0)        (0.2, 0.4, 0.6)  (0.9, 0.5, 0.7) |
| (0.2, 0.4, 0.6)  (1, 0, 0)        (0.4, 0.2, 0.5) |
| (0.9, 0.5, 0.7)  (0.4, 0.2, 0.5)  (1, 0, 0) |,
then P satisfies reflexivity and symmetry. For dual triangular modules of type I-IV, we obtain
P ×_I P =
| (1, 0, 0)        (0.4, 0.4, 0.6)  (0.9, 0.4, 0.6) |
| (0.4, 0.4, 0.6)  (1, 0, 0)        (0.4, 0.2, 0.5) |
| (0.9, 0.4, 0.6)  (0.4, 0.2, 0.5)  (1, 0, 0) |
≰ P,
P ×_II P =
| (1, 0, 0)         (0.36, 0.4, 0.6)  (0.9, 0.5, 0.7) |
| (0.36, 0.4, 0.6)  (1, 0, 0)         (0.4, 0.2, 0.5) |
| (0.9, 0.5, 0.7)   (0.4, 0.2, 0.5)   (1, 0, 0) |
≰ P,
P ×_III P =
| (1, 0, 0)         (0.34, 0.4, 0.6)  (0.9, 0.5, 0.7) |
| (0.34, 0.4, 0.6)  (1, 0, 0)         (0.4, 0.2, 0.5) |
| (0.9, 0.5, 0.7)   (0.4, 0.2, 0.5)   (1, 0, 0) |
≰ P,
and
P ×_IV P =
| (1, 0, 0)         (0.31, 0.4, 0.6)  (0.9, 0.5, 0.7) |
| (0.31, 0.4, 0.6)  (1, 0, 0)         (0.4, 0.2, 0.5) |
| (0.9, 0.5, 0.7)   (0.4, 0.2, 0.5)   (1, 0, 0) |
≰ P,
where γ = 4. This shows that P is not a (T, S)-SVNNEM for the dual triangular modules of types I–IV.
In order to obtain a clustering algorithm, we give the λ-cutting matrix.
Definition 13.
Let P = (p_ij)_{n×n} with p_ij = (t_ij, i_ij, f_ij) (i, j = 1, 2, …, n) be a SVNNM and let λ be a SVNN. We call P_λ = ((p_ij)_λ)_{n×n} the λ-cutting matrix of P, where
(p_ij)_λ = ((t_ij)_λ, (i_ij)_λ, (f_ij)_λ) = (1, 0, 0) if λ ≤ p_ij, and (0, 1, 1) if λ ≰ p_ij.
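A sketch of Definition 13 (our code). The comparison λ ≤ p follows the SVNN order of Definition 11 (t_λ ≤ t_p, i_λ ≥ i_p and f_λ ≥ f_p); the demo applies it to the SVNNSM used later in Example 4:

```python
# λ-cutting matrix of an SVNN matrix (entries are (t, i, f) tuples).

def svnn_le(a, b):
    # a <= b in the SVNN order: t nondecreasing, i and f nonincreasing
    return a[0] <= b[0] and a[1] >= b[1] and a[2] >= b[2]

def cutting_matrix(P, lam):
    return [[(1, 0, 0) if svnn_le(lam, p) else (0, 1, 1) for p in row]
            for row in P]

P = [[(1, 0, 0), (0.3, 0.2, 0.1), (0.6, 0.5, 0.2)],
     [(0.3, 0.2, 0.1), (1, 0, 0), (0.7, 0.4, 0.2)],
     [(0.6, 0.5, 0.2), (0.7, 0.4, 0.2), (1, 0, 0)]]
print(cutting_matrix(P, (0.6, 0.5, 0.2)))
```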
In the following, we study the relationship between a SVNNSM and its λ-cutting matrix.
Theorem 5.
P = ( p i j ) n × n ( i , j = 1 , 2 , , n ) is a SVNNSM if and only if P λ is a SVNNSM for any SVNN λ.
Proof. 
Let P be a SVNNSM. Then p i i = ( 1 , 0 , 0 ) , p i j = p j i .
(1)
Reflexivity: Since p_ii = (1, 0, 0) (i = 1, 2, …, n), we have λ ≤ p_ii for each SVNN λ. By Definition 13, we get (p_ii)_λ = (1, 0, 0).
(2)
Symmetry: By p i j = p j i , we can easily get that ( p i j ) λ = ( p j i ) λ ( i , j = 1 , 2 , , n ) for each SVNN λ .
That is, P λ is a SVNNSM.
Let P_λ be a SVNNSM. Then (p_ii)_λ = (1, 0, 0) and (p_ij)_λ = (p_ji)_λ (i, j = 1, 2, …, n).
(1)
Reflexivity: Since (p_ii)_λ = (1, 0, 0) (i = 1, 2, …, n), we know that λ ≤ p_ii = (t_ii, i_ii, f_ii) for each SVNN λ = (t, i, f), that is, t ≤ t_ii, i ≥ i_ii and f ≥ f_ii. Let λ = (1, 0, 0); then p_ii = (1, 0, 0) (i = 1, 2, …, n).
(2)
Symmetry: If p_ij ≠ p_ji, then t_ij ≠ t_ji or i_ij ≠ i_ji or f_ij ≠ f_ji. Suppose t_ij < t_ji and take λ = ((t_ij + t_ji)/2, i_ji, f_ji). Then t_ij < (t_ij + t_ji)/2 < t_ji, so (p_ij)_λ = (0, 1, 1) and (p_ji)_λ = (1, 0, 0). Hence (p_ij)_λ ≠ (p_ji)_λ, which is a contradiction. Therefore, t_ij = t_ji (i, j = 1, 2, …, n).
Analogously, we can prove that i_ij = i_ji and f_ij = f_ji also hold. It implies that p_ij = p_ji (i, j = 1, 2, …, n).
That is, P is a SVNNSM. □
When the condition and conclusion of Theorem 5 are strengthened, Theorem 6 holds.
Theorem 6.
If P λ is a ( T , S ) -SVNNEM for any SVNN λ, then P is a ( T , S ) -SVNNEM.
Proof. 
Reflexivity and symmetry are proved in Theorem 5. Following, we prove P satisfies ( T , S ) -Transitivity.
( T , S ) -Transitivity: Since P λ = ( ( p i j ) λ ) n × n = ( ( t i j ) λ , ( i i j ) λ , ( f i j ) λ ) ( i , j = 1 , 2 , , n ) satisfies ( T , S ) -Transitivity, we have
∨_{k=1}^n T((t_ik)_λ, (t_kj)_λ) ≤ (t_ij)_λ,  ∧_{k=1}^n S((i_ik)_λ, (i_kj)_λ) ≥ (i_ij)_λ,  ∧_{k=1}^n S((f_ik)_λ, (f_kj)_λ) ≥ (f_ij)_λ.
Now we will prove that
∨_{k=1}^n T(t_ik, t_kj) ≤ t_ij,  ∧_{k=1}^n S(i_ik, i_kj) ≥ i_ij,  ∧_{k=1}^n S(f_ik, f_kj) ≥ f_ij.
Assume there exist i_0 and j_0 such that ∨_{k=1}^n (t_{i_0 k} ∧ t_{k j_0}) > t_{i_0 j_0}. Then there exists l such that t_{i_0 l} ∧ t_{l j_0} > t_{i_0 j_0}. Suppose t_{i_0 l} ≥ t_{l j_0} and take λ = (t_{l j_0}, i_{i_0 l} ∨ i_{l j_0}, f_{i_0 l} ∨ f_{l j_0}); then
(p_{i_0 l})_λ = (p_{l j_0})_λ = (1, 0, 0)   and   (p_{i_0 j_0})_λ = (0, 1, 1)
hold. These imply that
∨_{k=1}^n T((t_{i_0 k})_λ, (t_{k j_0})_λ) ≥ T((t_{i_0 l})_λ, (t_{l j_0})_λ) = T(1, 1) = 1 > 0 = (t_{i_0 j_0})_λ,
which contradicts
∨_{k=1}^n T((t_ik)_λ, (t_kj)_λ) ≤ (t_ij)_λ.
So we have
∨_{k=1}^n T(t_ik, t_kj) ≤ ∨_{k=1}^n (t_ik ∧ t_kj) ≤ t_ij.
Assume there exist i_1 and j_1 such that ∧_{k=1}^n (i_{i_1 k} ∨ i_{k j_1}) < i_{i_1 j_1}. Then there exists m such that i_{i_1 m} ∨ i_{m j_1} < i_{i_1 j_1}. Suppose i_{i_1 m} ≤ i_{m j_1} and take λ = (t_{i_1 m} ∧ t_{m j_1}, i_{m j_1}, f_{i_1 m} ∨ f_{m j_1}); then
(p_{i_1 m})_λ = (p_{m j_1})_λ = (1, 0, 0)   and   (p_{i_1 j_1})_λ = (0, 1, 1)
hold. These imply that
∧_{k=1}^n S((i_{i_1 k})_λ, (i_{k j_1})_λ) ≤ S((i_{i_1 m})_λ, (i_{m j_1})_λ) = S(0, 0) = 0 < 1 = (i_{i_1 j_1})_λ,
which contradicts
∧_{k=1}^n S((i_ik)_λ, (i_kj)_λ) ≥ (i_ij)_λ.
So we have
∧_{k=1}^n S(i_ik, i_kj) ≥ ∧_{k=1}^n (i_ik ∨ i_kj) ≥ i_ij.
Analogously, we can prove that ∧_{k=1}^n S(f_ik, f_kj) ≥ f_ij.
Thus, we can complete the proof. □
Conversely, if P is a (T, S)-SVNNEM, then P_λ may not be a (T, S)-SVNNEM. See Example 4, where we choose the dual triangular module of type II for explanation.
Example 4.
Let
P =
| (1, 0, 0)        (0.3, 0.2, 0.1)  (0.6, 0.5, 0.2) |
| (0.3, 0.2, 0.1)  (1, 0, 0)        (0.7, 0.4, 0.2) |
| (0.6, 0.5, 0.2)  (0.7, 0.4, 0.2)  (1, 0, 0) |
be a SVNNSM.
P ×_II P =
| (1, 0, 0)        (0.3, 0.2, 0.1)  (0.6, 0.5, 0.2) |
| (0.3, 0.2, 0.1)  (1, 0, 0)        (0.7, 0.4, 0.2) |
| (0.6, 0.5, 0.2)  (0.7, 0.4, 0.2)  (1, 0, 0) |
= P.
P satisfies (T, S)-Transitivity, where we choose T(a, b) = ab and S(a, b) = a + b − ab. Therefore, P is a (T, S)-SVNNEM. When λ = (0.6, 0.5, 0.2), we have
P_λ =
| (1, 0, 0)  (0, 1, 1)  (1, 0, 0) |
| (0, 1, 1)  (1, 0, 0)  (1, 0, 0) |
| (1, 0, 0)  (1, 0, 0)  (1, 0, 0) |.
Then
P_λ² = P_λ ×_II P_λ =
| (1, 0, 0)  (1, 0, 0)  (1, 0, 0) |
| (1, 0, 0)  (1, 0, 0)  (1, 0, 0) |
| (1, 0, 0)  (1, 0, 0)  (1, 0, 0) |.
Here (p_12²)_λ = (1, 0, 0) ≰ (0, 1, 1) = (p_12)_λ. It shows that P_λ² ≰ P_λ. So P_λ is not a (T, S)-SVNNEM.
By the idea of Ye in the literature [26], we can easily obtain Theorem 7.
Theorem 7.
Let P be a SVNNSM. Then, after a finite number of (T, S)-based compositions P → P² → P⁴ → ⋯ → P^(2^k) → ⋯, there exists a positive integer k such that P^(2^k) = P^(2^(k+1)). Moreover, P^(2^k) is a (T, S)-SVNNEM.
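Theorem 7 suggests computing the equivalence matrix by repeated squaring. Below is our own sketch, under the same aggregation assumption as before (max over T-values for truth, min over S-values for indeterminacy and falsity), with a numerical tolerance standing in for exact equality:

```python
# Repeated (T, S)-based squaring of an SVNN similarity matrix until it
# stabilizes; by Theorem 7 the fixed point is a (T, S)-SVNNEM.

def t_alg(a, b): return a * b
def s_alg(a, b): return a + b - a * b

def compose(P, Q, T=t_alg, S=s_alg):
    n = len(P)
    return [[(max(T(P[i][k][0], Q[k][j][0]) for k in range(n)),
              min(S(P[i][k][1], Q[k][j][1]) for k in range(n)),
              min(S(P[i][k][2], Q[k][j][2]) for k in range(n)))
             for j in range(n)] for i in range(n)]

def close(P, Q, tol=1e-9):
    return all(abs(x - y) < tol for r, s in zip(P, Q)
               for p, q in zip(r, s) for x, y in zip(p, q))

def equivalence_closure(P):
    while True:
        P2 = compose(P, P)
        if close(P, P2):
            return P
        P = P2

# Demo on the symmetric matrix P1 of Example 2:
P1 = [[(1, 0, 0), (0.3, 0.4, 0.2), (0.5, 0.4, 0.7)],
      [(0.3, 0.4, 0.2), (1, 0, 0), (0.8, 0.2, 0.1)],
      [(0.5, 0.4, 0.7), (0.8, 0.2, 0.1), (1, 0, 0)]]
C = equivalence_closure(P1)
print(C[0][1])  # (0.4, 0.4, 0.2)
```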

4. An Algorithm for Single-Valued Neutrosophic Number Clustering

This section proposes a clustering algorithm based on ( T , S ) -SVNNEM under a single-valued neutrosophic environment.
Let Y = { y 1 , y 2 , , y n } be a finite set of alternatives, and X = { x 1 , x 2 , , x m } be the set of attributes. Suppose the characteristics of the objects y i ( i = 1 , 2 , , n ) with respect to the attributes x j ( j = 1 , 2 , , m ) are expressed as r i j = ( t i j , i i j , f i j ) . The decision matrix R = ( r i j ) n × m is a SVNNM.
Step 1 Calculate the similarity measure for each pair of y_i and y_k (i, k = 1, 2, …, n) by S_ik = Sim(r_ij, r_kj) = (Sim_t(r_ij, r_kj), Sim_i(r_ij, r_kj), Sim_f(r_ij, r_kj)) (see the literature [11]):
Sim_t(r_ij, r_kj) = 1 − [ Σ_{j=1}^m |t_{r_ij} − t_{r_kj}| ] / [ Σ_{j=1}^m (t_{r_ij} + t_{r_kj}) ];
Sim_i(r_ij, r_kj) = [ Σ_{j=1}^m |i_{r_ij} − i_{r_kj}| ] / [ Σ_{j=1}^m (i_{r_ij} + i_{r_kj}) ];
Sim_f(r_ij, r_kj) = [ Σ_{j=1}^m |f_{r_ij} − f_{r_kj}| ] / [ Σ_{j=1}^m (f_{r_ij} + f_{r_kj}) ],
where r_ij is the evaluation of y_i under the attribute x_j ∈ X, j = 1, 2, …, m. Then a SVNNSM P = (S_ik)_{n×n} is constructed.
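The Step-1 measure can be sketched as follows (our code; rows are lists of (t, i, f) tuples over the m attributes, and the rows in the demo are made-up illustrative data, not from the paper). Note that a denominator vanishes if a component is zero across all attributes, so real inputs may need a guard:

```python
# Similarity measure of Step 1: one SVNN per attribute for each alternative.

def svnn_similarity(row_i, row_k):
    st = 1 - sum(abs(a[0] - b[0]) for a, b in zip(row_i, row_k)) \
           / sum(a[0] + b[0] for a, b in zip(row_i, row_k))
    si = sum(abs(a[1] - b[1]) for a, b in zip(row_i, row_k)) \
           / sum(a[1] + b[1] for a, b in zip(row_i, row_k))
    sf = sum(abs(a[2] - b[2]) for a, b in zip(row_i, row_k)) \
           / sum(a[2] + b[2] for a, b in zip(row_i, row_k))
    return (st, si, sf)

r1 = [(0.7, 0.4, 0.1), (0.5, 0.3, 0.3)]   # hypothetical alternative y1
r2 = [(0.6, 0.2, 0.2), (0.4, 0.5, 0.1)]   # hypothetical alternative y2
print(svnn_similarity(r1, r1))  # (1.0, 0.0, 0.0): the diagonal is (1, 0, 0)
```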
Step 2 Choose an appropriate (T, S) dual triangular module. Then verify whether P is a (T, S)-SVNNEM or not. If it is not, calculate the (T, S)-based compositions P → P² → ⋯ → P^(2^k) until P^(2^k) = P^(2^(k+1)). We obtain a (T, S)-SVNNEM P^(2^k).
Step 3 Take an appropriate interval for λ and calculate (P^(2^k))_λ by Definition 13. Based on (P^(2^k))_λ, if (p_ij)_λ = (p_tj)_λ for all j = 1, 2, …, n, then y_i and y_t are placed in the same category.
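Step 3's grouping rule, namely that alternatives with identical rows in the cutting matrix share a class, can be sketched as (our code, with a hypothetical cut matrix as input):

```python
# Group alternatives whose rows in the λ-cutting matrix coincide.

def clusters_from_cut(P_lam):
    groups = {}
    for idx, row in enumerate(P_lam):
        # rows are compared as tuples; dicts preserve insertion order
        groups.setdefault(tuple(row), []).append(idx)
    return list(groups.values())

# Hypothetical 4x4 cut matrix: objects 0 and 1 share a row pattern.
I, O = (1, 0, 0), (0, 1, 1)
cut = [[I, I, O, O],
       [I, I, O, O],
       [O, O, I, O],
       [O, O, O, I]]
print(clusters_from_cut(cut))  # [[0, 1], [2], [3]]
```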

5. Illustrative Example and Comparative Analysis

In this section, we give an example to show the effectiveness and rationality of our proposed method, and demonstrate its superiority by comparing it with other existing methods.

5.1. Illustrative Example

An example is given by adapting one from Zhang et al. (2007) [19]. We will use this example to illustrate the effectiveness and rationality of the proposed (T, S)-SVNNEM clustering algorithm.
Consider a car classification problem [19]. There are five new cars y i ( i = 1 , 2 , 3 , 4 , 5 ) to be classified. Every car has six evaluation attributes: (1) x 1 : Fuel economy; (2) x 2 : Coefficient of friction; (3) x 3 : Price; (4) x 4 : Comfort; (5) x 5 : Design; (6) x 6 : Safety. The characteristics of the five new cars under the six attributes are represented by the form of SVNNs, shown in Table 1.
Step 1 Calculate the similarity measures for each pair of y i and y k ( i , k = 1 , 2 , 3 , 4 , 5 ) by similarity measure S i m i k = S i m ( r i j , r k j ) = ( S i m t ( r i j , r k j ) , S i m i ( r i j , r k j ) , S i m f ( r i j , r k j ) ) , and then get the SVNNSM P.
P =
[ (1, 0, 0)           (0.83, 0.25, 0.17)  (0.81, 0.35, 0.2)   (0.81, 0.26, 0.25)  (0.78, 0.26, 0.35) ]
[ (0.83, 0.25, 0.17)  (1, 0, 0)           (0.85, 0.4, 0.16)   (0.8, 0.29, 0.21)   (0.89, 0.35, 0.33) ]
[ (0.81, 0.35, 0.2)   (0.85, 0.4, 0.16)   (1, 0, 0)           (0.71, 0.29, 0.11)  (0.81, 0.29, 0.44) ]
[ (0.81, 0.26, 0.25)  (0.8, 0.29, 0.21)   (0.71, 0.29, 0.11)  (1, 0, 0)           (0.78, 0.38, 0.51) ]
[ (0.78, 0.26, 0.35)  (0.89, 0.35, 0.33)  (0.81, 0.29, 0.44)  (0.78, 0.38, 0.51)  (1, 0, 0) ]
Step 2 Choose T ( a , b ) = a b and S ( a , b ) = a + b − a b , then calculate the ( T , S ) -based composition P² = P ∘ P.
P² =
[ (1, 0, 0)           (0.83, 0.25, 0.17)  (0.81, 0.35, 0.2)     (0.81, 0.26, 0.25)    (0.78, 0.26, 0.35) ]
[ (0.83, 0.25, 0.17)  (1, 0, 0)           (0.85, 0.4, 0.16)     (0.8, 0.29, 0.21)     (0.89, 0.35, 0.33) ]
[ (0.81, 0.35, 0.2)   (0.85, 0.4, 0.16)   (1, 0, 0)             (0.71, 0.29, 0.11)    (0.81, 0.29, 0.4372) ]
[ (0.81, 0.26, 0.25)  (0.8, 0.29, 0.21)   (0.71, 0.29, 0.11)    (1, 0, 0)             (0.78, 0.38, 0.4707) ]
[ (0.78, 0.26, 0.35)  (0.89, 0.35, 0.33)  (0.81, 0.29, 0.4372)  (0.78, 0.38, 0.4707)  (1, 0, 0) ]
Obviously, P² ≠ P, so we continue and calculate P⁴ = P² ∘ P².
P⁴ =
[ (1, 0, 0)           (0.83, 0.25, 0.17)  (0.81, 0.35, 0.2)     (0.81, 0.26, 0.25)    (0.78, 0.26, 0.35) ]
[ (0.83, 0.25, 0.17)  (1, 0, 0)           (0.85, 0.4, 0.16)     (0.8, 0.29, 0.21)     (0.89, 0.35, 0.33) ]
[ (0.81, 0.35, 0.2)   (0.85, 0.4, 0.16)   (1, 0, 0)             (0.71, 0.29, 0.11)    (0.81, 0.29, 0.4372) ]
[ (0.81, 0.26, 0.25)  (0.8, 0.29, 0.21)   (0.71, 0.29, 0.11)    (1, 0, 0)             (0.78, 0.38, 0.4707) ]
[ (0.78, 0.26, 0.35)  (0.89, 0.35, 0.33)  (0.81, 0.29, 0.4372)  (0.78, 0.38, 0.4707)  (1, 0, 0) ]
It is clear that P 4 = P 2 , that is, P 2 is a ( T , S ) -SVNNEM.
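The composition step can be checked numerically. This excerpt does not spell out the (T, S)-based composition rule, so the sketch below assumes the common max-T / min-S reading: the truth degree is composed by max over k of T, and the indeterminacy and falsity degrees by min over k of S. Under that assumption it reproduces the entries of P² above, e.g. the (3, 5) entry (0.81, 0.29, 0.4372).

```python
# Type II dual triangular module used in Step 2
T = lambda a, b: a * b          # t-norm
S = lambda a, b: a + b - a * b  # t-conorm

def compose(P, Q):
    """(T,S)-based composition of two SVNN matrices (assumed rule):
    entry (i,j) = ( max_k T(t_ik, t_kj), min_k S(i_ik, i_kj), min_k S(f_ik, f_kj) )."""
    n = len(P)
    return [[(max(T(P[i][k][0], Q[k][j][0]) for k in range(n)),
              min(S(P[i][k][1], Q[k][j][1]) for k in range(n)),
              min(S(P[i][k][2], Q[k][j][2]) for k in range(n)))
             for j in range(n)] for i in range(n)]

# The SVNNSM P from Step 1:
P = [
    [(1,0,0), (0.83,0.25,0.17), (0.81,0.35,0.2), (0.81,0.26,0.25), (0.78,0.26,0.35)],
    [(0.83,0.25,0.17), (1,0,0), (0.85,0.4,0.16), (0.8,0.29,0.21), (0.89,0.35,0.33)],
    [(0.81,0.35,0.2), (0.85,0.4,0.16), (1,0,0), (0.71,0.29,0.11), (0.81,0.29,0.44)],
    [(0.81,0.26,0.25), (0.8,0.29,0.21), (0.71,0.29,0.11), (1,0,0), (0.78,0.38,0.51)],
    [(0.78,0.26,0.35), (0.89,0.35,0.33), (0.81,0.29,0.44), (0.78,0.38,0.51), (1,0,0)],
]
P2 = compose(P, P)
print(tuple(round(v, 4) for v in P2[2][4]))  # entry (3,5) → (0.81, 0.29, 0.4372)
```

For example, the falsity 0.4372 comes from k = 2: S(0.16, 0.33) = 0.16 + 0.33 − 0.16 · 0.33, the smallest value over k.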
Step 3 Let (0.83, 0.25, 0.17) < λ ≤ (1, 0, 0). We have
(P²)_λ =
[ (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0) ]
Then y i ( i = 1 , 2 , 3 , 4 , 5 ) can be divided into five categories: { y 1 } , { y 2 } , { y 3 } , { y 4 } , { y 5 } .
Let (0.71, 0.29, 0.29) < λ ≤ (0.83, 0.25, 0.17). We have
(P²)_λ =
[ (1, 0, 0)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (1, 0, 0)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0) ]
Then y i ( i = 1 , 2 , 3 , 4 , 5 ) can be divided into four categories: { y 1 , y 2 } , { y 3 } , { y 4 } , { y 5 } .
Let (0, 0.4, 0.29) < λ ≤ (0.71, 0.29, 0.17). We have
(P²)_λ =
[ (1, 0, 0)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (1, 0, 0)  (1, 0, 0)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0) ]
Then y i ( i = 1 , 2 , 3 , 4 , 5 ) can be divided into three categories: { y 1 , y 2 } , { y 3 , y 4 } , { y 5 } .
Let (0, 1, 0.4707) < λ ≤ (0.71, 0.4, 0.29). We have
(P²)_λ =
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (0, 1, 1) ]
[ (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (0, 1, 1)  (1, 0, 0) ]
Then y i ( i = 1 , 2 , 3 , 4 , 5 ) can be divided into two categories: { y 1 , y 2 , y 3 , y 4 } , { y 5 } .
Let (0, 1, 1) ≤ λ ≤ (0.71, 0.4, 0.4707). We have
(P²)_λ =
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0) ]
[ (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0)  (1, 0, 0) ]
Then y i ( i = 1 , 2 , 3 , 4 , 5 ) can be divided into one category: { y 1 , y 2 , y 3 , y 4 , y 5 } .
When T ( a , b ) = min ( a , b ) and S ( a , b ) = max ( a , b ) (type I), the clustering results are shown in Table 2.
According to the neutrosophic orthogonal clustering algorithm proposed in [28], the clustering results are shown in Table 3.
To compare the ( T , S ) -SVNNEM clustering algorithm with the intuitionistic fuzzy equivalence matrix (IFEM) clustering algorithm of [19], we assume that the indeterminacy-membership function i y j ( x i ) in each SVNN of y j with respect to the attribute x i is not considered. Then this example reduces to the example in [19]. According to the method proposed in [19], the clustering results are shown in Table 4.
To compare with the fuzzy equivalence matrix (FEM) clustering algorithm developed in [18], we only consider the membership degree of the SVNN information. According to the method proposed in [18], the clustering results are shown in Table 5.
For convenience, we collect the results of the five clustering algorithms in Table 6 for comparison.

5.2. Analysis of Comparative Results

From Table 6, we can see that the clustering results of the five clustering methods differ. The main reasons are given by the following comparative analysis.
(1)
For the ( T , S ) -SVNNEM clustering algorithm, the dual triangular modules of type I and type II both yield five classification levels, so they have the same classification ability, but their classification results differ. The reason is that the min and max operators (type I) easily overlook the influence of the other SVNN information on the whole.
(2)
Comparing our method with [18,19], y i ( i = 1 , 2 , 3 , 4 , 5 ) can only be divided into three classification levels by the FEM clustering algorithm in [18] and the IFEM clustering algorithm in [19]. The reason is that in the clustering process we use the SVNNM instead of a fuzzy matrix, which retains more information, so the classification results are more reasonable and comprehensive.
(3)
The method in [28] does not calculate an equivalence matrix from the similarity matrix, but from the classification results, y i ( i = 1 , 2 , 3 , 4 , 5 ) can only be divided into four classification levels by the SVNN orthogonal clustering algorithm in [28], while our clustering algorithm yields five. For the example given in this paper, this shows that the classification result of the ( T , S ) -SVNNEM clustering algorithm is more accurate than that of the SVNN orthogonal clustering algorithm in [28].
(4)
Compared with the existing methods, the result of our method remains stable as the value of λ changes; that is, λ keeps the classification result unchanged within a certain interval. For example, when y i ( i = 1 , 2 , 3 , 4 , 5 ) needs to be divided into three classes, we have ( 0 , 0.4 , 0.29 ) < λ ≤ ( 0.71 , 0.29 , 0.17 ) . The classification results remain unchanged on this interval, while the methods in [18,28] cannot produce three classes at all.

6. Conclusions

SVNS is a generalization of FS and IFS, and it is more suitable for dealing with uncertain, imprecise, incomplete and inconsistent information. In addition, clustering has attracted more and more attention. In this paper, the concepts of a ( T , S ) -based composition matrix and a ( T , S ) -based single-valued neutrosophic number equivalence matrix have been developed, and a clustering algorithm has been built on them. A comparison example has been given to illustrate the effectiveness and superiority of our method, and the comparative results have been analyzed.

Author Contributions

J.M. conceived, wrote and revised this paper; H.-L.H. provided ideas and suggestions for the revision of this paper.

Funding

This research was funded by National Natural Science Foundation Project, grant number (11701089); Fujian Natural Science Foundation, grant number (2018J01422); Scientific Research Project of Minnan Normal University, grant number (MK201715).

Acknowledgments

This paper is supported by Institute of Meteorological Big Data-Digital Fujian and Fujian Key Laboratory of Data Science and Statistics.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–356.
2. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
3. Atanassov, K.; Gargov, G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
4. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic; American Research Press: Rehoboth, DE, USA, 1999; pp. 1–141.
5. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multisp. Multistruct. 2010, 4, 410–413.
6. Ngan, R.T.; Ali, M.; Le, H.S. δ-equality of intuitionistic fuzzy sets: A new proximity measure and applications in medical diagnosis. Appl. Intell. 2017, 48, 1–27.
7. Luo, M.; Zhao, R. A distance measure between intuitionistic fuzzy sets and its application in medical diagnosis. Artif. Intell. Med. 2018, 89, 34–39.
8. Huang, H.L. New distance measure of single-valued neutrosophic sets and its application. Int. J. Intell. Syst. 2016, 31, 1021–1032.
9. Mo, J.M.; Huang, H.L. Dual generalized nonnegative normal neutrosophic Bonferroni mean operators and their application in multiple attribute decision making. Information 2018, 9, 201.
10. Ye, J. Intuitionistic fuzzy hybrid arithmetic and geometric aggregation operators for the decision-making of mechanical design schemes. Appl. Intell. 2017, 47, 1–9.
11. Broumi, S.; Smarandache, F. Several similarity measures of neutrosophic sets. Neutrosophic Sets Syst. 2013, 1, 54–62.
12. Garg, H.; Rani, D. A robust correlation coefficient measure of complex intuitionistic fuzzy sets and their applications in decision-making. Appl. Intell. 2018.
13. Kumar, S.V.A.; Harish, B.S. A modified intuitionistic fuzzy clustering algorithm for medical image segmentation. J. Intell. Syst. 2017.
14. Li, Q.Y.; Ma, Y.C.; Smarandache, F.; Zhu, S.W. Single-valued neutrosophic clustering algorithm based on Tsallis entropy maximization. Axioms 2018, 7, 57.
15. Verma, H.; Agrawal, R.K.; Sharan, A. An improved intuitionistic fuzzy c-means clustering algorithm incorporating local information for brain image segmentation. Appl. Soft Comput. 2016, 46, 543–557.
16. Qureshi, M.N.; Ahamad, M.V. An improved method for image segmentation using k-means clustering with neutrosophic logic. Procedia Comput. Sci. 2018, 132, 534–540.
17. Seising, R. The emergence of fuzzy sets in the decade of the perceptron–Lotfi A. Zadeh's and Frank Rosenblatt's research work on pattern classification. Mathematics 2018, 6, 110.
18. Chen, S.L.; Li, J.G.; Wang, X.G. Fuzzy Set Theory and Its Application; Science Press: Beijing, China, 2005; pp. 94–113.
19. Zhang, H.M.; Xu, Z.S.; Chen, Q. Clustering approach to intuitionistic fuzzy sets. Control Decis. 2007, 22, 882–888.
20. Guan, C.; Yuen, K.K.F.; Coenen, F. Towards an intuitionistic fuzzy agglomerative hierarchical clustering algorithm for music recommendation in folksonomy. In Proceedings of the IEEE International Conference on Systems, Kowloon, China, 9–12 October 2016; pp. 2039–2042.
21. Kuo, R.J.; Lin, T.C.; Zulvia, F.E.; Tsai, C.Y. A hybrid metaheuristic and kernel intuitionistic fuzzy c-means algorithm for cluster analysis. Appl. Soft Comput. 2018, 67, 299–308.
22. Xu, Z.S.; Tang, J.; Liu, S. An orthogonal algorithm for clustering intuitionistic fuzzy information. Int. J. Inf. 2011, 14, 65–78.
23. Zhao, H.; Xu, Z.S.; Liu, S.S.; Wang, Z. Intuitionistic fuzzy MST clustering algorithms. Comput. Ind. Eng. 2012, 62, 1130–1140.
24. Sahu, M.; Gupta, A.; Mehra, A. Hierarchical clustering of interval-valued intuitionistic fuzzy relations and its application to elicit criteria weights in MCDM problems. Opsearch 2017, 54, 1–29.
25. Zhang, H.W. C-means clustering algorithm based on interval-valued intuitionistic fuzzy sets. Comput. Appl. Softw. 2011, 28, 122–124.
26. Ye, J. Single-valued neutrosophic clustering algorithms based on similarity measures. J. Classif. 2017, 34, 1–15.
27. Guo, Y.; Xia, R.; Şengür, A.; Polat, K. A novel image segmentation approach based on neutrosophic c-means clustering and indeterminacy filtering. Neural Comput. Appl. 2017, 28, 3009–3019.
28. Ali, M.; Le, H.S.; Khan, M.; Tung, N.T. Segmentation of dental X-ray images in medical imaging using neutrosophic orthogonal matrices. Expert Syst. Appl. 2018, 91, 434–441.
29. Ye, J. Single-valued neutrosophic minimum spanning tree and its clustering method. J. Intell. Syst. 2014, 23, 311–324.
30. Klir, G.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Application; Prentice Hall: Upper Saddle River, NJ, USA, 1995.
31. Chen, D.G. Fuzzy Rough Set Theory and Method; Science Press: Beijing, China, 2013.
32. Smarandache, F.; Vlădăreanu, L. Applications of neutrosophic logic to robotics: An introduction. In Proceedings of the IEEE International Conference on Granular Computing, Kaohsiung, Taiwan, 8–10 November 2012; pp. 607–612.
33. Wang, W.Z.; Liu, X.W. Some operations over Atanassov's intuitionistic fuzzy sets based on Einstein t-norm and t-conorm. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2013, 21, 263–276.
34. Xia, M.M.; Xu, Z.S.; Zhu, B. Some issues on intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm. Knowl.-Based Syst. 2012, 31, 78–88.
35. Garg, H. Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 2016, 101, 53–69.
36. Liu, P.D.; Chen, S.M. Heronian aggregation operators of intuitionistic fuzzy numbers based on the Archimedean t-norm and t-conorm. Int. Conf. Mach. Learn. Cybern. 2017.
37. Liu, P.D. The aggregation operators based on Archimedean t-conorm and t-norm for single-valued neutrosophic numbers and their application to decision making. Int. J. Fuzzy Syst. 2016, 18, 1–15.
38. Huang, H.L. (T,S)-based interval-valued intuitionistic fuzzy composition matrix and its application for clustering. Iran. J. Fuzzy Syst. 2012, 9, 7–19.
Table 1. The characteristics of the five new cars.
     | x1              | x2              | x3              | x4              | x5              | x6
y1   | (0.3, 0.4, 0.5) | (0.6, 0.7, 0.1) | (0.4, 0.4, 0.3) | (0.8, 0.3, 0.1) | (0.1, 0.2, 0.6) | (0.5, 0.2, 0.4)
y2   | (0.6, 0.3, 0.3) | (0.5, 0.1, 0.2) | (0.6, 0.5, 0.1) | (0.7, 0.3, 0.1) | (0.3, 0.2, 0.6) | (0.4, 0.4, 0.3)
y3   | (0.4, 0.2, 0.4) | (0.8, 0.3, 0.1) | (0.5, 0.1, 0.1) | (0.6, 0.1, 0.2) | (0.4, 0.3, 0.5) | (0.3, 0.2, 0.2)
y4   | (0.2, 0.5, 0.4) | (0.4, 0.3, 0.1) | (0.9, 0.2, 0.0) | (0.8, 0.2, 0.1) | (0.2, 0.1, 0.5) | (0.7, 0.3, 0.1)
y5   | (0.5, 0.1, 0.2) | (0.3, 0.4, 0.6) | (0.6, 0.3, 0.3) | (0.7, 0.3, 0.1) | (0.6, 0.4, 0.2) | (0.5, 0.1, 0.3)
Table 2. Clustering result of ( T , S ) -SVNNEM of type I.
λ | Classification results
(0.89, 0.33, 0.26) < λ ≤ (1, 0, 0) | {y1}, {y2}, {y3}, {y4}, {y5}
(0.85, 1, 0.29) < λ ≤ (0.89, 0.33, 0.26) | {y2, y5}, {y1}, {y3}, {y4}
(0.83, 1, 1) < λ ≤ (0.85, 0.33, 0.29) | {y2, y3, y5}, {y1}, {y4}
(0.81, 1, 1) < λ ≤ (0.83, 0.33, 0.29) | {y1, y2, y3, y5}, {y4}
(0, 1, 1) ≤ λ ≤ (0.81, 0.33, 0.29) | {y1, y2, y3, y4, y5}
Table 3. Clustering results of the algorithm in [28].
λ | Classification results
(1, 0, 0) | {y1}, {y2}, {y3}, {y4}, {y5}
(0.6, 0.6, 0.3) | {y1, y2}, {y3}, {y4}, {y5}
(0.5, 0.8, 0.4) | {y1, y2, y3, y4}, {y5}
(0.2, 0.8, 0.5) | {y1, y2, y3, y4, y5}
Table 4. Clustering results of the algorithm in [19].
λ | Classification results
0.78 < λ ≤ 1 | {y1}, {y2}, {y3}, {y4}, {y5}
0.71 < λ ≤ 0.78 | {y1, y2, y3}, {y4}, {y5}
0 ≤ λ ≤ 0.71 | {y1, y2, y3, y4, y5}
Table 5. Clustering results of the algorithm in [18].
λ | Classification results
0.7 < λ ≤ 1 | {y1}, {y2}, {y3}, {y4}, {y5}
0.6 < λ ≤ 0.7 | {y1, y2, y3, y5}, {y4}
0 ≤ λ ≤ 0.6 | {y1, y2, y3, y4, y5}
Table 6. Clustering result of five kinds of clustering algorithms.
Class | (T,S)-SVNNEM, type I (T(a,b) = min(a,b), S(a,b) = max(a,b)) | (T,S)-SVNNEM, type II (T(a,b) = ab, S(a,b) = a + b − ab) | SVNN orthogonal clustering [28] | FEM clustering [18] | IFEM clustering [19]
1 | {y1, y2, y3, y4, y5} | {y1, y2, y3, y4, y5} | {y1, y2, y3, y4, y5} | {y1, y2, y3, y4, y5} | {y1, y2, y3, y4, y5}
2 | {y1, y2, y3, y5}, {y4} | {y1, y2, y3, y4}, {y5} | {y1, y2, y3, y4}, {y5} | {y1, y2, y3, y5}, {y4} | failed
3 | {y2, y3, y5}, {y1}, {y4} | {y1, y2}, {y3, y4}, {y5} | failed | failed | {y1, y2, y3}, {y4}, {y5}
4 | {y2, y5}, {y1}, {y3}, {y4} | {y1, y2}, {y3}, {y4}, {y5} | {y1, y2}, {y3}, {y4}, {y5} | failed | failed
5 | {y1}, {y2}, {y3}, {y4}, {y5} | {y1}, {y2}, {y3}, {y4}, {y5} | {y1}, {y2}, {y3}, {y4}, {y5} | {y1}, {y2}, {y3}, {y4}, {y5} | {y1}, {y2}, {y3}, {y4}, {y5}
