1. Introduction and Preliminaries
Neutrosophic logic is a nascent field of study in which each proposition is estimated to have a proportion (percentage) of truth in a subset T, a proportion of indeterminacy in a subset I, and a proportion of falsity in a subset F. We use a subset of truth (or indeterminacy, or falsity) instead of a single number because in many situations we cannot strictly specify the proportions of truth and falsity but only approximate them; for instance, a proposition may be between 25% and 55% true and between 65% and 78% false; even worse, between 33% and 48% or between 42% and 53% true (according to different observers), and 58% or between 66% and 73% false. The subsets are not necessarily intervals, but may be any sets (open, closed, or half-open/half-closed intervals, discrete or continuous sets, intersections or unions of the previous sets, etc.) in keeping with the given proposition. Zadeh initiated the adventure of obtaining meaning and mathematical results from uncertain (fuzzy) situations [1]. Fuzzy sets brought a new dimension to classical set theory. Atanassov introduced intuitionistic fuzzy sets, which include membership and non-membership degrees [2]. Neutrosophy was proposed by Smarandache as a computational approach to the concept of neutrality [3]. Neutrosophic sets consider membership, non-membership, and indeterminacy degrees. In intuitionistic fuzzy sets, the uncertainty degree is determined as 1 minus the sum of the membership and non-membership degrees, while in neutrosophic sets the degree of uncertainty is evaluated independently of the membership and non-membership degrees. Here, the membership, non-membership, and uncertainty degrees (such as degrees of accuracy and falsity) can be evaluated according to the interpretation of the context in which they are used; this depends entirely on the subject area (the universe of discourse). This reveals a difference between neutrosophic sets and intuitionistic fuzzy sets. In this sense, the neutrosophic concept offers a possible solution and representation for problems in various fields. Two detailed, mathematically fundamental differences between relative truth (IFL) and absolute truth (NL) are:
- (i)
NL can distinguish absolute truth (truth in all possible worlds, according to Leibniz) from relative truth (truth in at least one world) because NL (absolute truth) = 1^{+} while IFL (relative truth) = 1. This has applications in philosophy (see Neutrosophy). The standard interval [0, 1] used in IFL has been extended to the non-standard unit interval ]^{−} 0, 1^{+} [ in NL. Analogous distinctions for absolute versus relative falsehood and absolute versus relative indeterminacy are also permitted in NL.
- (ii)
There is no limit on T, I, F other than they are subsets of ]^{−} 0, 1^{+} [, thus: ^{−}0 ≤ inf T + inf I + inf F ≤ sup T + sup I + sup F ≤ 3^{+} in NL. This permissiveness allows dialetheist, paraconsistent, and incomplete information to be described in NL, while these situations cannot be described in IFL since F (falsehood), T (truth), I (indeterminacy) are restricted either to t + i + f = 1 or to t^{2} + f^{2} ≤ 1, if T, I, F are all reduced to the points t, i, f respectively, or to sup T + sup I + sup F = 1 if T, I, F are subsets of [0, 1] in IFL.
Clustering data is one of the most significant problems in data analysis. Useful and efficient algorithms are needed for big data, and this is even more challenging for neutrosophic data sets, particularly those involving uncertainty. These sets are elements of some decision-making problems [4,5,6,7,8]. Several distances and similarities are used for decision-making problems [9,10]. Clustering algorithms for big data sets rely on distances (metrics). Some metrics, such as the Hamming and Euclidean distances, are used in algorithms to analyze neutrosophic data sets. In this paper, we examine the clustering of neutrosophic data sets via neutrosophic valued distances.
The notion of big data is a new label for the giant volumes of data, both structured and unstructured, that overflow several sectors on a regular basis. This does not mean that all the data are significant; the significant aspect is to obtain the desired, specific interpretation of the data. Big data can be analyzed for insights that make possible more consistent decisions and strategic positioning. Doug Laney [11] popularized the definition of big data in terms of the three Vs, later complemented by Veracity: (1) Velocity: This refers to dynamic data and the capture of data streams in near real-time. Data streams in at exceptional speed and must be dealt with in a timely manner. (2) Variety: Data comes in all types of formats, from structured, numeric data in traditional databases to unstructured material. Variety refers to the various sources and types of organized and unstructured data, stored from sources such as worksheets and databases. (3) Volume: Organizations gather data from a range of sources, including social media, business operations, and sensor or machine-to-machine data. (4) Veracity: This refers to the biases, noise, and anomalies in data. It corresponds to the question "Is the data that is being stored and extracted meaningful to the problem being examined?".
In this paper, we also focus on the K-sets cluster algorithm, a process of analyzing data with the aim of evaluating neutrosophic big data sets. The K-sets cluster is an unsupervised type of learning that is used with unlabeled data [12]. The goal of the algorithm is to find groups in the data, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of the K groups based on its features, so the data points are clustered according to feature similarity. Instead of identifying groups before examining patterns, clustering helps to find and analyze naturally occurring groups. "Choosing K" addresses the question of how the number of groups can be determined. Each cluster center is a collection of feature values that describes the resulting group, and analysis of the centroid feature weights can be used to qualitatively interpret what kind of group each cluster represents. The algorithm finds the clusters and data set labels for a particular pre-chosen K. To find the number of clusters in the data, the user must run the K-means clustering algorithm for a range of K values and compare the results. In general, there is no technique for determining a specific K value exactly, but a precise estimate can be obtained using the following methods. One of the metrics commonly used to compare results across different K values is the average distance between the data points and their cluster centroid. Increasing the number of clusters always reduces this distance, down to the extreme of zero when K equals the number of data points, so this metric cannot be used as the sole criterion. Instead, the average distance to the centroid is plotted as a function of K, and the "elbow point", where the rate of decrease changes sharply, can be used to determine K approximately.
A number of other techniques are available for validating K, including cross-validation, information criteria, the information-theoretic jump method, and the G-means algorithm. In addition, monitoring the distribution of data points across groups provides information about how the algorithm splits the data for each K. K-sets algorithms are based on measuring distances between sets. A distance is a measurement of how far apart each pair of elements of a given set is. Distance functions are important concepts in mathematics and many other computational sciences. They have wide areas of use, for example, quantifying a dissimilarity (or, equivalently, a similarity) between two objects, sets, or sets of sets in some sense. However, due to today's massive, complicated, and heterogeneous data sets, definitions of distance functions need to be more generalized and detailed. For this purpose, we define a novel metric for similarity and distance, giving neutrosophic valued G-metric spaces (NVGMS). We present the relative weighted measure definition and, finally, the K-sets algorithm after giving the definition of NVGMS.
Readers who are unfamiliar with the topic of this paper may need a natural example to understand it well. A natural example for the subject first described in this paper would require pre-existing data from everyday life; however, no such data (that is, neutrosophic big data) exist in any source yet, so we instead give an example of how such data could be obtained and clustered in Section 6 of the paper. If we encounter a sample of neutrosophic big data in the future, we will present the results with a visual sample as a technical report. In this paper, we have developed a mathematically powerful method for a notion that is still in its infancy.
1.1. $G$-Metric Spaces
A metric space is a pair (A, d), where A is a non-empty set and d is a metric defined by a certain distance between the elements of the set A. Some metrics may take different kinds of values, such as complex-valued metrics [13,14]. Mustafa and Sims defined the G-metric by generalizing this definition [15]. In particular, fixed point theorems of analysis have been studied in G-metric spaces [16,17].
Definition 1. Let A be a non-empty set and d be a metric on A, then if the following conditions hold, the pair (A, d) is called a metric space. Let $x,y,z\in A$
- (1)
$d(x,y)\ge 0$, (non-negativity)
- (2)
$d(x,y)=0\iff x=y$, (identity)
- (3)
$d(x,y)=d(y,x)$, (symmetry)
- (4)
$d(x,z)\le d(x,y)+d(y,z)$ (triangle inequality).
where $d:A\times A\to {R}^{+}\cup \left\{0\right\}$.
Definition 2. [15] Let A be a non-empty set. A function $G:A\times A\times A\to [0,+\infty )$ is called a G-distance if it satisfies the following properties:
- (1)
$G(x,y,z)=0$ if and only if $x=y=z$,
- (2)
$G(x,x,y)\ne 0$ whenever $x\ne y$,
- (3)
$G(x,x,y)\le G(x,y,z)$ for any $x,y,z\in A$, with $z\ne y$,
- (4)
$G(x,y,z)=G(x,z,y)=\dots $ (symmetric for all elements),
- (5)
$G(x,y,z)\le G(x,a,a)+G(a,y,z)$ for all $a,x,y,z\in A$ (Rectangular inequality).
The pair (A, G) is called a G-metric space. Moreover, if the G-metric has the following property, then it is called symmetric: $G(x,x,y)=G(x,y,y),\forall x,y\in A$.
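The G-distance axioms of Definition 2 can be checked numerically. The sketch below assumes the standard construction of a G-metric from an ordinary metric by summing the three pairwise distances (the construction used with d in Example 2 below), here with $d(x,y)=|x-y|$ on real numbers; the checks on sample points are ours:

```python
from itertools import permutations

def G(x, y, z, d=lambda a, b: abs(a - b)):
    """G-distance built from an ordinary metric d:
    the sum of the three pairwise distances."""
    return d(x, y) + d(y, z) + d(x, z)

# Axioms of Definition 2 on sample points:
assert G(2, 2, 2) == 0                    # (1) zero iff all arguments equal
assert G(2, 2, 5) != 0                    # (2) nonzero when x != y
assert G(2, 2, 5) <= G(2, 5, 7)           # (3) G(x,x,y) <= G(x,y,z), z != y
assert all(G(*p) == G(1, 4, 6)            # (4) symmetric in all arguments
           for p in permutations((1, 4, 6)))
x, y, z, a = 1.0, 4.0, 6.0, 2.5
assert G(x, y, z) <= G(x, a, a) + G(a, y, z)   # (5) rectangle inequality
```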
Example 1. In 3-dimensional Euclidean metric space, one can construct a G-metric space $\left({E}^{3},G\right)$ from the norm of the vector product, where $x,y,z\in {E}^{3}$ and $\Vert .\times .\Vert $ represents the norm of the vector product of two vectors in ${E}^{3}$. It is obvious that this satisfies all the conditions in Definition 2, because the norm has the metric properties, and it is symmetric.

Example 2. Let (A, d) be a metric space. Then
$$G(x,y,z)=d(x,y)+d(y,z)+d(x,z)$$
is a G-metric, where $x,y,z\in A$. The fact that d is a metric means it satisfies the triangle inequality; thus, G is always positive definite.

Proposition 1. [17] Let (A, G) be a G-metric space. Then a metric on A can be defined from the G-metric:
$${d}_{G}(x,y)=G(x,y,y)+G(x,x,y).$$

1.2. Neutrosophic Sets
Neutrosophy is a generalized form of the philosophy of intuitionistic fuzzy logic. In neutrosophic logic, there is no restriction on truth, indeterminacy, and falsity, and each takes a value in the unit real interval for each element of a neutrosophic set. These values are independent of each other. Sometimes intuitionistic fuzzy logic is not enough for solving real-life problems, e.g., engineering problems. So, mathematically, modelling with neutrosophic elements is becoming important for these problems. Studies have been conducted in many areas of mathematics and other related sciences, especially computer science, since Smarandache made this philosophical definition [18,19].
Definition 3. Let E be a universe of discourse and$A\subseteq E.$$A=\left\{\left(x,T(x),I(x),F(x)\right):x\in E\right\}$is a neutrosophic set or single valued neutrosophic set (SVNS), where${T}_{A},{I}_{A},{F}_{A}:A\to {]}^{-}0,{1}^{+}[$are the truth-membership function, the indeterminacy-membership function and the falsity-membership function, respectively. Here,${}^{-}0\le {T}_{A}(x)+{I}_{A}(x)+{F}_{A}(x)\le {3}^{+}$.
Definition 4. For the SVNS A in E, the triple$\langle {T}_{A},{I}_{A},{F}_{A}\rangle $is called the single valued neutrosophic number (SVNN).
Definition 5. Let $n=\langle {T}_{n},{I}_{n},{F}_{n}\rangle $ be an SVNN; then the score function of $n$ can be given as follows:
$${s}_{n}=\frac{1+{T}_{n}-2{I}_{n}-{F}_{n}}{2}$$
where ${s}_{n}\in \left[-1,1\right]$.

Definition 6. Let $n=\langle {T}_{n},{I}_{n},{F}_{n}\rangle $ be an SVNN; then the accuracy function of n can be given as follows:
$${h}_{n}=\frac{2+{T}_{n}-{I}_{n}-{F}_{n}}{3}$$
where ${h}_{n}\in \left[0,1\right]$.
Definition 7. Let${n}_{1}$and${n}_{2}$be two SVNNs. Then, the ranking of two SVNNs can be defined as follows:
- (I)
If${s}_{{n}_{1}}>{s}_{{n}_{2}},$then${n}_{1}>{n}_{2}$;
- (II)
If${s}_{{n}_{1}}={s}_{{n}_{2}}\text{}\mathrm{and}\text{}{h}_{{n}_{1}}\ge {h}_{{n}_{2}},$then${n}_{1}\ge {n}_{2}$.
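Definitions 5–7 can be sketched computationally, using the score and accuracy formulas $s=(1+T-2I-F)/2$ and $h=(2+T-I-F)/3$ that the paper applies to the neutral elements in Section 2; the function names and sample numbers are ours:

```python
def score(n):
    """Score s_n in [-1, 1] of an SVNN n = (T, I, F) (Definition 5)."""
    T, I, F = n
    return (1 + T - 2 * I - F) / 2

def accuracy(n):
    """Accuracy h_n in [0, 1] of an SVNN n = (T, I, F) (Definition 6)."""
    T, I, F = n
    return (2 + T - I - F) / 3

def ranks_at_least(n1, n2):
    """Definition 7: rank by score; ties are broken by accuracy."""
    if score(n1) != score(n2):
        return score(n1) > score(n2)
    return accuracy(n1) >= accuracy(n2)

n1, n2 = (0.7, 0.2, 0.1), (0.4, 0.3, 0.5)
# score(n1) = (1 + 0.7 - 0.4 - 0.1)/2 = 0.6
# score(n2) = (1 + 0.4 - 0.6 - 0.5)/2 = 0.15, so n1 > n2.
assert ranks_at_least(n1, n2)
```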
2. Neutrosophic Valued Metric Spaces
The distance is measured via operators defined on non-empty sets. In general, the neutral (zero) element of such an operator depends on the underlying set and the operation.
2.1. Operators
Definition 8. [20,21] Let $A$ be a non-empty SVNS and $x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle ,y=\langle {T}_{y},{I}_{y},{F}_{y}\rangle $ be two SVNNs. The addition, multiplication, multiplication by a scalar $\alpha \in {\mathbb{R}}^{+}$, and exponential of SVNNs are defined as follows, respectively:
$$x\oplus y=\langle {T}_{x}+{T}_{y}-{T}_{x}{T}_{y},{I}_{x}{I}_{y},{F}_{x}{F}_{y}\rangle$$
$$x\otimes y=\langle {T}_{x}{T}_{y},{I}_{x}+{I}_{y}-{I}_{x}{I}_{y},{F}_{x}+{F}_{y}-{F}_{x}{F}_{y}\rangle$$
$$\alpha x=\langle 1-{\left(1-{T}_{x}\right)}^{\alpha},{I}_{x}^{\alpha},{F}_{x}^{\alpha}\rangle$$
$${x}^{\alpha}=\langle {T}_{x}^{\alpha},1-{\left(1-{I}_{x}\right)}^{\alpha},1-{\left(1-{F}_{x}\right)}^{\alpha}\rangle$$
From this definition, we have the following theorems as a result:
Theorem 1. Let$x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle $be an SVNN. The neutral element of the additive operator of the set$A$is${0}_{A}=\langle 0,1,1\rangle $.
Proof. Let $x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle $ and ${0}_{A}=\langle {T}_{0},{I}_{0},{F}_{0}\rangle =\langle 0,1,1\rangle $ be two SVNNs. Using Definition 8, we have
$$x\oplus {0}_{A}=\langle {T}_{x}+0-{T}_{x}\cdot 0,{I}_{x}\cdot 1,{F}_{x}\cdot 1\rangle =\langle {T}_{x},{I}_{x},{F}_{x}\rangle =x.$$
(There is no need to check the left-hand side because the operator is commutative in every component). □
To compare neutrosophic values based on a neutral element, we calculate the score and accuracy functions of the neutral element ${0}_{A}=\langle 0,1,1\rangle $, respectively: ${s}_{0}=\frac{1+0-2\cdot 1-1}{2}=-1$ and ${h}_{0}=\frac{2+0-1-1}{3}=0$.
Theorem 2. Let$x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle $be an SVNN. The neutral element of the multiplication operator of the$A$is${1}_{A}=\langle 1,0,0\rangle $.
Proof. Let $x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle $ and ${1}_{A}=\langle {T}_{1},{I}_{1},{F}_{1}\rangle =\langle 1,0,0\rangle $ be two SVNNs. Using Definition 8, we have
$$x\otimes {1}_{A}=\langle {T}_{x}\cdot 1,{I}_{x}+0-{I}_{x}\cdot 0,{F}_{x}+0-{F}_{x}\cdot 0\rangle =\langle {T}_{x},{I}_{x},{F}_{x}\rangle =x.$$
In addition, the score and accuracy functions of the neutral element ${1}_{A}=\langle 1,0,0\rangle $ are ${s}_{1}=\frac{1+{T}_{1}-2{I}_{1}-{F}_{1}}{2}=1$ and ${h}_{1}=\frac{2+{T}_{1}-{I}_{1}-{F}_{1}}{3}=1$, respectively. □
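Theorems 1 and 2 can be checked numerically. The sketch below assumes the standard SVNN operators (probabilistic sum and product componentwise), which are consistent with the relative-distance derivations later in the paper; the helper names are ours:

```python
def svnn_add(x, y):
    """x (+) y for SVNNs: probabilistic sum on T, product on I and F."""
    (T1, I1, F1), (T2, I2, F2) = x, y
    return (T1 + T2 - T1 * T2, I1 * I2, F1 * F2)

def svnn_mul(x, y):
    """x (x) y for SVNNs: product on T, probabilistic sum on I and F."""
    (T1, I1, F1), (T2, I2, F2) = x, y
    return (T1 * T2, I1 + I2 - I1 * I2, F1 + F2 - F1 * F2)

ZERO = (0.0, 1.0, 1.0)   # 0_A, neutral for addition (Theorem 1)
ONE = (1.0, 0.0, 0.0)    # 1_A, neutral for multiplication (Theorem 2)

x = (0.6, 0.3, 0.2)
assert svnn_add(x, ZERO) == x
assert svnn_mul(x, ONE) == x
```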
2.2. Neutrosophic Valued Metric Spaces
In this section, we consider the metric and generalized metric spaces in the neutrosophic meaning.
Definition 9. The ordering in Definition 7 gives an order relation for elements of a set of SVNNs. Suppose that the mapping $d:X\times X\to A,$ where $X$ and $A$ are SVNS, satisfies:
- (I)
${0}_{A}\le d(x,y)$and$d(x,y)={0}_{A}\iff {s}_{x}={s}_{y}\text{}\mathrm{and}\text{}{h}_{x}={h}_{y}$for all$x,y\in X$.
- (II)
$d(x,y)=d(y,x)$for all$x,y\in X$.
Then d is called a neutrosophic valued metric on $X$, and the pair $(X,d)$ is called neutrosophic valued metric space. Here, the third condition (triangular inequality) of the metric spaces is not suitable for SVNS because the addition is not ordinary addition.
Theorem 3. Let$(X,d)$be a neutrosophic valued metric space. Then, there are relationships among truth, indeterminacy and falsity values:
- (I)
$0<T(x,y)-2I(x,y)-F(x,y)+3$, and if ${s}_{0}={s}_{d}$ then $0\le T(x,y)-I(x,y)-F(x,y)+2$.
- (II)
If$d(x,y)={0}_{A}\iff T(x,y)=0,I(x,y)=F(x,y)=1.$
- (III)
$T(x,y)=T(y,x)$,$I(x,y)=I(y,x)$,$F(x,y)=F(y,x)$so, each distance function must be symmetric.
where$T(.,.)$,$I(.,.)$and$F(.,.)$are distances within themselves of the truth, indeterminacy and falsity functions, respectively.
Proof. - (I)
$\begin{array}{ll}{0}_{A}<d(x,y)& \iff \langle 0,1,1\rangle <\langle T(x,y),I(x,y),F(x,y)\rangle \\ & \iff {s}_{0}<{s}_{d}\iff -1<\frac{1+T(x,y)-2I(x,y)-F(x,y)}{2}\\ & \iff 0<T(x,y)-2I(x,y)-F(x,y)+3\end{array}$
- (II)
$\begin{array}{ll}d(x,y)=d(y,x)& \iff \langle T(x,y),I(x,y),F(x,y)\rangle =\langle T(y,x),I(y,x),F(y,x)\rangle \\ & \iff T(x,y)=T(y,x),I(x,y)=I(y,x),F(x,y)=F(y,x)\end{array}$□
Example 3. Let $A$ be a non-empty SVNS and $x=\langle {T}_{x},{I}_{x},{F}_{x}\rangle ,y=\langle {T}_{y},{I}_{y},{F}_{y}\rangle $ be two SVNNs. If we define the metric $d:X\times X\to A$ as
$$d(x,y)=\langle \left|{T}_{x}-{T}_{y}\right|,1-\left|{I}_{x}-{I}_{y}\right|,1-\left|{F}_{x}-{F}_{y}\right|\rangle$$
then
- (I)
$$0\le \left|{T}_{x}-{T}_{y}\right|-2\left(1-\left|{I}_{x}-{I}_{y}\right|\right)-\left(1-\left|{F}_{x}-{F}_{y}\right|\right)+3\iff 0\le \left|{T}_{x}-{T}_{y}\right|+2\left|{I}_{x}-{I}_{y}\right|+\left|{F}_{x}-{F}_{y}\right|$$
so it satisfies the first condition.
- (II)
By the properties of the absolute value function, this condition is obvious.
So, $(X,d)$ is a neutrosophic valued metric space.
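Example 3's component-wise metric can be sketched directly (the metric reads off from the inequalities above); a minimal Python version with the symmetry and zero-element checks of Definition 9, on sample SVNNs of our choosing:

```python
def d(x, y):
    """Example 3: d(x, y) = <|Tx-Ty|, 1-|Ix-Iy|, 1-|Fx-Fy|>."""
    (Tx, Ix, Fx), (Ty, Iy, Fy) = x, y
    return (abs(Tx - Ty), 1 - abs(Ix - Iy), 1 - abs(Fx - Fy))

x, y = (0.7, 0.2, 0.1), (0.4, 0.5, 0.3)
assert d(x, y) == d(y, x)           # symmetry (condition II)
assert d(x, x) == (0.0, 1.0, 1.0)   # d(x, x) = 0_A = <0, 1, 1>
```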
3. Neutrosophic Valued $G$-Metric Spaces
Definition 10. Let X and A be a non-empty SVNS. A function$G:X\times X\times X\to A$is called neutrosophic valued$G$-metric if it satisfies the following properties:
- (1)
$G(x,y,z)={0}_{A}$if and only if$x=y=z$,
- (2)
$G(x,x,y)\ne {0}_{A}$whenever$x\ne y$,
- (3)
$G(x,x,y)\le G(x,y,z)$for any$x,y,z\in X$, with$z\ne y$,
- (4)
$G(x,y,z)=G(x,z,y)=\dots $(symmetric for all elements).
The pair (X, G) is called a neutrosophic valued G-metric space.
Theorem 4. Let (X, G) be a neutrosophic valued G-metric space then, it satisfies followings:
- (1)
$T(x,x,x)=0,I(x,x,x)=F(x,x,x)=1.$
- (2)
Assume$x\ne y$, then$T(x,y,z)\ne 0,I(x,y,z)\ne 1,F(x,y,z)\ne 1.$
- (3)
$0<T(x,y,z)-T(x,x,y)+2\left(I(x,x,y)-I(x,y,z)\right)+F(x,x,y)-F(x,y,z)$
- (4)
$T(x,y,z),I(x,y,z)\text{}\mathrm{and}\text{}F(x,y,z)$are symmetric for all elements.
where$T(.,.,.)$,$I(.,.,.)$and$F(.,.,.)$are G-distance functions of truth, indeterminacy and falsity values of the element of the set, respectively.
The proofs are similar to those for neutrosophic valued metric spaces.

Example 4. Let $X$ be a non-empty SVNS, and let the G-distance function be defined via a neutrosophic valued metric $d(.,.)$. The pair (X, G) is obviously a neutrosophic valued G-metric space because of the properties of $d(.,.)$. Further, it has the commutative (symmetry) property.

4. Relative Weighted Neutrosophic Valued Distances and Cohesion Measures
The relative distance measure is a method used for clustering data sets. We define the relative weighted distance, which is a more sensitive method for big data sets.
Let ${x}_{i}=\langle {T}_{{x}_{i}},{I}_{{x}_{i}},{F}_{{x}_{i}}\rangle \in A$ (a non-empty SVNS), $i=0,\dots ,n$, be SVNNs. Then the neutrosophic weighted average operator of these SVNNs is defined using the weight ${\chi}_{i}$ of the i-th datum. For a given neutrosophic data set $W=\left\{{w}_{1},{w}_{2},{w}_{3},\dots ,{w}_{n}\right\}$ and a neutrosophic valued metric d, we define a relative neutrosophic valued distance by choosing another reference neutrosophic datum and computing the relative neutrosophic valued distance as the average of the differences of distances over all the neutrosophic data ${w}_{k}\in W$.
Definition 11. The relative neutrosophic valued distance from a neutrosophic datum ${w}_{i}$ to another neutrosophic datum ${w}_{j}$ is defined as follows. Since the T, I, F values of SVNNs cannot be negative, we define the expression $d\left({w}_{i},{w}_{j}\right)\overline{)\circ}d\left({w}_{i},{w}_{k}\right)$ as the distance between these two neutrosophic valued distances. Furthermore, the distance between metrics is again neutrosophic valued here, so a relative neutrosophic valued distance can be defined as below. The difference operator $\overline{)\circ}$ is generally not a neutrosophic valued metric (or G-metric). We use some abbreviations to save space.
$$\begin{array}{ll}RD\left({w}_{i}\Vert {w}_{j}\right)& =\frac{1}{n}{\displaystyle \sum _{{w}_{k}\in W}\left(d\left({w}_{i},{w}_{j}\right)\overline{)\circ}d\left({w}_{i},{w}_{k}\right)\right)}\\ & =d\left({w}_{i},{w}_{j}\right)\overline{)\circ}\frac{1}{n}{\displaystyle \sum _{{w}_{k}\in W}d\left({w}_{i},{w}_{k}\right)}\\ & =\langle T({w}_{i},{w}_{j}),I({w}_{i},{w}_{j}),F({w}_{i},{w}_{j})\rangle \overline{)\circ}\frac{1}{n}\left(d\left({w}_{i},{w}_{1}\right)\oplus d\left({w}_{i},{w}_{2}\right)\oplus \dots \oplus d\left({w}_{i},{w}_{n}\right)\right)\\ & =\langle T({w}_{i},{w}_{j}),I({w}_{i},{w}_{j}),F({w}_{i},{w}_{j})\rangle \\ & \phantom{=}\overline{)\circ}\frac{1}{n}\left[\langle T({w}_{i},{w}_{1}),I({w}_{i},{w}_{1}),F({w}_{i},{w}_{1})\rangle \oplus \dots \oplus \langle T({w}_{i},{w}_{n}),I({w}_{i},{w}_{n}),F({w}_{i},{w}_{n})\rangle \right]\\ & =\langle T({w}_{i},{w}_{j}),I({w}_{i},{w}_{j}),F({w}_{i},{w}_{j})\rangle \\ & \phantom{=}\overline{)\circ}\frac{1}{n}\left[\langle {\displaystyle \sum _{k\in W}T({w}_{i},{w}_{k})}-{\displaystyle \prod _{k\in W}T({w}_{i},{w}_{k})},{\displaystyle \prod _{k\in W}I({w}_{i},{w}_{k})},{\displaystyle \prod _{k\in W}F({w}_{i},{w}_{k})}\rangle \right]\\ & =\langle T({w}_{i},{w}_{j}),I({w}_{i},{w}_{j}),F({w}_{i},{w}_{j})\rangle \\ & \phantom{=}\overline{)\circ}\langle 1-{\left[1-{\displaystyle \sum _{k\in W}T({w}_{i},{w}_{k})}+{\displaystyle \prod _{k\in W}T({w}_{i},{w}_{k})}\right]}^{1/n},{\displaystyle \prod _{k\in W}I{({w}_{i},{w}_{k})}^{1/n}},{\displaystyle \prod _{k\in W}F{({w}_{i},{w}_{k})}^{1/n}}\rangle \\ & =\langle {T}_{1},{I}_{1},{F}_{1}\rangle \overline{)\circ}\langle {T}_{2},{I}_{2},{F}_{2}\rangle \\ & =\langle 1-\left|{T}_{1}-{\left({T}_{2}-1\right)}^{2}\right|,1-\left|{I}_{1}-{I}_{2}{}^{2}\right|,1-\left|{F}_{1}-{F}_{2}{}^{2}\right|\rangle \end{array}$$
where ${T}_{1}$, ${I}_{1}$, ${F}_{1}$ and ${T}_{2}$, ${I}_{2}$, ${F}_{2}$ are the components of the first and second SVNN in the previous equation, respectively.

Definition 12. The relative weighted neutrosophic valued distance from a neutrosophic datum ${w}_{i}$ to another neutrosophic datum ${w}_{j}$ is defined as follows:
$$\begin{array}{ll}R{D}_{\chi}\left({w}_{i}\Vert {w}_{j}\right)& ={\displaystyle \underset{i\ne j,j\ne k,i\ne k}{\sum _{{w}_{k}\in W}}{\chi}_{w}\left(d\left({w}_{i},{w}_{j}\right)\overline{)\circ}d\left({w}_{i},{w}_{k}\right)\right)}\\ & ={\chi}_{ij}d\left({w}_{i},{w}_{j}\right)\overline{)\circ}{\displaystyle \underset{i\ne j,j\ne k,i\ne k}{\sum _{{w}_{k}\in W}}{\chi}_{ik}d\left({w}_{i},{w}_{k}\right)}\\ & ={\chi}_{ij}\langle T({w}_{i},{w}_{j}),I({w}_{i},{w}_{j}),F({w}_{i},{w}_{j})\rangle \\ & \overline{)\circ}\left({\chi}_{i1}\langle T({w}_{i},{w}_{1}),I({w}_{i},{w}_{1}),F({w}_{i},{w}_{1})\rangle \oplus \dots \oplus {\chi}_{in}\langle T({w}_{i},{w}_{n}),I({w}_{i},{w}_{n}),F({w}_{i},{w}_{n})\rangle \right)\\ & =\langle 1-{\left(1-T({w}_{i},{w}_{j})\right)}^{{\chi}_{ij}},I{({w}_{i},{w}_{j})}^{{\chi}_{ij}},F{({w}_{i},{w}_{j})}^{{\chi}_{ij}}\rangle \\ & \overline{)\circ}\left(\begin{array}{l}\langle 1-{\left(1-T({w}_{i},{w}_{1})\right)}^{{\chi}_{i1}},I{({w}_{i},{w}_{1})}^{{\chi}_{i1}},F{({w}_{i},{w}_{1})}^{{\chi}_{i1}}\rangle \oplus \dots \\ \oplus \langle 1-{\left(1-T({w}_{i},{w}_{n})\right)}^{{\chi}_{in}},I{({w}_{i},{w}_{n})}^{{\chi}_{in}},F{({w}_{i},{w}_{n})}^{{\chi}_{in}}\rangle \end{array}\right)\\ & =\langle 1-{\left(1-T({w}_{i},{w}_{j})\right)}^{{\chi}_{ij}},I{({w}_{i},{w}_{j})}^{{\chi}_{ij}},F{({w}_{i},{w}_{j})}^{{\chi}_{ij}}\rangle \\ & \overline{)\circ}\langle {\displaystyle \sum _{\underset{k\ne i,j}{k=1}}^{n}{\stackrel{\sim}{T}}_{ik}-{\displaystyle \prod _{\underset{k\ne i,j}{k=1}}^{n}{\stackrel{\sim}{T}}_{ik}}},{\displaystyle \prod _{\underset{k\ne i,j}{k=1}}^{n}{\stackrel{\sim}{I}}_{ik}},{\displaystyle \prod _{\underset{k\ne i,j}{k=1}}^{n}{\stackrel{\sim}{F}}_{ik}}\rangle \\ & =\langle {T}_{1},{I}_{1},{F}_{1}\rangle \overline{)\circ}\langle {T}_{2},{I}_{2},{F}_{2}\rangle \\ & =\langle 1-\left|{T}_{1}-{\left({T}_{2}-1\right)}^{2}\right|,1-\left|{I}_{1}-{I}_{2}{}^{2}\right|,1-\left|{F}_{1}-{F}_{2}{}^{2}\right|\rangle \end{array}$$
where${\stackrel{\sim}{T}}_{ik}=1-{\left(1-T({w}_{i},{w}_{k})\right)}^{{\chi}_{ik}},\text{}{\stackrel{\sim}{I}}_{ik}=I{({w}_{i},{w}_{k})}^{{\chi}_{ik}},\text{}{\stackrel{\sim}{F}}_{ik}=F{({w}_{i},{w}_{k})}^{{\chi}_{ik}}$.
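Definition 12 can be turned into a short computational sketch. The code below assumes the standard SVNN addition and scalar multiplication (consistent with the formulas in the derivation above) and uses the difference operator given there; all helper names and the toy distance table are ours:

```python
def svnn_add(x, y):
    """x (+) y: probabilistic sum on T, product on I and F (assumed standard)."""
    (T1, I1, F1), (T2, I2, F2) = x, y
    return (T1 + T2 - T1 * T2, I1 * I2, F1 * F2)

def svnn_scale(c, x):
    """c . x for a scalar c > 0 (assumed standard SVNN scalar multiple)."""
    T, I, F = x
    return (1 - (1 - T) ** c, I ** c, F ** c)

def svnn_diff(a, b):
    """Difference operator of Definitions 11-12:
    <1-|T1-(T2-1)^2|, 1-|I1-I2^2|, 1-|F1-F2^2|>."""
    (T1, I1, F1), (T2, I2, F2) = a, b
    return (1 - abs(T1 - (T2 - 1) ** 2),
            1 - abs(I1 - I2 ** 2),
            1 - abs(F1 - F2 ** 2))

def relative_weighted_distance(i, j, dist, weights):
    """RD_chi(w_i || w_j): chi_ij d(w_i, w_j), minus (via the difference
    operator) the (+)-sum of chi_ik d(w_i, w_k) over k != i, j."""
    n = len(dist)
    lhs = svnn_scale(weights[i][j], dist[i][j])
    terms = [svnn_scale(weights[i][k], dist[i][k])
             for k in range(n) if k not in (i, j)]
    acc = terms[0]  # assumes at least three data points
    for t in terms[1:]:
        acc = svnn_add(acc, t)
    return svnn_diff(lhs, acc)

# Toy 3-point SVNN-valued distance table (symmetric, 0_A on the diagonal):
Z = (0.0, 1.0, 1.0)
dist = [[Z, (0.2, 0.8, 0.9), (0.5, 0.6, 0.9)],
        [(0.2, 0.8, 0.9), Z, (0.3, 0.6, 0.8)],
        [(0.5, 0.6, 0.9), (0.3, 0.6, 0.8), Z]]
w = [[1.0] * 3 for _ in range(3)]
rd = relative_weighted_distance(0, 1, dist, w)
```

With all weights equal to 1, the result reduces to the unweighted relative distance of Definition 11.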
Definition 13. The relative weighted neutrosophic valued distance (from a random neutrosophic datum ${w}_{i}$) to a neutrosophic datum ${w}_{j}$ is defined as follows:

Definition 14. The relative weighted neutrosophic valued distance from a neutrosophic data set ${W}_{1}$ to another neutrosophic data set ${W}_{2}$ is defined as follows:

Definition 15. (Weighted cohesion measure between two neutrosophic data) The difference of the relative weighted neutrosophic valued distance to ${w}_{j}$ and the relative weighted neutrosophic valued distance from ${w}_{i}$ to ${w}_{j}$ is called the weighted neutrosophic valued cohesion measure ${\rho}_{\chi}({w}_{i},{w}_{j})$ between the two neutrosophic data ${w}_{i}$ and ${w}_{j}$. If ${\rho}_{\chi}({w}_{i},{w}_{j})\ge {0}_{W}$ (resp. ${\rho}_{\chi}({w}_{i},{w}_{j})\le {0}_{W}$), then ${w}_{i}$ and ${w}_{j}$ are said to be cohesive (resp. incohesive). So, for cohesive data, the relative weighted neutrosophic distance from ${w}_{i}$ to ${w}_{j}$ is not larger than the relative weighted neutrosophic distance (from a random neutrosophic datum) to ${w}_{j}$.
Definition 16. (Weighted cohesion measure between two neutrosophic data sets) Let ${w}_{i}$ and ${w}_{j}$ be elements of the neutrosophic data sets U and V, respectively. Then the resulting measure is called the weighted cohesion neutrosophic valued measure of the neutrosophic data sets U and V.

Definition 17. (Cluster) A non-empty neutrosophic data set W is called a cluster if it is cohesive, i.e., $\rho (W,W)\ge {0}_{W}$.
6. Application and Example

We now describe the kind of data that fits this framework. We can call a data set a big data set if it is difficult and/or voluminous to define, analyze, and visualize. We give a big neutrosophic data example in accordance with this definition and a possible use of the G-metric; it is fictional, since there is no real neutrosophic big data example yet. Image processing, one of the current topics in big data analysis, is a good candidate for such an example. Imagine a camera on a circuit board that is able to distinguish colors, cluster all the vehicles it can capture in the image, and record that data. For any given color (for example, a white vehicle), the camera assigns the following degrees:
- (I)
When the vehicle is at a distance at which the color can be detected, the proportion of the vehicle whose color is detected is assigned as the truth value.
- (II)
The rate at which the vehicle can only uncertainly be detected by the camera is assigned as the uncertainty value (external factors, such as the effect of daylight, may mix the color so that it is determined on a different scale).
- (III)
The rate of not seeing a large part of the vehicle, or the rate at which the color is out of the detection range, is assigned as the falsity value.
Thus, the camera's data is clustered via the G-metric. As a result, the daily quantities and colors of the vehicles passing by are determined. The data will change continuously as long as the road is open and the camera records; there will be a neutrosophic datum for each vehicle, so a big neutrosophic data clustering will occur.
Here, the weight functions we have defined for the metric can be assigned the value 1 for the primary colors (red, yellow, blue). For secondary or mixed colors, a proportional value may be assigned depending on which primary color is closer.
A Numerical Toy Example
Take five neutrosophic data, all with weights equal to 1, to make a numerical example:
K = 2 disjoint sets can be chosen: ${U}_{1}=\left\{{w}_{1},{w}_{4},{w}_{5}\right\},{U}_{2}=\left\{{w}_{2},{w}_{3}\right\}$.
Then
$$d({w}_{i},{w}_{j})=\left[\begin{array}{ccccc}\langle 0,1,1\rangle & \langle 0.2,0.8,0.9\rangle & \langle 0.1,0.8,0.9\rangle & \langle 0.3,0.9,1.0\rangle & \langle 0.5,0.6,0.9\rangle \\ \langle 0.2,0.8,0.9\rangle & \langle 0,1,1\rangle & \langle 0.3,0.6,0.8\rangle & \langle 0.1,0.9,0.9\rangle & \langle 0.7,0.8,0.8\rangle \\ \langle 0.1,0.8,0.9\rangle & \langle 0.3,0.6,0.8\rangle & \langle 0,1,1\rangle & \langle 0.4,0.7,0.9\rangle & \langle 0.4,0.4,1.0\rangle \\ \langle 0.3,0.9,1.0\rangle & \langle 0.1,0.9,0.9\rangle & \langle 0.4,0.7,0.9\rangle & \langle 0,1,1\rangle & \langle 0.2,0.8,0.9\rangle \\ \langle 0.5,0.6,0.9\rangle & \langle 0.7,0.8,0.8\rangle & \langle 0.4,0.4,1.0\rangle & \langle 0.2,0.8,0.9\rangle & \langle 0,1,1\rangle \end{array}\right]$$
where we assume the
$d({w}_{i},{w}_{j})$ as in Example 3. So, we can compute the
G-metrics of the data as in Equation (3):
$$\begin{array}{l}G({w}_{1},{U}_{1})=G({w}_{1},{w}_{4},{w}_{5})=\langle 0.99,0.90,0.91\rangle \\ G({w}_{1},{U}_{2})=G({w}_{1},{w}_{2},{w}_{3})=\langle 0.79,0.72,0.83\rangle \\ G({w}_{2},{U}_{1})=G({w}_{2},{w}_{1},{w}_{4})\oplus G({w}_{2},{w}_{1},{w}_{5})\oplus G({w}_{2},{w}_{4},{w}_{5})=\langle 0.9874,0.6027,0.6707\rangle \\ G({w}_{2},{U}_{2})=G({w}_{2},{w}_{2},{w}_{3})=\langle 0,1,1\rangle \\ G({w}_{3},{U}_{1})=G({w}_{3},{w}_{1},{w}_{4})\oplus G({w}_{3},{w}_{1},{w}_{5})\oplus G({w}_{3},{w}_{4},{w}_{5})=\langle 1,0.4608,0.6707\rangle \\ G({w}_{3},{U}_{2})=G({w}_{3},{w}_{2},{w}_{3})=\langle 0,1,1\rangle \\ G({w}_{4},{U}_{1})=G({w}_{4},{w}_{1},{w}_{5})=\langle 0.81,0.64,0.91\rangle \\ G({w}_{4},{U}_{2})=G({w}_{4},{w}_{2},{w}_{3})=\langle 0.97,0.73,0.83\rangle \end{array}$$
So, according to the calculations above, ${w}_{4}$ belongs to the set ${U}_{1}$ and the other data belong to ${U}_{2}$. Here, we have assigned the data to the clusters according to the truth values of the G-metrics: if the truth value of the G-distance to a set is lower, then the datum is closer to that set.
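The assignment rule used above (a lower truth value of the G-distance means the datum is closer to the set) can be verified on the computed truth components; the values below are copied from the calculations above (the G-distances for ${w}_{5}$ are not listed there, so it is omitted):

```python
# Truth components of the G-distances of each datum to the two candidate
# sets, copied from the calculations in the text.
G_to_U1 = {"w1": 0.99, "w2": 0.9874, "w3": 1.0, "w4": 0.81}
G_to_U2 = {"w1": 0.79, "w2": 0.0, "w3": 0.0, "w4": 0.97}

# Lower truth value of the G-distance => the datum is closer to that set.
assign = {w: ("U1" if G_to_U1[w] < G_to_U2[w] else "U2") for w in G_to_U1}

# Reproduces the conclusion of the text: w4 -> U1, the rest -> U2.
assert assign == {"w1": "U2", "w2": "U2", "w3": "U2", "w4": "U1"}
```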