1. Introduction and Related Work
Social Network Analysis (SNA) is the study and understanding of the relationships between two or more entities. As one of the most active topics in SNA, the Community Detection Problem (CDP) has become a problem of great interest in modern statistics, with applications in several fields [1,2,3].
Most of the algorithms and definitions related to community detection assume that the only information available for identifying the clusters/communities in a network is the graph which describes its structure. This graph can be nondirected and binary (all the relations in the network are equal), as in the classical and most studied community detection problem [2]; nondirected and valued (the network is modeled by a weighted nondirected graph); or directed, when the relations are not symmetrical [1,4,5]. There are other interesting approaches focused on incorporating additional information into the crisp graphs, specifically to find communities in a network [6,7]. Nevertheless, all of these approaches consider the community detection problem only from a topological point of view, focusing on the relations between nodes and disregarding other types of information that could be relevant for finding communities in a real problem.
We illustrate our idea with an example. Let us present a situation in which we have a set V of nodes which represents the members of a parliament, whose friendly relations are known to us by the crisp graph $G=(V,E)$. Let us assume that the reason why they are interacting is because they are voting on a specific law in parliament. This information (the voting problem) and also their political preference on the law (or their capacity in the voting problem) could be relevant information to identify the clusters in the network.
To deal with this type of problem, in [8,9,10,11,12], the authors introduce a new element into the community detection problem: a capacity measure that tries to model and reflect the reason why the nodes are interacting in the network, in addition to the interest of the nodes in remaining united. From this perspective, in [11], we present an efficient algorithm for a community detection problem that deals with networks and fuzzy measures in that sense. Furthermore, in [13], we present a constructive method to build a 1-additive fuzzy measure from a crisp valuation of the nodes in the network.
Nevertheless, due to the natural uncertainty present in real problems, the information associated with the network nodes cannot usually be assumed to be crisp. Uncertainty is associated with the lack of knowledge about the occurrence of some event. Over the last decades, two important models have been proposed to represent different types of uncertainty: randomness and vagueness/imprecision. Whereas randomness emerges from the lack of knowledge about the occurrence of some event [14], vagueness is a phenomenon that arises when trying to group together objects that share a certain property. A typical vague property is "to be a small number", "to be a tall person" or (taking the previous example of the voting system in a parliament) "to be against a specific law". In this way, the fuzzy linguistic approach has been successfully applied to many problems [15]. Taking into account this type of information, an important goal of this work is to provide a methodology to face community detection problems in networks with additional soft information. With the aim of extending some of the definitions and algorithms presented in [13] for crisp information, in this paper we work on the basis of the existence of a vector (or a family of them) whose elements are no longer crisp values, but fuzzy sets that provide some type of soft information related to the individuals in the problem. In this context, another important objective of this work arises: we characterize a new representation tool which generalizes other existing models in the literature regarding the nature of the information: the extended fuzzy graph based on a fuzzy vector (EFVFG), defined on the basis of a crisp network and a vector of fuzzy sets. Another goal is to extend it to a more complex scenario in which there is not only one type of information but many; for this situation, we introduce the multidimensional extended fuzzy graph based on fuzzy vectors (MEFVFG), which is defined on the basis of a family of vectors of fuzzy sets.
Then, we suggest a specific application of the new representation model, which is useful to obtain realistic partitions in a network with additional soft information. We present a competitive algorithm which introduces fuzzy sets into the process of grouping individuals. It is a modification of the well-known Louvain algorithm for crisp networks [16] that allows us to deal with soft information in the network, and it is developed on the basis of the MEFVFG. To guarantee the quality of the proposed methodology, we dedicate an important part of this work to its evaluation. The computational results shown in this work, obtained through a benchmarking process developed on the basis of some trapezoidal fuzzy sets, allow us to assert the good performance of our algorithm.
This paper is organized as follows. In Section 2, we lay the foundations of the work, presenting several concepts and definitions that are useful for understanding and following the work. In Section 3, we characterize a new representation model based on soft information about the individuals of a network given by several fuzzy sets. After that, in Section 4, we propose a specific application of that new tool, related to the community detection problem with additional soft information, which is a very live issue in the field of SNA. In order to evaluate the performance of the proposed methodology, we show some computational results in Section 5. The paper ends in Section 6 with some conclusions and a final discussion.
3. Model Definition: Building Extended Fuzzy Graphs from Graphs with Fuzzy Nodes Information
In this section, we work on the definition of a new representation tool. Firstly, we do this in a unidimensional scenario, assuming there is an additional fuzzy information vector related to the individuals of a set V, denoted by $\tilde{f}=\left(\tilde{{f}_{1}},\cdots ,\tilde{{f}_{n}}\right)$. For each $i\in V$, the fuzzy set $\tilde{{f}_{i}}$ (characterized by its membership function ${\eta}_{{f}_{i}}$) represents the vague or imprecise information associated with node i for some characteristic or evidence. This fuzzy modeling is especially (but not only) useful when the information associated with each node is gathered, for example, as a linguistic term. Specifically, in this case, we could work with linguistic terms $\tilde{{f}_{i}}\in \tilde{L}$. By analogy with [9], we first propose the characterization of a fuzzy Sugeno $\lambda $-measure from this fuzzy vector $\tilde{f}$. This measure is denoted by ${\mu}_{f,p}$.
Definition 5 (Fuzzy Sugeno $\lambda $-measure obtained from fuzzy sets).
Given the set $V=\{1,2,\cdots ,n\}$, let $\tilde{f}=\left(\tilde{{f}_{1}},\cdots ,\tilde{{f}_{n}}\right)$ denote a vector of fuzzy sets defined over a universe $U\subset {\mathbb{R}}^{+}$ (i.e., ${\eta}_{{f}_{i}}:U\to [0,1]$), and let $D:F\left({\mathbb{R}}^{+}\right)\to {\mathbb{R}}^{+}$ denote a defuzzification operator. Then, for any $p\in (0,1]$ and $i\in V$, a natural definition of ${\mu}_{f,p}$ is:
${\mu}_{f,p}\left(\{i\}\right)=p\,\frac{D(\tilde{{f}_{i}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})},$
where ${\mu}_{f,p}(M\cup N)={\mu}_{f,p}\left(M\right)+{\mu}_{f,p}\left(N\right)+\lambda {\mu}_{f,p}\left(M\right){\mu}_{f,p}\left(N\right),\phantom{\rule{4pt}{0ex}}\forall M,N\subseteq V$ with $M\cap N=\emptyset $, and $\lambda +1=\prod _{i=1}^{n}\left(1+\lambda {\mu}_{f,p}\left(\{i\}\right)\right)$. Note that the interpretation of ${\mu}_{f,p}$ depends on p. Specifically,
Proposition 1. Given the parameter $p=1$, the function ${\mu}_{f,p}$ is a 1-additive fuzzy Sugeno λ-measure.
Proof. Because of the properties of the Sugeno $\lambda $-measures [29], in addition to the assumption of $p=1$, we have $\lambda =0$. Then, $\forall M\subseteq V$, ${\mu}_{f,p}\left(M\right)=\frac{{\sum}_{\ell \in M}D\left(\tilde{{f}_{\ell}}\right)}{{\sum}_{k=1}^{n}D\left(\tilde{{f}_{k}}\right)}$, so ${\mu}_{f,p}$ meets the conditions of fuzzy measures [30], Sugeno $\lambda $-measures [29] and 1-additivity [31]:
${\mu}_{f,p}(\emptyset )=0$: trivial.
${\mu}_{f,p}\left(V\right)=\frac{D(\tilde{{f}_{1}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})}+\cdots +\frac{D(\tilde{{f}_{n}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})}=1$.
Monotonicity: let $M\subseteq N\subseteq V$. Then, ${\mu}_{f,p}\left(N\right)=\frac{{\sum}_{\ell \in N}D(\tilde{{f}_{\ell}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})}=\frac{{\sum}_{\ell \in M}D(\tilde{{f}_{\ell}})+{\sum}_{t\in N\setminus M}D(\tilde{{f}_{t}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})}\ge \frac{{\sum}_{\ell \in M}D(\tilde{{f}_{\ell}})}{{\sum}_{k=1}^{n}D(\tilde{{f}_{k}})}={\mu}_{f,p}(M)$, so ${\mu}_{f,p}$ is a fuzzy measure.
Sugeno $\lambda $-measure: trivial by definition.
1-additivity: regarding [31], it is trivial if, $\forall i\in \{1,\cdots ,n\}$, we define ${a}_{i}={\mu}_{f,p}(\{i\})$.
□
Proposition 2. Given the parameter $p\in (0,1)$, ${\mu}_{f,p}$ is a fuzzy Sugeno $\lambda $-measure.
Proof. The proof is similar to that of Proposition 1. □
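The construction of Definition 5 can be sketched numerically. The following Python sketch is illustrative only (it is not the authors' implementation) and assumes the singleton form ${\mu}_{f,p}(\{i\})=p\,D(\tilde{{f}_{i}})/{\sum}_{k}D(\tilde{{f}_{k}})$, which is consistent with Proposition 1: it forms the singleton densities for a given p, solves the Sugeno polynomial equation for λ by bisection, and evaluates the measure on arbitrary subsets.

```python
from functools import reduce

def singleton_densities(defuzz, p):
    """Assumed singleton form: mu({i}) = p * D(f_i) / sum_k D(f_k)."""
    total = sum(defuzz)
    return [p * d / total for d in defuzz]

def solve_lambda(g, iters=200):
    """Solve 1 + lam = prod_i (1 + lam * g_i) for the Sugeno parameter.
    If the singletons already sum to 1 (p = 1), the measure is additive
    and lam = 0; for sum(g) < 1 there is a unique root lam > 0."""
    if abs(sum(g) - 1.0) < 1e-12:
        return 0.0
    h = lambda lam: reduce(lambda acc, gi: acc * (1 + lam * gi), g, 1.0) - 1.0 - lam
    lo, hi = 1e-9, 1.0
    while h(hi) < 0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(iters):    # plain bisection
        mid = (lo + hi) / 2.0
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sugeno_measure(subset, g, lam):
    """mu(M) = (prod_{i in M}(1 + lam * g_i) - 1) / lam,
    with the additive limit sum_{i in M} g_i when lam = 0."""
    gs = [g[i] for i in subset]
    if lam == 0.0:
        return sum(gs)
    prod = 1.0
    for gi in gs:
        prod *= (1 + lam * gi)
    return (prod - 1.0) / lam
```

With $p=1$ this recovers the additive measure of Proposition 1; with $p<1$, the normalization ${\mu}_{f,p}(V)=1$ is enforced through λ.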
So, we generalize the notion of extended fuzzy graph vector based, $\tilde{G}=\left(V,E,{\mu}_{x,p}\right)$ [9], to a scenario where the additional information is not provided by a crisp vector x, but comes from a vector of fuzzy sets, $\tilde{f}=\left(\tilde{{f}_{1}},\cdots ,\tilde{{f}_{n}}\right)$.
Definition 6 (Extended fuzzy graph fuzzy vector based (EFVFG)). Let $G=(V,E)$ denote a graph with $n=|V|$ individuals and $m=|E|$ edges. Let $\tilde{f}=\left({\tilde{f}}_{1},\cdots ,{\tilde{f}}_{n}\right)$ denote a vector of fuzzy sets given in membership function form, each of them related to an individual of V. Let $D:F({\mathbb{R}}^{+})\to {\mathbb{R}}^{+}$ denote a defuzzification operator, and given the parameter $p\in (0,1]$, let ${\mu}_{f,p}$ denote the fuzzy Sugeno $\lambda $-measure obtained from $\tilde{f}$. Then, the tuple $\widehat{G}=\left(V,E,{\mu}_{f,p}\right)$ is said to be an extended fuzzy graph based on the fuzzy vector $\tilde{f}$.
Example 1. Let $\tilde{L}=\{VeryLow,Low,Medium,High,VeryHigh\}$ denote a fuzzy linguistic variable defined over the universe $U=[0,100]$, which is characterized by the corresponding membership functions ${\eta}_{VL},{\eta}_{L},{\eta}_{M},{\eta}_{H},{\eta}_{VH}:[0,100]\to [0,1]$ associated with the different linguistic terms that represent how in agreement a person is with some law denoted by $L{W}^{1}$. Let $G=(V,E)$ define a cyclic graph with $V=\{1,2,3,4,5,6,7,8\}$ and $E=\{(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(8,1)\}$, and finally, let $\tilde{f}=\left(\tilde{{f}_{1}},\dots ,\tilde{{f}_{8}}\right)=(VL,VL,L,VL,H,VH,H,VH)$ denote a vector of fuzzy sets that models, in linguistic terms, the affinity of these eight nodes of V to the law $L{W}^{1}$.
From the previous definition, it is possible to build (for any $p\in (0,1]$) the extended fuzzy graph associated with the fuzzy vector $\tilde{f}$ and the graph $G=(V,E)$, that is: $\widehat{G}=\left(V,E,{\mu}_{f,p}\right)$.
Assuming we can have more than one characteristic associated with each node in a network, we go beyond the unidimensional case by considering that there is not only one vector of fuzzy sets $\tilde{f}$, but a family of them, $\left(\tilde{{f}^{1}},\cdots ,\tilde{{f}^{r}}\right)$, each of them providing some extra knowledge about the individuals.
Definition 7 (Multidimensional extended fuzzy graph fuzzy vector based (MEFVFG)). Let $G=(V,E)$ denote a graph with n nodes and m edges, and let $\left({\tilde{f}}^{1},\cdots ,{\tilde{f}}^{r}\right)$ denote a family of r independent vectors of n fuzzy sets, each of them defining a type of information, so that $\forall \ell =1,\cdots ,r$; $i=1,\cdots ,n$, the component $\tilde{{f}_{i}^{\ell}}$ is the fuzzy set related to the characteristic ℓ and the individual i, with membership function ${\eta}_{{f}_{i}^{\ell}}:{U}^{\ell}\to [0,1]$. Let $D:F({\mathbb{R}}^{+})\to {\mathbb{R}}^{+}$ be a defuzzification operator (which we assume to be the same for all characteristics ${\tilde{f}}^{\ell}$), and let ${p}^{\ell}\in (0,1]$ be a parameter for each ℓ.
Then, the tuple $\widehat{G}=\left(V,E,\left({\mu}_{{f}^{1},{p}^{1}},\cdots ,{\mu}_{{f}^{r},{p}^{r}}\right)\right)$ is said to be a multidimensional extended fuzzy graph (MEFVFG) based on the r fuzzy vectors ($\tilde{{f}^{1}},\dots ,\tilde{{f}^{r}}$ ).
Example 2. Let $\tilde{L}=\{VeryLow,Low,Medium,High,VeryHigh\}$ denote a fuzzy linguistic variable defined over the universe $U=[0,100]$ characterized by the corresponding membership functions ${\eta}_{VL},{\eta}_{L},{\eta}_{M},{\eta}_{H},{\eta}_{VH}:[0,100]\to [0,1]$ associated with the different linguistic terms that represent how in agreement a person is with a specific law, $L{W}^{1}$. Let $G=(V,E)$ denote a cyclic graph with $V=\{1,2,3,4,5,6,7,8\}$ and $E=\{(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(8,1)\}$; let $\tilde{{f}^{1}}=\left(\tilde{{f}_{1}^{1}},\dots ,\tilde{{f}_{8}^{1}}\right)=(VL,VL,L,VL,H,VH,H,VH)$ be a vector of fuzzy sets that models, in linguistic terms, the affinity of these eight nodes to the congress proposal $L{W}^{1}$; and finally let $\tilde{{f}^{2}}=\left(\tilde{{f}_{1}^{2}},\dots ,\tilde{{f}_{8}^{2}}\right)=(M,M,M,M,L,L,L,H)$ be a vector of eight fuzzy sets that models, in linguistic terms, the affinity of the eight nodes to the congressional bill denoted by $L{W}^{2}$ (see Figure 2). From the previous definition, for any ${p}_{1}$, ${p}_{2}\in (0,1]$, it is possible to build the MEFVFG associated with the two fuzzy vectors $(\tilde{{f}^{1}},\tilde{{f}^{2}})$ as $\widehat{G}=\left(V,E,\left({\mu}_{{f}^{1},{p}_{1}},{\mu}_{{f}^{2},{p}_{2}}\right)\right)$.
Let us remark that this last case generalizes other existing tools, such as the fuzzy graphs defined in [32], which actually only define relations between connected individuals; the extended fuzzy graphs [11], in which the additional information concerns the relations between the elements but not the individuals themselves; or the extended fuzzy graphs vector based [9], whose additional information concerns individuals but is crisp.
Figure 2.
Graph $G=(V,E)$ and fuzzy linguistic variable $\tilde{L}$.
4. An Application: Social Network Analysis with Soft Information
As a specific application of the newly proposed model, we first take up the idea introduced in [9] about community detection in graphs taking into account the information given by a crisp vector, x. In that preliminary work, we set up the philosophy of finding groups in a network when there is a vector of crisp values providing some additional information. Now, we generalize this idea, starting from extended fuzzy graphs that are built from r fuzzy vectors $\tilde{{f}^{1}},\dots ,\tilde{{f}^{r}}$.
To find "good" communities in $\widehat{G}$, we have to extend the Sugeno Louvain Algorithm described in [9] to a multidimensional setting with more than one vector of additional information, with the peculiarity that the components of the vectors considered are no longer crisp values but fuzzy sets; therefore, what we actually have is an MEFVFG. We illustrate the problem of community detection based on an MEFVFG in Example 3.
Example 3. We consider a chain with 12 nodes, represented by the crisp graph $G=(V,E)$ (see Figure 3), about which we have additional information, $({\tilde{f}}^{1},{\tilde{f}}^{2},{\tilde{f}}^{3},{\tilde{f}}^{4})$, and the defuzzification operator D, so that $D({\tilde{f}}^{1})=(9,9.5,10,1,0.5,1,9.5,8,10,1,1,2)$, $D({\tilde{f}}^{2})=(10,9.5,9,1,0.5,1,9,9,9.5,1.5,2,0.5)$, $D({\tilde{f}}^{3})=(9.5,8.5,10,1.5,1,1,10,9,9.5,0.9,1,1)$, $D({\tilde{f}}^{4})=(9,9,10,1,1,1,10,9.5,9,0.5,1,1)$. These fuzzy sets represent the opinion of 12 people about four different films. We accept that there are more synergies between those people who have similar preferences. The partition $P=\{\{1,2,3,4\},\{5,6,7,8\},\{9,10,11,12\}\}$ is obtained with any algorithm based on modularity optimization. Nevertheless, if the additional information is considered, the partition provided by the Multidimensional Fuzzy Sugeno–Louvain 1-additive algorithm is ${P}^{f}=\{\{1,2,3\},\{4,5,6\},\{7,8,9\},\{10,11,12\}\}$.
Figure 3.
Chain with 12 nodes. Partitions P and ${P}^{f}$.
The proposed method, named Multidimensional Fuzzy Sugeno–Louvain, is based on the Louvain algorithm [16]. The main point is to summarize all the knowledge of the MEFVFG into two matrices: A, which represents the direct connections between the nodes (edges), and F, which summarizes the additional information given by the family of vectors of fuzzy sets $\left(\tilde{{f}^{1}},\cdots ,\tilde{{f}^{r}}\right)$. The weighted graph associated with a fuzzy Sugeno $\lambda $-measure [9] ${\mu}_{{f}^{\ell},{p}^{\ell}}$ is essential, and it is considered in terms of a multidimensional scenario (MAWG): one weighted graph with adjacency matrix ${F}^{\ell}$ associated with each ${\mu}_{{f}^{\ell},{p}^{\ell}}$. This methodology to find realistic partitions in an MEFVFG, explained below, is summarized in Algorithm 2, which includes its pseudocode, and in Figure 4, which shows a flowchart of the process.
Step 1: definition of the MAWG. Given the fuzzy Sugeno $\lambda $-measures $\left({\mu}_{{f}^{1},{p}^{1}},\cdots ,{\mu}_{{f}^{r},{p}^{r}}\right)$ obtained from $\left({\tilde{f}}^{1},\cdots ,{\tilde{f}}^{r}\right)$ and $\left({p}^{1},\cdots ,{p}^{r}\right)$, and the defuzzification operator D, the matrices $\left({F}^{1},\cdots ,{F}^{r}\right)$ are calculated as
${F}_{ij}^{\ell}=\varphi \left(S{h}_{i}({\mu}_{{f}^{\ell},{p}^{\ell}})-S{h}_{i}^{j}({\mu}_{{f}^{\ell},{p}^{\ell}}),\phantom{\rule{4pt}{0ex}}S{h}_{j}({\mu}_{{f}^{\ell},{p}^{\ell}})-S{h}_{j}^{i}({\mu}_{{f}^{\ell},{p}^{\ell}})\right),\phantom{\rule{4pt}{0ex}}\forall i,j\in V,$
being $\varphi $ a bivariate aggregation operator [33], and $S{h}_{i}({\mu}_{{f}^{\ell},{p}^{\ell}})$ and $S{h}_{i}^{j}({\mu}_{{f}^{\ell},{p}^{\ell}})$ the Shapley values of i on ${\mu}_{{f}^{\ell},{p}^{\ell}}$ in the presence of all the elements of V or of $V\setminus \left\{j\right\}$, respectively [34].
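For small graphs, Step 1 can be written directly from its definition. The following Python sketch is illustrative only (the exact Shapley computation is exponential in the number of nodes, which is precisely the cost avoided later by the 1-additive case); the arithmetic mean is used as one example choice of the bivariate operator φ:

```python
from itertools import combinations
from math import factorial

def shapley(mu, players):
    """Exact Shapley values of a set function mu over `players`
    (exponential in |players|; fine for small illustrative graphs)."""
    n = len(players)
    sh = {}
    for i in players:
        rest = [j for j in players if j != i]
        total = 0.0
        for k in range(n):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(rest, k):
                total += w * (mu(set(S) | {i}) - mu(set(S)))
        sh[i] = total
    return sh

def synergy_matrix(mu, players, phi=lambda a, b: (a + b) / 2):
    """F_ij = phi(Sh_i(mu) - Sh_i^j(mu), Sh_j(mu) - Sh_j^i(mu)),
    where Sh^j denotes the Shapley value computed on V \\ {j}."""
    sh_full = shapley(mu, players)
    F = {}
    for i in players:
        for j in players:
            if i == j:
                continue
            sh_wo_j = shapley(mu, [k for k in players if k != j])
            sh_wo_i = shapley(mu, [k for k in players if k != i])
            F[(i, j)] = phi(sh_full[i] - sh_wo_j[i], sh_full[j] - sh_wo_i[j])
    return F
```

Note that for an additive measure all the synergies ${F}_{ij}^{\ell}$ vanish, since removing a node does not change the marginal contributions of the others.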
Step 2: information aggregation. The matrices ${F}^{1},\cdots ,{F}^{r}$ are aggregated to obtain the matrix F. The aggregation function $\Phi :{\Pi}^{r}\to \Pi $ is used, being $\Pi $ the set of square n-matrices. Particularly, we suggest the use of a matrix aggregator based on the classical aggregation operators with element-to-element transformation: $F=\Phi \left({F}^{1},\cdots ,{F}^{r}\right)$.
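The element-to-element aggregator Φ can be sketched in a few lines of Python (illustrative only), applying any classical aggregation operator, such as max, min or the mean, entry by entry:

```python
def aggregate_matrices(mats, op=max):
    """Element-wise aggregation Phi(F^1, ..., F^r): apply a classical
    aggregation operator (max, min, statistics.mean, ...) entry by entry.
    `mats` is a non-empty list of equally sized square matrices."""
    n = len(mats[0])
    return [[op(m[i][j] for m in mats) for j in range(n)] for i in range(n)]
```

Choosing `max` (disjunctive) or `min` (conjunctive) here changes which synergies survive the aggregation, as discussed in Remark 1.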
After this aggregation process, the Duo Louvain method [12,13] has to be applied, considering the matrix $M=\theta \left(A,F\right)$, being $\theta :{\Pi}^{2}\to \Pi $ an aggregation function. That method can consider the information of two matrices when finding communities in a graph.
Algorithm 2 Multidimensional Fuzzy Sugeno–Louvain
1: Input: $\left(A,\left({\tilde{f}}^{1},\cdots ,{\tilde{f}}^{r}\right),\left({p}^{1},\cdots ,{p}^{r}\right)\right)$; A represents $G=(V,E)$; ${\tilde{f}}^{\ell}$ is a vector of fuzzy sets; ${p}^{\ell}\in (0,1]$, $\forall \ell =1,\cdots ,r$;
2: Output: P;
3: Preliminary
4: for $\ell =1$ to r do
5:  Calculate ${\mu}_{{f}^{\ell},{p}^{\ell}}$ (fuzzy Sugeno $\lambda $-measure from ${\tilde{f}}^{\ell}$);
6:  ${F}_{ij}^{\ell}\leftarrow \varphi \left(S{h}_{i}({\mu}_{{f}^{\ell},{p}^{\ell}})-S{h}_{i}^{j}({\mu}_{{f}^{\ell},{p}^{\ell}}),\phantom{\rule{4pt}{0ex}}S{h}_{j}({\mu}_{{f}^{\ell},{p}^{\ell}})-S{h}_{j}^{i}({\mu}_{{f}^{\ell},{p}^{\ell}})\right)$, $\forall i,j\in V$;
7: end for
8: $F\leftarrow \Phi \left({F}^{1},\cdots ,{F}^{r}\right)$;
9: $M\leftarrow \theta \left(A,F\right)$;
10: end Preliminary
11: $P\leftarrow $ Duo Louvain $\left(A,M\right)$;
12: return $\left(P\right)$;

Figure 4.
Flowchart of the methodology Multidimensional Fuzzy Sugeno–Louvain.
Remark 1. The concept of "what is a good group" depends on the operator Φ applied. If Φ is a disjunctive operator, groups are composed of elements among which there are strong synergies regarding some evidence or characteristic (some fuzzy vector); the size of the groups that are somehow similar regarding the additional information will increase the more vectors are considered. In contrast, if Φ is a conjunctive operator, groups are composed of elements among which there are strong synergies in all the evidences or characteristics; the size of the groups that are somehow similar regarding the additional information will increase the fewer vectors are considered. Particularly, we consider the most popular ordered weighted averaging aggregation operators, OWA [35]: maximum, minimum and average.
As in the unidimensional problem with crisp information addressed in [9], the exponential complexity concerning the calculation of the Shapley value may be avoided by considering an additive fuzzy measure. For this reason, in this paper we suggest the specific characterization of ${\mu}_{{f}^{\ell},{p}^{\ell}}$ when ${p}^{\ell}=1$. On this basis, as ${\mu}_{{f}^{\ell}}^{a}$ is a 1-additive fuzzy measure [31], it holds that $S{h}_{i}({\mu}_{{f}^{\ell}}^{a})={\mu}_{{f}^{\ell}}^{a}(\{i\})=\frac{D({\tilde{f}}_{i}^{\ell})}{{\sum}_{k=1}^{n}D({\tilde{f}}_{k}^{\ell})}$, $\forall i\in V$.
In this context, we propose a specific application of the Multidimensional Fuzzy Sugeno–Louvain algorithm. For every $\ell =1,\cdots ,r$, the characterization of ${\mu}_{{f}^{\ell}}^{a}$ only depends on the calculation of ${F}^{\ell}$. Then, the complexity of the 1-additive Multidimensional Fuzzy Sugeno–Louvain method (Algorithm 3) is equal to that of the Louvain algorithm [16].
Algorithm 3 1-additive Multidimensional Sugeno–Louvain
1: Input: $\left(A,\left({\tilde{f}}^{1},\cdots ,{\tilde{f}}^{r}\right)\right)$; A is a representation of $G=(V,E)$; ${\tilde{f}}^{\ell}$ is a vector of fuzzy sets, $\forall \ell =1,\cdots ,r$;
2: Output: P;
3: Preliminary
4: for $\ell =1$ to r do
5:  ${F}_{ij}^{\ell}\leftarrow \varphi \left(\frac{D({\tilde{f}}_{i}^{\ell})}{{\sum}_{k=1}^{n}D({\tilde{f}}_{k}^{\ell})}-\frac{D({\tilde{f}}_{i}^{\ell})}{{\sum}_{k=1,k\ne j}^{n}D({\tilde{f}}_{k}^{\ell})},\phantom{\rule{4pt}{0ex}}\frac{D({\tilde{f}}_{j}^{\ell})}{{\sum}_{k=1}^{n}D({\tilde{f}}_{k}^{\ell})}-\frac{D({\tilde{f}}_{j}^{\ell})}{{\sum}_{k=1,k\ne i}^{n}D({\tilde{f}}_{k}^{\ell})}\right)$, $\forall i,j\in V$;
6: end for
7: $F\leftarrow \Phi \left({F}^{1},\cdots ,{F}^{r}\right)$;
8: $M\leftarrow \theta \left(A,F\right)$;
9: end Preliminary
10: $P\leftarrow $ Duo Louvain $\left(A,M\right)$;
11: return $\left(P\right)$;

Example 4. We illustrate the idea of our methodology in a simple case in which there is a single vector of fuzzy sets. Let us consider the situation described in Example 1, in which we have a cycle of eight nodes and the information associated with each node is described in linguistic terms, $\tilde{f}=\left(\tilde{{f}_{1}},\dots ,\tilde{{f}_{8}}\right)=(VL,VL,L,VL,H,VH,H,VH)$. The whole information is summarized in Figure 5. Now, let us assume that these linguistic fuzzy variables defined over $U=[0,100]$ are modeled in terms of the following five trapezoidal fuzzy sets: $\tilde{VL}=(0,0,10,25)$, $\tilde{L}=(5,10,20,25)$, $\tilde{M}=(30,40,60,70)$, $\tilde{H}=(60,70,80,100)$, $\tilde{VH}=(75,90,100,100)$. It is possible to see that if we apply the Fuzzy Sugeno–Louvain 1-additive method, in the unidimensional case (so that we only have one matrix, ${F}^{1}$), to this extended fuzzy graph, the partition obtained for any p is ${P}^{f}=\{\{1,2,3,4\},\{5,6,7,8\}\}$.
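The defuzzification step of this example can be sketched in Python. The snippet below is illustrative only and assumes a center-of-gravity (centroid) defuzzifier as one possible choice of D; it defuzzifies the trapezoidal linguistic terms of the example and forms the singleton weights of the 1-additive measure ($p=1$):

```python
def centroid(a, b, c, d):
    """Centroid (center of gravity) of a trapezoidal fuzzy set (a,b,c,d),
    a <= b <= c <= d: one common choice of defuzzification operator D."""
    if d + c == a + b:                      # degenerate (crisp) case
        return (a + d) / 2
    return ((d*d + c*c + c*d) - (a*a + b*b + a*b)) / (3 * (d + c - a - b))

# Linguistic terms of Example 4 (trapezoids over U = [0, 100])
terms = {"VL": (0, 0, 10, 25), "L": (5, 10, 20, 25),
         "M": (30, 40, 60, 70), "H": (60, 70, 80, 100),
         "VH": (75, 90, 100, 100)}
f = ["VL", "VL", "L", "VL", "H", "VH", "H", "VH"]   # nodes 1..8

D = [centroid(*terms[t]) for t in f]
total = sum(D)
mu_singletons = [d / total for d in D]   # 1-additive measure, p = 1
```

The low-affinity nodes 1–4 receive small weights and the high-affinity nodes 5–8 large ones, which is what drives the partition ${P}^{f}$ above.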
Figure 5.
Graph $G=(V,E)$ and fuzzy linguistic variable $\tilde{L}$.
5. Computational Results
When a new method is proposed, an evaluation of its performance is required. This process can be addressed by comparing the results obtained with the method under evaluation against other proposals established in the literature to solve the same problem. Nevertheless, in our case, as the community detection problem with additional soft information has not been addressed before, we cannot compare our method with other proposals in the literature. Instead, we work on an evaluation process in which we consider several reference models [36], apply our methodology to them, and quantify its performance by calculating the Normalized Mutual Information (NMI) [37].
Definition 8 (Normalized Mutual Information (NMI) [37]).
Let $X={\left\{{x}_{i}\right\}}_{i\in V}$ and $Y={\left\{{y}_{i}\right\}}_{i\in V}$ denote two partitions of the graph $G=\left(V,E\right)$. Let $P(x)$ denote the probability that a random node is assigned to the community x, and let $P(x,y)$ denote the joint probability that a random node is assigned to the community x in the partition X and to the community y in the partition Y. The Shannon entropy of X is calculated as $H(X)=-{\sum}_{x}P(x)log(P(x))$, and the joint Shannon entropy of X and Y is calculated as $H\left(X,Y\right)=-{\sum}_{x}{\sum}_{y}P(x,y)log(P(x,y))$. The Mutual Information ($MI$) between the partitions X and Y is defined as:
$MI(X,Y)={\sum}_{x}{\sum}_{y}P(x,y)log\left(\frac{P(x,y)}{P(x)P(y)}\right).$
Then, $NMI$ is a normalization of Equation (5). Although there can be some issues with the basic version of the measure [38], we consider it because, to the best of our knowledge, it is fair enough to compare how similar two partitions are; i.e., NMI allows us to quantify how much the partition provided by our method resembles the considered standard partition.
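The measure of Definition 8 can be sketched as a short Python function (illustrative; here we normalize by the arithmetic mean of the two entropies, which is one common convention):

```python
from math import log
from collections import Counter

def nmi(X, Y):
    """Normalized Mutual Information between two partitions, given as
    lists of community labels (one label per node)."""
    n = len(X)
    px, py, pxy = Counter(X), Counter(Y), Counter(zip(X, Y))
    def H(counts):
        return -sum((c / n) * log(c / n) for c in counts.values())
    mi = sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
             for (x, y), c in pxy.items())
    hx, hy = H(px), H(py)
    if hx == 0 and hy == 0:
        return 1.0          # both partitions trivial, hence identical
    return 2 * mi / (hx + hy)
```

Identical partitions (up to relabeling) give NMI = 1, and independent ones give values close to 0.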
In all the benchmark models we present, there are two components: the adjacency matrix A and the additional information matrix F, which is obtained by aggregating a family of vectors of soft information about the individuals. This component about the synergies is thus defined from multiple vectors, whose generation is based on the use of trapezoidal fuzzy sets [39].
5.1. Experiment Design
Following the idea in [2], we now explain how we generate the benchmark models. Each one represents an MEFVFG with $n=256$ nodes. This process has two main parts: the definition of the adjacency matrix and the generation of the additional information.
To approach the manipulation of multiple vectors from a benchmarking perspective, we propose the following: in every vector, the value of each component depends on certain trapezoidal fuzzy sets, namely low and high fuzzy sets. Low fuzzy sets are used to generate the components of each vector which imply scarce connections among the nodes, whereas high fuzzy sets are used to generate the components which imply many connections among the nodes. Therefore, in each vector, the components related to nodes which are in the same community are generated as high fuzzy sets, whereas the components related to nodes of different communities are generated as low fuzzy sets. Let us emphasize that, in the simulation process presented, what we randomly generate are the values $D({\tilde{f}}^{i})$, as high or low depending on the trapezoidal fuzzy sets ${\tilde{f}}^{i}$, being D a defuzzification operator.
We have r vectors of fuzzy sets as the starting point in each benchmark, where r is the number of communities embedded in the synergies matrix, F. Each vector is associated with a community ${C}_{i}$, so nodes belonging to ${C}_{i}$ have a high value in the vector ${\tilde{f}}^{i}$, whereas nodes which are not in ${C}_{i}$ have a low value in ${\tilde{f}}^{i}$: $D({\tilde{f}}_{j}^{i})=\hslash $ if $j\in {C}_{i}$, and $D({\tilde{f}}_{j}^{i})=\ell $ if $j\notin {C}_{i}$. The process for defining and simulating these trapezoidal fuzzy sets is detailed below. To analyze different scatterings of the ℓ and ℏ fuzzy sets, several combinations of the parameters a, b, c and d are considered (see Figure 6 and Figure 7). For example, to define a benchmark graph with four communities, we have to generate four n-vectors $\left(D({\tilde{f}}^{1}),D({\tilde{f}}^{2}),D({\tilde{f}}^{3}),D({\tilde{f}}^{4})\right)$ with $n=256$ nodes.
Each benchmark model represents an MEFVFG summarized into two matrices: one of direct connections (the adjacency matrix A) and another of additional information (the synergies matrix F, obtained from the soft information vectors). Below, we explain how each of them is generated.
1. Adjacency matrix. The adjacency matrix A is randomly generated according to Equation (7) for a set V with 256 nodes. We consider different combinations of the values of the parameters $\alpha $ and $\beta $ regarding the input/output values (${z}_{in}$ and ${z}_{out}$), as shown in Table 1 (similarly to the proposal in [2]). These parameters regulate the density of the connections matrix, A, whose generation process is shown in Algorithm 4.
Table 1. Parameters used to generate the adjacency matrix A of each model.

             Network 1  Network 2  Network 3  Network 4  Network 5  Network 6  Network 7  Network 8  Network 9
$\alpha $    0.45       0.4        0.35       0.325      0.3        0.275      0.25       0.225      0.2
$\beta $     0.016      0.033      0.05       0.058      0.066      0.075      0.083      0.091      0.1
Algorithm 4 Generate Adjacency
1: Input: $\left({C}_{1},\cdots ,{C}_{r}\right),\alpha ,\beta ,n$;
2: Output: A;
3: $A\left(i,j\right)\leftarrow 0$, $\forall i,j=1,\cdots ,n$;
4: for $i=1$ to n do
5:  for $j=1$ to n do
6:   for $\ell =1$ to r do
7:    $\epsilon \leftarrow rand(0,1)$;
8:    if $\left({C}_{\ell -1}<i\le {C}_{\ell}\right)$ and $\left({C}_{\ell -1}<j\le {C}_{\ell}\right)$ then
9:     if $\epsilon <\alpha $ then
10:      $A(i,j)\leftarrow 1$;
11:     end if
12:    else
13:     if $\epsilon <\beta $ then
14:      $A(i,j)\leftarrow 1$;
15:     end if
16:    end if
17:   end for
18:  end for
19: end for
20: return $(A)$;

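A compact Python sketch of this generation process follows. It is illustrative only: it draws each pair of nodes once and returns a symmetric matrix, which is a slight simplification of the pseudocode above, and community boundaries are given as sizes rather than cumulative indices:

```python
import random

def generate_adjacency(sizes, alpha, beta, seed=None):
    """Random adjacency matrix with planted communities: within-community
    edges appear with probability alpha, between-community with beta."""
    rng = random.Random(seed)
    n = sum(sizes)
    # community label of each node, derived from the size list
    label, comm = [], 0
    for s in sizes:
        label += [comm] * s
        comm += 1
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            p = alpha if label[i] == label[j] else beta
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A
```

With the parameter pairs of Table 1, moving from Network 1 to Network 9 makes the planted communities progressively harder to detect from A alone.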
2. Low trapezoidal fuzzy sets generation. This type of fuzzy set, ${\tilde{f}}^{i}$, shown in Figure 6, is generated to represent, in each vector $D({\tilde{f}}^{i})$, the components related to the elements with a low value in the characteristic of the corresponding vector.
Figure 6.
Low trapezoidal fuzzy set.
After fixing the values a and b, we can calculate the lines ${r}_{1}$ and ${r}_{2}$ to obtain a trapezoid below them with area 1. Particularly, ${r}_{1}$ is defined as $y=h$, where h is a value chosen so that the corresponding integral equals 1: $1=ah+\left(\frac{b-a}{2}\right)h\Longrightarrow h=\frac{2}{a+b}$.
On the other hand, ${r}_{2}$ is the line $y=-\alpha +\beta x$ through the points $\left(a,\frac{2}{a+b}\right)$ and $\left(b,0\right)$. By isolating $\alpha $ and $\beta $, we obtain $\alpha =\frac{2b}{\left(a+b\right)\left(a-b\right)}$ and $\beta =\frac{2}{\left(a+b\right)\left(a-b\right)}$, so the distribution function of the low trapezoidal fuzzy set is:
$F\left(x\right)={\int}_{0}^{x}\frac{2}{a+b}\,dz=\frac{2x}{a+b}$, if $0\le x\le a$;
$F\left(x\right)={\int}_{0}^{a}\frac{2}{a+b}\,dz+{\int}_{a}^{x}\frac{-2b+2z}{\left(a+b\right)\left(a-b\right)}\,dz=\frac{2a}{a+b}+\frac{{\left(x-b\right)}^{2}-{\left(a-b\right)}^{2}}{\left(a+b\right)\left(a-b\right)}$, if $a<x\le b$.
Once the low fuzzy set is characterized (in the following denoted by ℓ), we apply the inverse method. First, we calculate the inverse function of F, ${F}^{-1}\left(x\right)$; then, we simulate a value p between 0 and 1. Finally, ${F}^{-1}\left(p\right)$ is the value assigned to an edge which connects nodes which are not in the same community.
If $p\le \frac{2a}{a+b}\Longrightarrow p=\frac{2x}{a+b}\Longrightarrow x=\frac{\left(a+b\right)p}{2}$.
If $p>\frac{2a}{a+b}\Longrightarrow p=\frac{2a}{a+b}+\frac{{\left(x-b\right)}^{2}-{\left(a-b\right)}^{2}}{\left(a+b\right)\left(a-b\right)}\Longrightarrow x=b-\sqrt{\left(p-\frac{2a}{a+b}\right)\left(a+b\right)\left(a-b\right)+{\left(a-b\right)}^{2}}$.
We take the sign '−' because $x-b<0$. Then, if $p\sim U(0,1)$, the low values considered are obtained by applying ${F}^{-1}$ to p. This process is summarized in Algorithm 5.
Algorithm 5 Low Fuzzy Set
1: Input: $a,b$;
2: Output: ℓ;
3: $p\leftarrow rand(0,1)$;
4: if $p\le \frac{2a}{a+b}$ then
5:  $\ell \leftarrow \frac{\left(a+b\right)p}{2}$;
6: else
7:  $\ell \leftarrow b-\sqrt{{\left(b-a\right)}^{2}-\left(p-\frac{2a}{a+b}\right)\left(a+b\right)\left(b-a\right)}$;
8: end if
9: return $\left(\ell \right)$;
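Algorithm 5 is plain inverse-transform sampling, so it can be sketched directly in Python; the function name is ours, and the second branch uses the inverse reconstructed above, assuming $0\le a<b$:

```python
import math
import random

def low_fuzzy_set(a, b, rng=random):
    """Algorithm 5: sample a 'low' value on [0, b] by the inverse method."""
    p = rng.random()  # p ~ U(0, 1)
    if p <= 2 * a / (a + b):
        # flat region of the trapezoid: invert F(x) = 2x/(a+b)
        return (a + b) * p / 2
    # sloped region: invert the quadratic part of F; sign '-' since x - b < 0
    return b - math.sqrt((b - a) ** 2 - (p - 2 * a / (a + b)) * (a + b) * (b - a))
```

Both branches agree at $p=\frac{2a}{a+b}$ (value a), and the sampler returns b at $p=1$, so the output always lies in $[0,b]$.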

3. High trapezoidal fuzzy sets generation. This type of fuzzy set, ${\tilde{f}}^{i}$, shown in Figure 7, is generated to represent, in each vector $D({\tilde{f}}^{i})$, the components related to the elements with a high value in the characteristic of the corresponding vector.
Figure 7.
High trapezoidal fuzzy set.
After fixing the values c and d, we can calculate the lines ${r}_{3}$ and ${r}_{4}$ to obtain a trapezoid below them with area 1. In particular, ${r}_{3}$ is defined as $y=h$, where h is a value chosen so that the value of the corresponding integral is 1: $1=\frac{\left(d-c\right)h}{2}+\left(1-d\right)h\Longrightarrow h=\frac{2}{\left(1-d\right)+\left(1-c\right)}$.
On the other hand, ${r}_{4}$ is the line through the points $\left(d,\frac{2}{\left(1-d\right)+\left(1-c\right)}\right)$ and $\left(c,0\right)$, so its equation is $y=\beta x-\alpha $. By isolating $\alpha $ and $\beta $, $\alpha =\frac{2c}{\left[\left(1-d\right)+\left(1-c\right)\right]\left(d-c\right)}$ and $\beta =\frac{2}{\left[\left(1-d\right)+\left(1-c\right)\right]\left(d-c\right)}$, so the distribution function of the high trapezoidal fuzzy set is:
$F(x)=\begin{cases}0 & x<c\\ \frac{{\left(x-c\right)}^{2}}{\left(\left(1-d\right)+\left(1-c\right)\right)\left(d-c\right)} & c\le x\le d\\ \frac{\left(x-d\right)+\left(x-c\right)}{\left(1-d\right)+\left(1-c\right)} & d<x\le 1\\ 1 & x>1\end{cases}$
where
${\int}_{c}^{x}\frac{-2c+2z}{\left(\left(1-d\right)+\left(1-c\right)\right)\left(d-c\right)}dz=\frac{{\left(x-c\right)}^{2}}{\left(\left(1-d\right)+\left(1-c\right)\right)\left(d-c\right)}$;
${\int}_{c}^{d}\left(\frac{-2c+2z}{\left(\left(1-d\right)+\left(1-c\right)\right)\left(d-c\right)}\right)dz+{\int}_{d}^{x}\frac{2}{\left(1-d\right)+\left(1-c\right)}dz=\frac{\left(x-d\right)+\left(x-c\right)}{\left(1-d\right)+\left(1-c\right)}$.
As with the low fuzzy sets, we apply the inverse method to simulate the values of the high fuzzy sets (ℏ in the following). Then, the value ${F}^{-1}\left(p\right)$ is:
If $p\le \frac{d-c}{\left(1-d\right)+\left(1-c\right)}\Longrightarrow p=\frac{{\left(x-c\right)}^{2}}{\left(\left(1-d\right)+\left(1-c\right)\right)\left(d-c\right)}\Longrightarrow x=c+\sqrt{p\left(d-c\right)\left[\left(1-d\right)+\left(1-c\right)\right]}$;
If $p>\frac{d-c}{\left(1-d\right)+\left(1-c\right)}\Longrightarrow p=\frac{\left(x-d\right)+\left(x-c\right)}{\left(1-d\right)+\left(1-c\right)}\Longrightarrow x=\frac{p\left(\left(1-d\right)+\left(1-c\right)\right)+c+d}{2}$.
We take the sign ‘+’ because $x-c>0$.
Then, if $p\rightsquigarrow U(0,1)$, the high values considered are obtained as:
$\hslash ={F}^{-1}(p)=\begin{cases}c+\sqrt{p\left(d-c\right)\left[\left(1-d\right)+\left(1-c\right)\right]} & p\le \frac{d-c}{\left(1-d\right)+\left(1-c\right)}\\ \frac{p\left(\left(1-d\right)+\left(1-c\right)\right)+c+d}{2} & p>\frac{d-c}{\left(1-d\right)+\left(1-c\right)}\end{cases}$
This process is summarized in Algorithm 6.
Algorithm 6 High Fuzzy Set
1: Input: $c,d$;
2: Output: ℏ;
3: $p\leftarrow rand(0,1)$;
4: if $p\le \frac{d-c}{(1-c)+(1-d)}$ then
5:  $\hslash \leftarrow c+\sqrt{p(d-c)((1-c)+(1-d))}$;
6: else
7:  $\hslash \leftarrow \frac{p\left(\left(1-c\right)+\left(1-d\right)\right)+c+d}{2}$;
8: end if
9: return $\left(\hslash \right)$;
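Likewise, Algorithm 6 can be sketched by inverse-transform sampling; the function name is ours, and we assume $0\le c<d\le 1$:

```python
import math
import random

def high_fuzzy_set(c, d, rng=random):
    """Algorithm 6: sample a 'high' value on [c, 1] by the inverse method."""
    t = (1 - c) + (1 - d)  # the trapezoid height is h = 2/t
    p = rng.random()       # p ~ U(0, 1)
    if p <= (d - c) / t:
        # sloped region: invert the quadratic part of F; sign '+' since x - c > 0
        return c + math.sqrt(p * (d - c) * t)
    # flat region of the trapezoid
    return (p * t + c + d) / 2
```

Both branches agree at $p=\frac{d-c}{t}$ (value d), and the sampler returns 1 at $p=1$, so the output always lies in $[c,1]$.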

4. Generate multiple vectors. In each benchmark model, we have r vectors as the starting point, where r is the number of communities embedded in the synergies matrix,
F. Each vector is associated with a community
${C}_{i}$, so that nodes belonging to
${C}_{i}$ will have a
high value in
${\tilde{f}}^{i}$, whereas the nodes which are not in
${C}_{i}$ will have a
low value in
${\tilde{f}}^{i}$ (then
$D({\tilde{f}}_{j}^{i})=\hslash $, if
$j\in {C}_{i}$;
$D({\tilde{f}}_{j}^{i})=\ell $, if
$j\notin {C}_{i}$). Different combinations of the parameters
a,
b,
c and
d are considered to generate the
low/
high fuzzy sets (see
Table 2). These combinations affect the scattering of the
ℓ and
ℏ fuzzy sets. The process is summarized in Algorithm 7.
Algorithm 7 Generate Multiple Vectors
1: Input: $\left({C}_{1},\cdots ,{C}_{r}\right),a,b,c,d$;
2: Output: $multipleVectors$;
3: ${C}_{0}\leftarrow 0$;
4: $multipleVectors\leftarrow 0$; (an $r\times n$ matrix; row ℓ represents the vector $D({\tilde{f}}^{\ell})$)
5: for $\ell =1$ to $r$ do
6:  for $i=1$ to $n$ do
7:   if ${C}_{\ell -1}<i\le {C}_{\ell}$ then
8:    $multipleVectors(\ell ,i)\leftarrow HighFuzzySet(c,d)$;
9:   else
10:    $multipleVectors(\ell ,i)\leftarrow LowFuzzySet(a,b)$;
11:   end if
12:  end for
13: end for
14: return $(multipleVectors)$;
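Under the reading that the input $\left({C}_{1},\cdots ,{C}_{r}\right)$ holds cumulative community boundaries (which is what the comparison ${C}_{\ell -1}<i\le {C}_{\ell}$ suggests), Algorithm 7 can be sketched in Python together with compact versions of the two samplers; all names are ours:

```python
import math
import random

def low_fuzzy_set(a, b, rng):   # Algorithm 5 (inverse method)
    p = rng.random()
    if p <= 2 * a / (a + b):
        return (a + b) * p / 2
    return b - math.sqrt((b - a) ** 2 - (p - 2 * a / (a + b)) * (a + b) * (b - a))

def high_fuzzy_set(c, d, rng):  # Algorithm 6 (inverse method)
    t = (1 - c) + (1 - d)
    p = rng.random()
    if p <= (d - c) / t:
        return c + math.sqrt(p * (d - c) * t)
    return (p * t + c + d) / 2

def generate_multiple_vectors(bounds, a, b, c, d, seed=None):
    """Algorithm 7: r x n matrix whose row l is D(f~^l). `bounds` holds the
    cumulative community boundaries C_1 <= ... <= C_r, with C_r = n."""
    rng = random.Random(seed)
    n, prev, rows = bounds[-1], 0, []
    for C in bounds:
        # high values for nodes inside community l, low values elsewhere
        rows.append([high_fuzzy_set(c, d, rng) if prev < i <= C
                     else low_fuzzy_set(a, b, rng)
                     for i in range(1, n + 1)])
        prev = C
    return rows
```

For instance, `generate_multiple_vectors([2, 4], 0, 0.1, 0.9, 1)` produces a $2\times 4$ matrix whose first row is high on nodes 1–2 and low on nodes 3–4 (Case 1 of Table 2).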

Table 2.
Parameters to generate the matrix F of the benchmark model.
 Case 1  Case 2  Case 3  Case 4  Case 5  Case 6  Case 7  Case 8  Case 9 

$\mathbf{a}$  0  0  0  0.1  0.1  0.1  0.2  0.2  0.2 
$\mathbf{b}$  0.1  0.1  0.1  0.2  0.2  0.2  0.3  0.3  0.3 
$\mathbf{c}$  0.9  0.8  0.7  0.9  0.8  0.7  0.9  0.8  0.7 
$\mathbf{d}$  1  0.9  0.8  1  0.9  0.8  1  0.9  0.8 
5. Synergies matrix. From the vectors generated with Algorithm Generate Multiple Vectors, we obtain $\left({\mu}_{{f}^{1}}^{a},\cdots ,{\mu}_{{f}^{r}}^{a}\right)$. We consider the matrices $\left({F}^{1},\cdots ,{F}^{r}\right)$ and the adjacency matrix of the corresponding MAWG. The second component of each benchmark is an aggregation of these matrices, $F=\Phi \left({F}^{1},\cdots ,{F}^{r}\right)=\max\left({F}^{1},\cdots ,{F}^{r}\right)$. We summarize this process in Algorithm 8 for the particular case $p=1$.
Algorithm 8 Matrix From Multiple Vectors
1: Input: $\left({C}_{1},\cdots ,{C}_{r}\right),a,b,c,d$;
2: Output: F;
3: $multipleVectors\leftarrow GenerateMultipleVectors\left(\left({C}_{1},\cdots ,{C}_{r}\right),a,b,c,d\right)$;
4: for $\ell =1$ to $r$ do
5:  for $i=1$ to $n$ do
6:   $Sh(\ell ,i)\leftarrow \frac{multipleVectors\left(\ell ,i\right)}{{\sum}_{k=1}^{n}multipleVectors\left(\ell ,k\right)}$;
7:   for $j=1$ to $n$ do
8:    $S{h}_{j}(\ell ,i)\leftarrow \frac{multipleVectors\left(\ell ,i\right)}{{\sum}_{k\in V,\,k\ne i}multipleVectors\left(\ell ,k\right)}$;
9:   end for
10:  end for
11: end for
12: for $\ell =1$ to $r$ do
13:  for $i=1$ to $n$ do
14:   for $j=1$ to $n$ do
15:    ${F}^{\ell}\left(i,j\right)\leftarrow \min\left\{Sh(\ell ,i)\cdot S{h}_{j}(\ell ,i),\,Sh(\ell ,j)\cdot S{h}_{i}(\ell ,j)\right\}$;
16:   end for
17:  end for
18: end for
19: $F\leftarrow \max\{{F}^{1},\cdots ,{F}^{r}\}$;
20: return $(F)$;
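A Python sketch of Algorithm 8 follows. A caveat: the operator between the two share terms in line 15 did not survive extraction in our source, so the product used here is our reading (it yields nonnegative synergies that are large exactly when both nodes score high in the same vector); treat it, and the function names, as assumptions:

```python
def synergy_matrix(mv):
    """Algorithm 8 (sketch): build F from the r x n matrix mv of Algorithm 7.
    share[i] is node i's share of row l; share_excl[i] its share of the
    row total with i itself excluded (ASSUMED reconstruction)."""
    n = len(mv[0])
    F = [[0.0] * n for _ in range(n)]
    for row in mv:                          # one pass per vector/community l
        total = sum(row)
        share = [v / total for v in row]
        share_excl = [v / (total - v) for v in row]
        for i in range(n):
            for j in range(n):
                s = min(share[i] * share_excl[i], share[j] * share_excl[j])
                F[i][j] = max(F[i][j], s)   # F = max(F^1, ..., F^r)
    return F
```

The min makes each ${F}^{\ell}$ symmetric, and the outer max keeps, for every pair, the community in which both nodes are strongest.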

5.2. Results
We now show the evaluation of the proposed methodology in the 1-additive stage. We do this to avoid exponential complexity in computing fuzzy measures. Nevertheless, there is no reason to think that the goodness of the partitions obtained, and therefore the accuracy of the evaluated method, will worsen if non-additive measures are considered. To do so, we consider several structures which vary in size and number of groups. Each of them represents an MEFVFG, $\widehat{G}=\left(V,E,\left({\mu}_{{f}^{1},{p}^{1}},\cdots ,{\mu}_{{f}^{r},{p}^{r}}\right)\right)$, with two independent components. One of them, $G=\left(V,E\right)$, is related to the direct connections among the nodes, represented by edges. The other, $\left({\mu}_{{f}^{1},{p}^{1}},\cdots ,{\mu}_{{f}^{r},{p}^{r}}\right)$, is used to define a relations matrix F.
For each combination of $\alpha $ and $\beta $, and of a, b, c and d, we analyze the linear combination $M=\theta \left(A,F\right)=\gamma A+\left(1-\gamma \right)F$ by considering $\gamma =0$ (this is the only case in which, including the additional information, we can assert the partition that should be obtained).
In
Table 3,
Table 4,
Table 5 and
Table 6, we show the average of the NMI obtained from 100 iterations of each combination of
$\alpha $ and
$\beta $, concerning matrix
A, and the parameters
a,
b,
c and
d for the definition of the vectors which give rise to the synergies matrix
F. To simplify the interpretation of the results, these tables display the values in different colors: the closer the value is to 1 (i.e., the better the result), the lighter the color.
Benchmark graph. Model 1. This is the simplest benchmark model, shown in Figure 8. The adjacency matrix has two communities with an expected size of 128 each, the expected degree of each node being $\langle k\rangle =128\alpha +128\beta $.
${F}_{1}$ is obtained from vectors
$\left(D({\tilde{f}}^{1}),D({\tilde{f}}^{2}),D({\tilde{f}}^{3}),D({\tilde{f}}^{4})\right)$, so the 256 nodes are organized into four groups
${C}_{1}^{F},\cdots ,{C}_{4}^{F}$ with expected size
${C}_{i}^{F}=64$. In
Table 3, we show the results. Note that the tested algorithm always recovers the standard partition, even when the networks are sparse.
Figure 8.
Benchmark graph. Model 1.
Table 3.
NMI. Model 1.
NMI  ${\mathrm{F}}_{1}$
Case 1  ${\mathrm{F}}_{1}$
Case 2  ${\mathrm{F}}_{1}$
Case 3  ${\mathrm{F}}_{1}$
Case 4  ${\mathrm{F}}_{1}$
Case 5  ${\mathrm{F}}_{1}$
Case 6  ${\mathrm{F}}_{1}$
Case 7  ${\mathrm{F}}_{1}$
Case 8  ${\mathrm{F}}_{1}$
Case 9 

${\mathbf{A}}_{\mathbf{1}}$
Network 1  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 2  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 3  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 4  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 5  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 6  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 7  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 8  1  1  1  1  1  1  1  1  1 
${\mathbf{A}}_{\mathbf{1}}$
Network 9  1  1  1  1  1  1  1  1  1 
Benchmark graph. Model 2. Due to the modularity resolution limit [
40], the Louvain algorithm is very sensitive to changes in the groups’ size, particularly with small communities [
41]. Hence, we test the algorithm in this context, where the groups of the standard structure are smaller than in Model 1, as can be seen in Figure 9.
${F}_{2}$ has eight communities with 32 nodes each. We also reduce the communities in ${A}_{2}$: it has four communities with 64 nodes each, the expected degree being $\langle k\rangle =64\alpha +192\beta $. The obtained results are shown in
Table 4. Despite the size reduction, our method provides good results.
Figure 9.
Benchmark graph. Model 2.
Table 4.
NMI. Model 2.
NMI  ${\mathrm{F}}_{2}$
Case 1  ${\mathrm{F}}_{2}$
Case 2  ${\mathrm{F}}_{2}$
Case 3  ${\mathrm{F}}_{2}$
Case 4  ${\mathrm{F}}_{2}$
Case 5  ${\mathrm{F}}_{2}$
Case 6  ${\mathrm{F}}_{2}$
Case 7  ${\mathrm{F}}_{2}$
Case 8  ${\mathrm{F}}_{2}$
Case 9 

${\mathbf{A}}_{\mathbf{2}}$
Network 1  1  1  1  1  1  1  0.9987  0.9981  0.9994 
${\mathbf{A}}_{\mathbf{2}}$
Network 2  1  1  1  1  1  1  0.9986  0.9980  0.9994 
${\mathbf{A}}_{\mathbf{2}}$
Network 3  1  1  1  1  1  1  0.9993  0.9992  0.9994 
${\mathbf{A}}_{\mathbf{2}}$
Network 4  1  1  1  1  1  1  0.9986  0.9980  0.9996 
${\mathbf{A}}_{\mathbf{2}}$
Network 5  1  1  1  1  1  1  0.9984  0.9990  0.9991 
${\mathbf{A}}_{\mathbf{2}}$
Network 6  1  1  1  1  1  1  0.9986  0.9984  0.9990 
${\mathbf{A}}_{\mathbf{2}}$
Network 7  1  1  1  1  1  1  0.9990  0.9992  0.9992 
${\mathbf{A}}_{\mathbf{2}}$
Network 8  1  1  1  1  1  1  0.9968  0.9992  0.9989 
${\mathbf{A}}_{\mathbf{2}}$
Network 9  1  1  1  1  1  1  0.9993  0.9992  0.9996 
Benchmark graph. Model 3. The previous results shed light on the high quality of the tested method in symmetric structures. However, the interest of any method goes beyond synthetic structures; the main objective is to reach proper results in real cases. Hence, we work with asymmetric structures to simulate more realistic networks, as can be seen in Figure 10.
${F}_{3}$ has five communities whose sizes are
${C}_{1}^{F}=43$,
${C}_{2}^{F}=42$,
${C}_{3}^{F}=43$,
${C}_{4}^{F}=96$,
${C}_{5}^{F}=32$. On the other hand,
${A}_{3}={A}_{1}$. We show the results in
Table 5.
Figure 10.
Benchmark graph. Model 3.
Table 5.
NMI. Model 3.
NMI  ${\mathrm{F}}_{3}$
Case 1  ${\mathrm{F}}_{3}$
Case 2  ${\mathrm{F}}_{3}$
Case 3  ${\mathrm{F}}_{3}$
Case 4  ${\mathrm{F}}_{3}$
Case 5  ${\mathrm{F}}_{3}$
Case 6  ${\mathrm{F}}_{3}$
Case 7  ${\mathrm{F}}_{3}$
Case 8  ${\mathrm{F}}_{3}$
Case 9 

${\mathbf{A}}_{\mathbf{3}}$
Network 1  1  1  1  1  1  1  0.9996  0.9994  0.9997 
${\mathbf{A}}_{\mathbf{3}}$
Network 2  1  1  1  1  1  1  0.9997  1  1 
${\mathbf{A}}_{\mathbf{3}}$
Network 3  1  1  1  1  1  1  0.9997  1  1 
${\mathbf{A}}_{\mathbf{3}}$
Network 4  1  1  1  1  1  1  0.9990  1  1 
${\mathbf{A}}_{\mathbf{3}}$
Network 5  1  1  1  1  1  1  1  0.9996  0.9992 
${\mathbf{A}}_{\mathbf{3}}$
Network 6  1  1  1  1  1  1  1  0.9994  0.9994 
${\mathbf{A}}_{\mathbf{3}}$
Network 7  1  1  1  1  1  1  0.9997  0.9996  0.9997 
${\mathbf{A}}_{\mathbf{3}}$
Network 8  1  1  1  1  1  1  1  1  0.9994 
${\mathbf{A}}_{\mathbf{3}}$
Network 9  1  1  1  1  1  1  1  1  0.9995 
Benchmark graph. Model 4. This model combines the reduction of the community sizes with partition asymmetry, as can be seen in Figure 11. In this case,
${A}_{4}={A}_{2}$, and
${F}_{4}$ has eight communities whose expected sizes are
${C}_{1}^{F}=24$,
${C}_{2}^{F}=40$,
${C}_{3}^{F}=64$,
${C}_{4}^{F}=21$,
${C}_{5}^{F}=22$,
${C}_{6}^{F}=21$,
${C}_{7}^{F}=32$ and
${C}_{8}^{F}=32$. Despite the obvious complexity of this structure, the results presented in
Table 6 show the good performance of the tested algorithm.
Figure 11.
Benchmark graph. Model 4.
Table 6.
NMI. Model 4.
NMI  ${\mathrm{F}}_{4}$
Case 1  ${\mathrm{F}}_{4}$
Case 2  ${\mathrm{F}}_{4}$
Case 3  ${\mathrm{F}}_{4}$
Case 4  ${\mathrm{F}}_{4}$
Case 5  ${\mathrm{F}}_{4}$
Case 6  ${\mathrm{F}}_{4}$
Case 7  ${\mathrm{F}}_{4}$
Case 8  ${\mathrm{F}}_{4}$
Case 9 

${\mathbf{A}}_{\mathbf{4}}$
Network 1  0.9975  0.9969  0.9974  0.9976  0.9970  0.9968  0.9937  1  0.9994 
${\mathbf{A}}_{\mathbf{4}}$
Network 2  0.9956  0.9960  0.9962  0.9977  0.9982  0.9981  0.9960  0.9991  1 
${\mathbf{A}}_{\mathbf{4}}$
Network 3  0.9951  0.9964  0.9956  0.9980  0.9972  0.9970  0.9974  0.9975  0.9997 
${\mathbf{A}}_{\mathbf{4}}$
Network 4  0.9949  0.9964  0.9963  0.9978  0.9963  0.9980  0.9943  0.9990  0.9973 
${\mathbf{A}}_{\mathbf{4}}$
Network 5  0.9952  0.9956  0.9975  0.9960  0.9978  0.9976  0.9957  0.9988  0.9979 
${\mathbf{A}}_{\mathbf{4}}$
Network 6  0.9954  0.9957  0.9972  0.9967  0.9954  0.9989  0.9973  0.9961  0.9990 
${\mathbf{A}}_{\mathbf{4}}$
Network 7  0.9944  0.9955  0.9978  0.9971  0.9969  0.9971  0.9980  0.9989  1 
${\mathbf{A}}_{\mathbf{4}}$
Network 8  0.9952  0.9958  0.9973  0.9974  0.9969  0.9980  0.9996  0.9984  1 
${\mathbf{A}}_{\mathbf{4}}$
Network 9  0.9952  0.9967  0.9974  0.9965  0.9969  0.9972  0.9972  0.9968  1 
6. Discussion and Conclusions
Within the framework of the analysis of networks and social networks, this work is based on the analysis of fuzzy information. This article makes several main contributions. Our first objective is the definition of a new representation model, whose base is made up of two types of information sources. The first is a set of individuals whose direct connections are known and represented by a crisp graph or network. The other deals with some additional knowledge about these individuals in terms of soft information, represented by a family of vectors of fuzzy sets. On this basis, we define the multidimensional extended fuzzy graph based on fuzzy vectors (MEFVFG). This new model combines the crisp information of a graph with the soft information provided by the vectors about the individuals. We define it from the simplest case, where there is only one vector of fuzzy sets, to the multidimensional scenario involving multiple vectors.
Another goal of this work is the proposal of a specific application of this new model related to community detection. This problem has been previously addressed in terms of fuzzy measures that provide additional information about the synergies among the individuals in some works [
12,
13,
28]. In [
11], we proposed a methodology, named
DuoLouvain, to find a “good” partition of the individuals of an extended fuzzy graph considering both the connections defined by the edges and also the additional information provided by the fuzzy measures. It was based on the well-known Louvain method [
16], a greedy multiphase algorithm based on local moving [
42] and modularity optimization [
25]. That proposal [
11] is the inspiration for this paper. Now, we work on community detection in networks by considering additional soft information about the individuals of the network. Specifically, we face the existence of several fuzzy sets related to the nodes of a network, and our proposed application of the MEFVFG in the current work is to obtain realistic communities in it. That idea is quite useful and goes beyond any previous proposal, as it can be applied in a wide range of scenarios, for example, when linguistic variables appear. As far as we know, this situation has never been faced before, so this work intrinsically leads to the definition of a new type of problem.
Another important objective is the evaluation of the developed methodology. As mentioned above, the problem presented in this article has not been addressed before in the literature, so there are no other methods with which we can compare our proposal. Hence, to evaluate the performance of the new algorithm, we present some experimental results developed on the basis of benchmarking [
36] and NMI calculation [
37]. We develop some methods based on trapezoidal fuzzy sets with which we generate the elements of the gold-standard models considered, each with a standard partition summarizing an MEFVFG, which should be detected by the evaluated algorithm. The high level of the results shown in
Section 5 allows us to assert the good performance of the proposed method: the NMI calculated in almost all the scenarios is 1, which means that our algorithm perfectly detects the standard partition despite the complexity of the considered model.
As further research, we stress the importance of an in-depth analysis of the distance between fuzzy sets. Specifically, we are interested in analyzing how far apart two fuzzy sets are in order to compute new measures of additional information to be later considered in the presented methodology. Our idea is to work with the Hausdorff distance between two fuzzy sets [
43], based on the classic metric with the same name [
44], which is used in mathematics to quantify how far two subsets of a metric space are from each other. To pursue this theoretical line, it is essential to be familiar with the properties of fuzzy sets and also with various topological concepts related to the measurement of distances in different spaces.
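For intuition, the classic Hausdorff distance [44] between two finite point sets can be computed as below; extending it to fuzzy sets as in [43] (e.g., via α-cuts) is the research line proposed here, and this sketch covers only the crisp case:

```python
import math

def hausdorff(U, V):
    """Classic Hausdorff distance between two finite sets of points in R^d:
    the largest distance from a point in one set to its nearest point in
    the other set, taken in both directions."""
    d_uv = max(min(math.dist(u, v) for v in V) for u in U)  # directed U -> V
    d_vu = max(min(math.dist(u, v) for u in U) for v in V)  # directed V -> U
    return max(d_uv, d_vu)
```

For example, `hausdorff([(0, 0), (1, 0)], [(0, 0)])` is 1.0: every point of the second set is close to the first, but not vice versa, which is why both directed distances are needed.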
Another important line of future work is applied rather than theoretical. Let us emphasize the importance of applying this methodology in real-life cases in order to obtain realistic groups of individuals which consider not only the direct connections between them but also some additional soft information. Real problems are too complex to be represented by a crisp graph alone. The need to include as many sources of information as possible is clear. For example, in people’s behavior, things are not “black or white”. To the question “do you agree with this law?”, the answer may be something like “well, more or less, but not quite”. The more capable we are of representing these situations in a model, the more realistic the results obtained will be. We have to be prepared to understand, model, and analyze the fuzzy knowledge of real life, for example, by the consideration of linguistic terms. Undeniably, the linguistic terms that accompany any real problem are worth an in-depth analysis, for whose study the tools and methodology proposed here can be crucial. When facing this type of problem, it is vitally important to take into account its difficulty, both computationally and in terms of understanding. When fuzzy elements appear, it is essential to be well prepared to consider tools that mitigate the intrinsic difficulty.