1. Introduction
The community detection problem consists of identifying the community structure within a given network, typically by providing a partition of the network's nodes. Here, we present a brief overview of historical and recent developments in community detection.
1.1. Foundational Concepts and Early Methods
The notion of
community in a graph was formalized in the early 2000s. In 2002, Girvan and Newman introduced the edge betweenness algorithm, which removes edges with the highest betweenness scores iteratively to uncover community structures [
1]. The concept of modularity and its application to community detection was also introduced by Newman and Girvan in 2004 [
2], and for larger networks by Clauset, Newman, and Moore in [
3]. Modularity is a real number associated with a given partition that measures how strongly the partition divides the network into communities. A larger modularity score indicates higher edge density within individual communities and lower edge density between different communities. Communities are then detected by finding a partition that maximizes modularity. Modularity optimization became the basis for many subsequent algorithms, including the Louvain method [
4], which remains widely used due to its scalability and performance on large networks. More recently, Quiring and Vassilevski presented a modularity-based community detection algorithm that can be run in parallel [
5].
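To make the modularity score concrete, the following sketch (our own illustration, not drawn from the cited works, on a hypothetical toy graph of two triangles joined by a bridge edge) computes the Newman–Girvan modularity of a labeled partition directly from the adjacency matrix:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity: Q = (1/2m) * sum_uv [A_uv - d_u*d_v/(2m)] * delta(c_u, c_v)."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                        # degree vector
    two_m = d.sum()                          # 2m, the total degree
    same = np.equal.outer(labels, labels)    # delta(c_u, c_v)
    B = A - np.outer(d, d) / two_m           # modularity matrix
    return float((B * same).sum() / two_m)

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
labels = np.array([0, 0, 0, 1, 1, 1])        # one community per triangle
print(round(modularity(A, labels), 4))       # the triangle split scores 5/14 ~ 0.3571
```

The triangle partition scores 5/14, while placing every node in a single community scores exactly 0; the Louvain-style methods above search for the partition maximizing this quantity.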
1.2. Spectral Clustering and Graph Partitioning
Spectral methods apply linear algebraic techniques to graph Laplacians to find clusters. These techniques are particularly powerful because they can detect community structures even when those structures are not apparent from direct inspection of the network.
In [
6], Shi and Malik used graph Laplacians and their eigenvectors for clustering in the context of image segmentation. In 2001, Ng, Jordan, and Weiss proposed a normalized spectral clustering algorithm that became foundational in machine learning [
7]. In
Section 2 we will review a spectral method based on [
8] for optimizing generalized modularity.
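As a minimal illustration of the spectral idea, the sketch below performs classical spectral bisection using the sign pattern of the Laplacian's Fiedler vector. This is our own simplified example (not the specific algorithm of any cited work), again on a hypothetical two-triangle toy graph:

```python
import numpy as np

def spectral_bisection(A):
    """Split a connected graph in two via the sign pattern of the Fiedler vector
    (the eigenvector of the second-smallest eigenvalue of the Laplacian L = D - A)."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)          # eigh sorts eigenvalues ascending
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)     # one cluster label per node

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
labels = spectral_bisection(A)
print(labels)  # the two triangles receive different labels
```

Normalized spectral clustering in the style of Ng, Jordan, and Weiss generalizes this two-way split by embedding nodes into several eigenvectors and running k-means on the embedding.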
1.3. Stochastic Methods in Community Detection
Stochastic approaches have become foundational in modeling and analyzing community structures within complex networks. These methods leverage probabilistic frameworks to capture the inherent randomness and uncertainty in real-world networks.
Stochastic block models are generative models for networks with a community structure. They provide a probabilistic framework in which each node belongs to a latent group, and the probability of an edge existing depends on the groups of the two nodes. Recent research includes Karrer and Newman’s work on degree-corrected stochastic block models, which handle degree heterogeneity in real-world graphs [
9], and various works on exact recovery thresholds (see Abbe’s survey [
10]), in which communities can be recovered perfectly as the size of the graph grows, especially under sparse regimes.
Further advancements include Bayesian formulations of stochastic block models, for example, Peng and Carvalho proposed a Bayesian degree-corrected stochastic block model using a logistic regression framework with node-specific correction terms [
11].
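The generative viewpoint behind stochastic block models is easy to make concrete by sampling. The sketch below is our own toy example (block sizes and edge probabilities are arbitrary choices): each node is assigned a latent block, and each edge appears independently with a probability depending only on the endpoints' blocks:

```python
import random

def sample_sbm(sizes, P, seed=0):
    """Draw a graph from a stochastic block model: node u lies in block b_u,
    and edge {u, v} is present independently with probability P[b_u][b_v]."""
    rng = random.Random(seed)
    blocks = [b for b, s in enumerate(sizes) for _ in range(s)]
    n = len(blocks)
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < P[blocks[u]][blocks[v]]]
    return blocks, edges

# Two blocks of 20 nodes: dense within blocks (p = 0.6), sparse between (q = 0.05)
blocks, edges = sample_sbm([20, 20], [[0.6, 0.05], [0.05, 0.6]], seed=42)
within = sum(blocks[u] == blocks[v] for u, v in edges)
print(len(edges), within)
```

With within-block probability well above the between-block probability, the sampled graph has many more intra-community edges than inter-community edges; recovery thresholds of the kind surveyed by Abbe describe when the latent blocks can be inferred from such a sample.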
1.4. Deep Learning and Graph Neural Networks
Recent trends have seen the incorporation of neural models for community detection. Graph autoencoders and variational autoencoders support learning embeddings that reveal latent community structure [
12]. Graph convolutional networks have been used to classify nodes and indirectly infer communities [
13]. Deep clustering combines embedding and clustering into a unified training objective [
14,
15]. Although powerful, these methods often require labeled data or heuristics and are less interpretable than classical methods.
1.5. Evaluation and Benchmarking
Evaluating community detection algorithms is nontrivial. Benchmarks like the Lancichinetti–Fortunato–Radicchi model [
16] generate synthetic graphs with planted community structures and tunable parameters like overlapping nodes and community sizes. Real-world networks such as Zachary’s Karate Club are used extensively. Commonly used metrics include modularity, normalized mutual information, and the adjusted Rand index. Our focus here is on the modularity metric.
1.6. Applications in Real-World Networks
Community detection may be applied to a wide variety of networks, including social, biological, and information networks. For example, in scientific research networks, such as coauthorship or citation networks, community detection can uncover disciplinary boundaries, collaboration patterns, and the evolution of research fields; see the excellent survey [
17] by Fortunato for more details. Although traditional community detection methods often rely on modularity maximization or spectral partitioning, motif-based methods such as those used in MEGA [
18] highlight the potential of triangle-based clustering to capture tightly coordinated behaviors in social networks.
1.7. Generalized Modularity and Multiscale Community Detection
Modularity optimization remains a prevalent method for community detection, aiming to partition networks such that the density of edges within communities exceeds that expected under a null model. However, traditional modularity has limitations, notably the resolution limit, which hampers the detection of smaller communities. In fact, shortly after modularity was introduced, other methods of community detection arose using modified versions of Newman–Girvan modularity; see [
19,
20,
21]. Such works led Fasino and Tudisco to introduce a notion of generalized modularity in 2016 [
22], which contained the aforementioned notions of modularity as special cases.
Generalized modularity frameworks have been developed to address the resolution limit. Lambiotte et al. [
23] introduced a Markov stability approach that employs continuous-time Markov processes to evaluate the persistence of communities over multiple scales, supporting community detection at varying resolutions. These generalized approaches often integrate stochastic dynamics, offering a dynamic perspective on community structure that complements static modularity measures.
1.8. Outline of the Paper
We continue the analysis of a modularity-based community detection algorithm developed in [24], which explored a generalized modularity algorithm based on the algorithm given in [
5]. It was noted in [
24] that the use of generalized modularity allowed one to introduce an element of randomness to the algorithm. This stochastic version was shown to be capable of outperforming standard modularity algorithms. Here, we take a closer look at stochastic variants of the algorithm given in [
24].
In
Section 2, we provide a brief summary of the basic concepts needed to understand modularity-based community detection.
Section 3 outlines the algorithm developed in [
24] and our methodology to compare stochastic modularity algorithms to standard modularity. Our results indicate that generalized modularity algorithms can be tailored to the dataset at hand. These results are presented in
Section 4. We conclude with a summary and possible avenues for future research in
Section 5.
2. Community Detection Through Generalized Modularity
2.1. Generalized Modularity Matrices
Although there are many ways to perform community detection on a given network, it was Newman and Girvan [
2], who introduced the notion of modularity as a means of community detection. Community detection in this way is performed by maximizing the modularity Q of the given network, where
$$Q = \frac{1}{2m}\sum_{u,v}\left(A_{uv} - \frac{d_u d_v}{2m}\right)\delta(c_u, c_v),$$
with m the number of edges, A the adjacency matrix, $d_u$ the degree of vertex u, and $c_u$ the community of vertex u; here δ denotes the Kronecker delta. The matrix
$$B = A - \frac{1}{2m}\,dd^{\top}$$
is called the modularity matrix of the network. In principle, community detection can be performed by finding an eigenvector associated to the largest eigenvalue of the modularity matrix.
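A useful sanity check on the modularity matrix (our own illustration, on a hypothetical toy graph) is that its rows always sum to zero, so the all-ones vector lies in its kernel: the trivial one-community partition has modularity exactly 0.

```python
import numpy as np

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
d = A.sum(axis=1)                     # degree vector
B = A - np.outer(d, d) / d.sum()      # modularity matrix B = A - d d^T / (2m)

# Row i of B sums to d_i - d_i * (2m) / (2m) = 0, so B @ 1 = 0.
print(np.allclose(B @ np.ones(6), 0))  # True
```

Consequently, a positive largest eigenvalue of B signals that some nontrivial split improves on the one-community baseline.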
In 2016, Fasino and Tudisco generalized the notion of modularity [
22]. This notion of generalized modularity includes as special cases the approaches given by Reichardt and Bornholdt [
20], Ronhovde and Nussinov [
21], and Arenas, Fernández, and Gómez [
19].
Definition 1 ([22]). A generalized modularity matrix is a matrix $M = W(A - \sigma v v^{\top})W$, where A is the adjacency matrix of an undirected, connected (and possibly weighted) graph, W is a diagonal matrix, v is a nonzero vector with nonnegative entries, and σ is a positive real number.

Definition 2. If M is a generalized modularity matrix and P is a partition of the vertices of the associated graph G, then the associated modularity measure is
$$Q(P) = \sum_{u,v\in V} M_{uv}\,\delta(c_u, c_v),$$
where V is the vertex set of G, $M_{uv}$ denotes the $(u,v)$ entry of M, δ is the Kronecker delta, and $c_u$ and $c_v$ are the communities of u and v, respectively.

It will be useful to note an equivalent formula for Q:
$$Q(P) = \sum_{i\in P}\;\sum_{u,v\in i} M_{uv}.$$
2.2. Spectral Interpretation
We conclude this section by showing that maximizing generalized modularity (in the case of two communities) corresponds to the largest eigenvalue of the generalized modularity matrix. We follow the argument presented in [
24], which in turn is based on that in [
8] for plain modularity matrices.
Let M be a generalized modularity matrix, and let i be a subset of the vertex set. The corresponding modularity measure is therefore
$$Q(i) = \sum_{u,v\in i} M_{uv}.$$
Let $C_1$ and $C_2$ denote the two communities of the network, and define the vector s by
$$s_u = \begin{cases} +1 & \text{if } u \in C_1,\\ -1 & \text{if } u \in C_2.\end{cases}$$
Observe that
$$Q = \frac{1}{2}\sum_{u,v} M_{uv}\,(s_u s_v + 1),$$
since
$$\delta(c_u, c_v) = \frac{1}{2}(s_u s_v + 1).$$
We want the vector s that maximizes Q subject to the constraint $s_u \in \{\pm 1\}$. Any s satisfying this constraint also satisfies $s^{\top}s = n$, where n is the number of vertices. So, we instead consider the relaxed constraint $s^{\top}s = n$, where s may point in any direction.
Proceeding with Lagrange multipliers, we have
$$\frac{\partial}{\partial s_u}\Bigl(\sum_{v,w} M_{vw}\,s_v s_w + \lambda\bigl(n - \sum_v s_v^2\bigr)\Bigr) = 0 \quad\Longrightarrow\quad Ms = \lambda s.$$
Thus, maximization of Q (in the relaxed setting) corresponds to the largest eigenvalue $\lambda_1$ of M, and is given by
$$Q = \frac{1}{2}\Bigl(n\lambda_1 + \sum_{u,v} M_{uv}\Bigr).$$
The approximate solution (with the constraint $s_u \in \{\pm 1\}$) is
$$s_u = \operatorname{sign}(x_u),$$
where x is an eigenvector associated to $\lambda_1$.
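The spectral recipe can be sketched in a few lines. This is our own illustration on a hypothetical toy graph, using the plain Newman–Girvan modularity matrix as the special case of the argument above:

```python
import numpy as np

def leading_eigen_split(M):
    """Two-way split from the sign pattern of an eigenvector for the
    largest eigenvalue of a (generalized) modularity matrix M."""
    vals, vecs = np.linalg.eigh(M)
    x = vecs[:, np.argmax(vals)]
    return np.where(x >= 0, 1, -1)       # round the relaxed maximizer to +/-1

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
d = A.sum(axis=1)
two_m = d.sum()
B = A - np.outer(d, d) / two_m           # plain modularity matrix (the special case)
s = leading_eigen_split(B)
Q = float(s @ B @ s) / (2 * two_m)       # Q = s^T B s / (4m) for a +/-1 labeling
print(s, round(Q, 4))                    # recovers the two triangles; Q = 5/14
```

On this graph the leading eigenvector is antisymmetric under the triangle-swapping symmetry, so its sign pattern recovers the natural bisection exactly.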
3. Algorithm and Methodology
3.1. Generalized Modularity Framework
We use the algorithm devised in [
24], which is essentially the algorithm presented in [
5], merely modified for generalized modularity matrices. For the reader’s convenience, we include an outline of the pertinent results from [
24] here.
Recall that we have a generalized modularity matrix $M = W(A - \sigma v v^{\top})W$ associated to some network G with vertex set V and edge set E, where A is an adjacency matrix, W is a diagonal matrix, v is a nonzero vector with nonnegative entries, and σ is a positive number. Each partition of nodes for a given network corresponds to a declaration of communities within that network. Thus, each element of a given partition determines a community within the network. Given elements i and j of some partition P of vertices, denote the set union of i and j by $i \cup j$. The set $i \cup j$ can be thought of as representing the community obtained by merging communities i and j. Denote the change in modularity Q from merging elements i and j by $\Delta Q_{ij}$. Finally, recall that for communities i and j, we define
$$M_{ij} = \sum_{u\in i}\sum_{v\in j} M_{uv}.$$
Lemma 1 ([24]). For all $i, j \in P$,
$$Q(i\cup j) = Q(i) + Q(j) + 2M_{ij}.$$
Proof. Since $i\cup j$ consists of all vertices in i or j,
$$Q(i\cup j) = \sum_{u,v\in i\cup j} M_{uv} = \sum_{u,v\in i} M_{uv} + \sum_{u,v\in j} M_{uv} + 2\sum_{u\in i}\sum_{v\in j} M_{uv} = Q(i) + Q(j) + 2M_{ij}.$$
□
Lemma 2 ([24]). For all $i, j \in P$, $\Delta Q_{ij} = 2M_{ij}$.
Proof. Let P be the original partition and $P'$ be the partition after merging communities i and j. Applying Lemma 1, we find that
$$\Delta Q_{ij} = Q(P') - Q(P) = Q(i\cup j) - Q(i) - Q(j) = 2M_{ij}.$$
□
Lemma 3 ([24]). Let $\overline{M}$ be the matrix whose $(i,j)$ entry is $M_{ij}$, where $i, j \in P$. If i and j are merged, then
$$\overline{M}_{(i\cup j)\,k} = \overline{M}_{ik} + \overline{M}_{jk} \quad \text{for all } k \in P.$$
3.2. Hierarchical Agglomeration Algorithm
Theorem 1 ([24]). Let $\mathcal{P}$ be a partition and $\mathcal{P}'$ a partition that is formed by merging elements of $\mathcal{P}$. For $i \in \mathcal{P}'$, let $p_i$ be the vector whose vth entry is 1 if $v \subseteq i$ and 0 otherwise. If P is the matrix whose ith column is $p_i$, then
$$\overline{M}' = P^{\top}\overline{M}P.$$
Proof. Given $i \in \mathcal{P}'$ and $j \in \mathcal{P}'$, apply Lemmas 2 and 3 to find
$$\overline{M}'_{ij} = \sum_{v\subseteq i}\sum_{w\subseteq j} \overline{M}_{vw} = p_i^{\top}\overline{M}\,p_j,$$
showing that $\overline{M}'_{ij}$ is the $(i,j)$-entry of $P^{\top}\overline{M}P$. □
The above theorem underlies the following algorithm:
Start with the adjacency matrix A of the network and the degree vector $d$.
Calculate the pointer vector $p$ whose ith entry indicates which adjacent node the ith vertex would most like to merge with. This is the node j for which $\overline{M}_{ij}$ is the largest positive value (ties can be broken by choosing the lowest such index j). If $\overline{M}_{ij}$ is nonpositive for all nodes j, then set $p_i = i$. A very nice feature of this step is that it may be computed in parallel.
Use the pointer vector $p$ to merge nodes i and j that point to each other. Unpaired nodes remain as singletons.
Construct the coarsening matrix R that has a row for each (possibly merged) node and the same number of columns as A. The jth entry in row i is 1 if node j was merged with node i and 0 otherwise.
Calculate the coarsened adjacency matrix $A_c = RAR^{\top}$ and the coarsened degree vector $d_c = Rd$.
The algorithm terminates if $\overline{M}$ contains no positive off-diagonal entries. Otherwise, the algorithm recurs with $A_c$ and $d_c$ in place of A and $d$.
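The steps above can be sketched as follows. This is our own reading of the procedure for the Reichardt–Bornholdt special case; the merge criterion, tie-breaking, and termination details are assumptions where the text leaves them implicit:

```python
import numpy as np

def coarsen_once(A, d, gamma=1.0):
    """One merging pass of the scheme above, for the Reichardt-Bornholdt
    matrix B = A - gamma * d d^T / (2m); merging i and j is taken to be
    worthwhile exactly when B_ij > 0 (our reading of the merge criterion)."""
    n = A.shape[0]
    B = A - gamma * np.outer(d, d) / d.sum()
    ptr = np.full(n, -1)
    for i in range(n):                        # step 2: point at the best adjacent partner
        gains = np.where(A[i] > 0, B[i], -np.inf)
        gains[i] = -np.inf
        j = int(np.argmax(gains))             # argmax breaks ties at the lowest index
        if gains[j] > 0:
            ptr[i] = j
    ptr[ptr == -1] = np.arange(n)[ptr == -1]  # unmatched nodes point at themselves
    label = np.empty(n, dtype=int)            # step 3: merge mutual pointers only
    groups = {}
    for i in range(n):
        root = min(i, ptr[i]) if ptr[ptr[i]] == i else i
        label[i] = groups.setdefault(root, len(groups))
    R = np.zeros((len(groups), n))            # step 4: coarsening matrix
    R[label, np.arange(n)] = 1
    return R @ A @ R.T, R @ d, label          # step 5: coarsened A and d

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
Ac, dc, label = coarsen_once(A, A.sum(axis=1))
print(Ac.shape[0], label)  # mutual pairs are merged; the rest stay singletons
```

One pass on this toy graph pairs up the two strongest mutual attractions and leaves the bridge endpoints as singletons; the full algorithm would recurse on the coarsened matrices until no positive merge gain remains.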
The above algorithm is slightly modified to use a randomized value of the resolution parameter at each step. This modified algorithm is then used in two experiments aimed at comparing the performance of the stochastic generalized modularity algorithm to that of a standard modularity algorithm. Performance is compared by calculating the standard modularity measure for each resulting partition; a higher measure indicates better performance.
3.3. Experimental Setup and Evaluation Criteria
The first experiment runs a Reichardt–Bornholdt (RB) version of the stochastic algorithm 100 times and compares the mean modularity score to the modularity score obtained using the standard Newman–Girvan modularity algorithm. This is carried out over a variety of datasets. The Reichardt–Bornholdt modularity matrix $M_\gamma$ of a network is given by
$$M_\gamma = A - \gamma\,\frac{dd^{\top}}{2m},$$
where A is the adjacency matrix, m the number of edges, d the degree vector, and γ a positive number.
The second experiment runs the same Reichardt–Bornholdt algorithm and an Arenas–Fernández–Gómez (AFG) version of the stochastic algorithm 100 times each and compares means. This is also carried out over a variety of datasets. The Arenas–Fernández–Gómez modularity matrix $M_r$ is given by
$$M_r = (A + rI) - \frac{(d + r\mathbf{1})(d + r\mathbf{1})^{\top}}{2m + rn},$$
where n is the number of vertices of the graph and r is a real number.
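For concreteness, both matrices can be assembled in a few lines. We use the standard forms from the literature (RB rescales the null-model term by γ; AFG adds a self-loop of weight r to every node and takes the ordinary modularity matrix of the modified graph); the exact normalization intended here is our assumption, and the toy graph is hypothetical:

```python
import numpy as np

def rb_matrix(A, gamma):
    """Reichardt-Bornholdt: M = A - gamma * d d^T / (2m)."""
    d = A.sum(axis=1)
    return A - gamma * np.outer(d, d) / d.sum()

def afg_matrix(A, r):
    """Arenas-Fernandez-Gomez: add a self-loop of weight r to every node,
    then form the ordinary modularity matrix of the modified graph."""
    n = A.shape[0]
    Ar = A + r * np.eye(n)
    dr = Ar.sum(axis=1)
    return Ar - np.outer(dr, dr) / dr.sum()

# Hypothetical toy graph: two triangles joined by one bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1

# With gamma = 1 and r = 0, both reduce to the Newman-Girvan modularity matrix.
print(np.allclose(rb_matrix(A, 1.0), afg_matrix(A, 0.0)))  # True
```

Both families recover the Newman–Girvan matrix at their neutral parameter values, which is what makes them natural candidates for randomized resolution parameters.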
We pause to address an important point about how the resulting partitions are compared. Given some network G, running an RB modularity algorithm will return some partition, say $P_{\mathrm{RB}}$, which is found by optimizing the associated RB modularity measure. Similarly, running an AFG modularity algorithm will return a partition $P_{\mathrm{AFG}}$, which is found by optimizing the associated AFG modularity measure. Finally, we compare the partitions $P_{\mathrm{RB}}$ and $P_{\mathrm{AFG}}$ by calculating their corresponding Newman–Girvan modularity scores.
In our experiments, we focus on two representative generalized modularity frameworks: the Reichardt–Bornholdt (RB) and Arenas–Fernández–Gómez (AFG) models. Although arguably contrived, these models were selected because of their prominence in the literature and their tunable resolution parameters, which allow for a direct investigation of how randomized parameter selection affects performance within the generalized modularity framework.
3.4. Stability Under the Stochastic Parameter
To assess the sensitivity of the stochastic generalized modularity approach to the resolution parameter, we conducted a stability analysis on the datasets ENZYMES-g479 (a small biological network), G11 (a medium-sized synthetic network), and fe-4elt2 (a large sparse graph), using RB modularity. For each dataset, we performed 100 runs with the RB parameter γ uniformly distributed over each of three intervals, ordered from low to high.
We recorded the (standard) modularity score, the number of communities, the normalized variation of information (NVI) between partitions, and the runtime. NVI returns 0 if the two partitions are identical and 1 if they are completely different; see [
25] for more details. These metrics allow us to evaluate both the performance and consistency of the algorithm under varying parameter spaces. The results of this analysis are presented in
Section 4.
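The NVI itself is straightforward to compute from partition labels. The sketch below is our own; we normalize the variation of information by log n, which is one common convention that achieves the stated [0, 1] range:

```python
import math
from collections import Counter

def nvi(x, y):
    """Variation of information VI = H(X) + H(Y) - 2 I(X;Y), normalized by
    log n so that identical partitions score 0 and the maximum is 1."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    hx = -sum(c / n * math.log(c / n) for c in px.values())
    hy = -sum(c / n * math.log(c / n) for c in py.values())
    mi = sum(c / n * math.log(c * n / (px[a] * py[b])) for (a, b), c in pxy.items())
    return (hx + hy - 2 * mi) / math.log(n)

same = [0, 0, 1, 1, 2, 2]
print(abs(nvi(same, same)) < 1e-12, nvi(same, [0, 1, 2, 0, 1, 2]) > 0)  # True True
```

Averaging this quantity over all pairs of runs gives the per-interval NVI statistics reported in the next section.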
4. Results
4.1. Performance on Benchmark Datasets
Looking at
Table 1, generalized modularity outperforms or ties standard modularity on every dataset tested except econwm2. More importantly, generalized modularity can outperform standard modularity on average for certain datasets; take, for example, ca-netscience. The economics dataset econwm2 is an interesting case: RB modularity did not outperform standard modularity a single time, unlike AFG modularity. This supports the idea of tailoring a generalized modularity algorithm to the dataset at hand.
In
Figure 1, the histograms for stochastic RB modularity are all skewed, with ca-netscience, GD00-c, and G11 negatively skewed and Enzymes8 positively skewed. In
Figure 2, the histograms for the same datasets but with stochastic AFG modularity appear to follow the normal distribution more closely, each showing some improvement in symmetry, although ca-netscience is clearly still negatively skewed.
Notably, the standard deviations reported in
Table 1 are small for both methods, indicating stability, but RB tends to have slightly higher variability, consistent with the stronger skew observed in its distributions. AFG modularity results are not only more symmetric but also show tighter ranges, suggesting more consistent outputs.
Execution times reported in
Table 1 reveal that both methods scale with dataset size. However, AFG generally completes slightly faster than RB on large datasets, as seen in GD00-c and fe-4elt2, which could be a practical advantage in large-scale applications.
4.2. Distributional Insights and Output Variability
A two-sample t-test was performed on the stochastic RB and AFG modularity scores, with a null hypothesis of equal means. This null hypothesis is rejected for ca-netscience and GD00-c but not for Enzymes8 or G11. This result again supports the idea that one generalized modularity matrix may be better suited to a given dataset than others. For example, RB modularity outperforms AFG modularity on ENZYMES-g479 but not on ENZYMES8. Interestingly, this occurs despite the more symmetric and consistent distributions of AFG, showing that symmetry does not always translate to superior performance.
In addition to outperforming or matching standard modularity in most cases, the stochastic framework allows us to observe variability in modularity outcomes across runs. The distributional statistics and histograms reported in
Table 1 and
Figure 1 and
Figure 2, respectively, illustrate that different generalized modularity formulations (e.g., RB vs. AFG) produce distinct variability profiles across datasets. This suggests that the stochastic method does more than help escape local optima—it provides a mechanism for comparing the consistency of results under different models. Although our current analysis focuses on modularity values, future work may investigate the structure of the resulting ensemble of partitions, offering further insight into the stability and robustness of community assignments.
The performance differences observed across datasets can likely be attributed to interactions between the network structure and the modularity formulation. For example, the poor performance of RB on econwm2 may be due to its high edge density and complex topology, which could misalign with the resolution scale favored by RB’s default parameter range. Conversely, datasets such as ca-netscience and Enzymes-g479 show improved performance under stochastic RB, suggesting that RB’s formulation is well-suited to sparser, modular graphs when variance is introduced through . The AFG formulation tends to yield more stable outputs, possibly due to its inherent weighting scheme, which smooths variability across iterations. A more comprehensive structural characterization of each dataset, such as community strength, assortativity, or degree heterogeneity, would help ground these interpretations.
4.3. Observations from Parameter Sensitivity Analysis
The stability analysis reveals a consistent trend: the performance and stability of the stochastic algorithm vary significantly depending on the distribution of the resolution parameter and on network characteristics. For each configuration, we summarize the mean, variance, minimum, and maximum of the normalized variation of information across runs.
For the dataset ENZYMES-g479:
For the lowest interval, the total runtime was about 33 s. The algorithm suffers from moderate instability in this case: some runs produce identical partitions while others produce completely different partitions. Furthermore, some runs put all nodes into a single community while others made every node its own community. The parameter range in this case is too low to meaningfully resolve stable communities in this network.
For the middle interval, the total runtime was about 22 s. Although the mean NVI is slightly higher, the variance and range are much tighter, giving more consistent partitions despite the slightly higher average dissimilarity. In fact, 64 of the 100 runs produced partitions with 4 communities with low variance in modularity, further supporting that this case yields high-quality and stable partitions.
For the highest interval, the total runtime was about 21 s. Here we have a lower mean and variance in NVI, hinting at more consistent partitioning than in the first case. However, the average modularity found here was much lower than in the previous case, and the average community number was 17. Thus, even though the partitions are more stable, they are consistently over-partitioned and of lower quality in terms of modularity.
For the dataset G11:
For the lowest interval, the total runtime was about 4.5 min. The low mean NVI suggests that partitions are fairly similar on average across runs; however, the high variance and wide maximum indicate that some runs produce very different partitions. In fact, every run produced a single community except for one run, which produced 36 communities. This suggests instability and poor resolution at low γ: most runs collapse the network into a single community, while an occasional run over-partitions it.
For the middle interval, the total runtime was about 2.7 min. Though the mean NVI is higher than in the previous case, here it is tightly concentrated with low variance: the stochastic algorithm consistently finds moderately distinct but structurally similar partitions. The modularity itself is very high and stable, and the number of communities is narrowly ranged (mean of 10.85, min of 8, max of 14). Here, the algorithm achieves both quality and consistency, making it the best-performing configuration for G11.
For the highest interval, the total runtime was about 1.5 min. The NVI is higher than in both previous cases; even though its variance is low, the entire distribution is shifted upward, indicating consistently more dissimilar partitions. The number of communities is also quite high, with a mean of 49, suggesting over-partitioning. The average modularity remains high but is noticeably lower than in the previous case. Here, the partitions are more fragmented and experience slightly degraded modularity.
For the dataset fe-4elt2:
For the lowest interval, the total runtime was about 38 h. The runtime alone is prohibitive, and the results are not worth the wait: the low mean NVI is again misleading, since almost all runs yield the single-community partition. The modularity is extremely low as a result, confirming that the output is trivial and structurally uninformative for this parameter range in the case of large, sparse graphs like fe-4elt2.
For the middle interval, the total runtime was about 21 min. The NVI values are moderate but tightly clustered, suggesting the algorithm consistently finds distinct but structurally similar partitions. The high average modularity, together with the low variance in modularity and a reasonable community count distribution, indicates high-quality and stable outputs. This interval produces both meaningful and robust results, making it the most effective for fe-4elt2.
For the highest interval, the total runtime was about 10 min. The NVI is slightly higher than in the previous case, but still very tightly concentrated. In this case, though, the algorithm over-partitions the network (112 communities on average), with modularity remaining fairly high though lower than in the previous case. Results are consistent but overly granular, which could indicate overfitting structure, or perhaps the detection of substructures within the optimal communities.
Refer to
Figure 3 to see histograms of the NVI in each of the above cases. These results indicate that parameter choice should be tuned to the network structure: lower parameter ranges may lead to under-partitioning in large or dense graphs, while moderate ranges can yield stable, high-quality results. This supports the inclusion of adaptive or data-driven range selection in future implementations.
5. Future Work and Conclusions
Our study demonstrates that stochastic generalized modularity algorithms can outperform traditional modularity-based methods on certain datasets. By introducing randomness into the resolution parameter at each step of a hierarchical coarsening process, we observe measurable gains in modularity scores and expose variation in outcome stability across model types. These findings suggest that generalized modularity methods can be flexibly adapted to the structure of a given network, rather than applying a single global approach.
Several avenues remain open for expanding the theoretical and empirical scope of this work:
First, a rigorous analysis of the asymptotic behavior of the stochastic generalized modularity algorithm is needed. Understanding how convergence rates, complexity, and partition quality scale with network size could offer important guarantees and practical design guidelines.
Second, it is important to analyze the behavior of the algorithm on more structurally diverse networks. This includes networks with highly overlapping communities, heterogeneous degree distributions, and hierarchical or multilayer structures. In particular, exploring the adaptability of the stochastic method to dynamic networks—where community structure evolves over time—and very large-scale graphs would extend the method’s practical relevance.
Third, while we demonstrate variability in modularity outcomes across stochastic runs, further investigation is needed into the ensemble of resulting partitions. Metrics such as pairwise partition similarity, co-association frequency, and consensus clustering could be used to assess robustness and extract stable community cores.
Finally, expanding the theoretical foundation of the method, including a more principled treatment of how parameter randomization influences the optimization landscape, would clarify under what conditions the stochastic method is likely to outperform deterministic alternatives.
In conclusion, this work represents an initial step towards a more flexible and data-sensitive approach to modularity-based community detection. We anticipate that further refinement and theoretical grounding will enhance both the performance and interpretability of the stochastic framework.
Author Contributions
Conceptualization, J.T.; methodology, J.T. and J.L.; formal analysis, J.T. and J.L.; writing—original draft, J.T. and J.L.; writing—review and editing, J.T. and J.L.; supervision, J.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research and the APC were funded by Norfolk State University’s Provost Tenure-Readiness Program.
Data Availability Statement
The network data presented in this study are openly available in the Network Data Repository at doi/10.5555/2888116.2888372. All other raw data supporting the conclusions of this article will be made available by the authors on request.
Acknowledgments
The authors would like to thank the referees for the thoughtful feedback which greatly improved the quality and scope of this work.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Girvan, M.; Newman, M.E.J. Community structure in social and biological networks. Proc. Natl. Acad. Sci. USA 2002, 99, 7821–7826.
- Newman, M.E.J.; Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 2004, 69, 026113.
- Clauset, A.; Newman, M.E.J.; Moore, C. Finding community structure in very large networks. Phys. Rev. E 2004, 70, 066111.
- Blondel, V.D.; Guillaume, J.-L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008, P10008.
- Quiring, B.G.; Vassilevski, P.S. Properties of the Graph Modularity Matrix and its Applications; Technical Report; Lawrence Livermore National Lab: Livermore, CA, USA, 2019.
- Shi, J.; Malik, J. Normalized cuts and image segmentation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head, SC, USA, 13–15 June 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 731–737.
- Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; Volume 14.
- Newman, M.E.J. Networks: An Introduction; Oxford University Press: Oxford, UK, 2010.
- Karrer, B.; Newman, M.E.J. Stochastic blockmodels and community structure in networks. Phys. Rev. E 2011, 83, 016107.
- Abbe, E. Community detection and stochastic block models: Recent developments. J. Mach. Learn. Res. 2017, 18, 1–86.
- Peng, L.; Carvalho, L. Bayesian degree-corrected stochastic blockmodels for community detection. Electron. J. Stat. 2016, 10, 2746–2779.
- Kipf, T.N.; Welling, M. Variational graph auto-encoders. arXiv 2016, arXiv:1611.07308.
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
- Pan, S.; Hu, R.; Long, G.; Jiang, J.; Yao, L.; Zhang, C. Adversarially regularized graph autoencoder for graph embedding. arXiv 2018, arXiv:1802.04407.
- Xie, J.; Girshick, R.; Farhadi, A. Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 478–487.
- Lancichinetti, A.; Fortunato, S.; Radicchi, F. Benchmark graphs for testing community detection algorithms. Phys. Rev. E 2008, 78, 046110.
- Fortunato, S. Community detection in graphs. Phys. Rep. 2010, 486, 75–174.
- Hang, C.N.; Yu, P.-D.; Chen, S.; Tan, C.W.; Chen, G. MEGA: Machine learning-enhanced graph analytics for infodemic risk management. IEEE J. Biomed. Health Inform. 2023, 27, 6100–6111.
- Arenas, A.; Fernández, A.; Gómez, S. Analysis of the structure of complex networks at different resolution levels. New J. Phys. 2008, 10, 053039.
- Reichardt, J.; Bornholdt, S. Statistical mechanics of community detection. Phys. Rev. E 2006, 74, 016110.
- Ronhovde, P.; Nussinov, Z. Local resolution-limit-free Potts model for community detection. Phys. Rev. E 2010, 81, 046114.
- Fasino, D.; Tudisco, F. Generalized modularity matrices. Linear Algebra Appl. 2016, 502, 327–345.
- Lambiotte, R.; Delvenne, J.-C.; Barahona, M. Random walks, Markov processes and the multiscale modular organization of complex networks. IEEE Trans. Netw. Sci. Eng. 2014, 1, 76–90.
- Tipton, J.E.; Dillon, B.S. Properties of Generalized Modularity Matrices. Technical Report, 2023; unpublished work.
- Meilă, M. Comparing clusterings—an information based distance. J. Multivar. Anal. 2007, 98, 873–895.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).