Uncertainty in GNN Learning Evaluations: A Comparison between Measures for Quantifying Randomness in GNN Community Detection

The enhanced capability of graph neural networks (GNNs) in unsupervised community detection of clustered nodes is attributed to their capacity to encode both the connectivity and feature information spaces of graphs. The identification of latent communities holds practical significance in various domains, from social networks to genomics. Current real-world performance benchmarks are perplexing due to the multitude of decisions influencing GNN evaluations for this task. We compare three metrics for assessing the consistency of algorithm rankings in the presence of randomness, and evaluate the consistency and quality of performance between results obtained under a hyperparameter optimisation and under the default hyperparameters. The comparison reveals a significant performance loss when hyperparameter investigation is neglected, and the comparison of metrics indicates that ties in ranks can substantially alter the quantification of randomness. Failure to adhere to the same evaluation criteria may result in notable differences in the reported performance of methods for this task. The W randomness coefficient, based on the Wasserstein distance, is identified as providing the most robust assessment of randomness.


Introduction
Graph Neural Networks (GNNs) have gained popularity as a neural network-based approach for handling graph-structured data, leveraging their capacity to merge two information sources through the propagation and aggregation of node feature encodings along the network's connectivity [17]. Nodes within a network can be organized into communities based on similarities in associated features and/or edge density [33]. Analyzing the network structure to identify clusters or communities of nodes proves valuable in addressing real-world issues like misinformation detection [25], genomic feature discovery [4], and social network or research recommendation [49]. We consider unsupervised neural approaches to community detection that do not use any ground truth or labels during training to optimise the loss functions. As an unsupervised task, the identification of node clusters relies on latent patterns within the dataset rather than on "ground-truth" labels. Clustering holds significance for emerging applications lacking associated ground truth. Evaluating performance in discovering unknown information becomes crucial for applications where label access is restricted. Many graph applications involve millions of nodes, and datasets mimicking realistic scenarios exhibit low labeling rates [14].
Recently, Leeney and McConville [19] proposed a framework for fairly evaluating GNN community detection algorithms and demonstrated the importance of a hyperparameter optimisation procedure to performance comparisons. However, this is not done consistently across the field, although benchmarks are widely considered important. Confusion arising from biased benchmarks and evaluation procedures distorts understanding of the research field. Currently, resources, money, time and energy spent training models are wasted on inconclusive results which may have developed understanding in the field but go unpublished. Where research findings inform policy decisions or medical practices, publication bias can lead to decisions based on incomplete or biased evidence, in turn creating inefficiencies and downstream harms. To accurately reflect the real-world capabilities of research, one may use a common framework for evaluating proposed methods. To demonstrate the need for such a framework in this setting, we measure the difference between using the default parameters given by the original implementations and those optimised for under this framework. To quantify the influence of randomness on results, we compare three metrics, two proposed herein, for evaluating the consistency of algorithm rankings over different random seeds, which quantifies the robustness of results. Here, we also investigate how the quantification of randomness is affected when ties in performance between two or more algorithms are accounted for. Ties can occur when out-of-memory errors arise or when a metric such as conductance is optimised. In addition, we investigate how these metrics change as the number of tests increases, showing how sensitive each is to the breadth of investigations.

Related Work
There is a recognition of the necessity for rigour in frameworks assessing machine learning algorithms [29]. Several frameworks exist for assessing the performance of supervised GNNs performing node classification, link prediction and graph classification [7,26,8,28]. Our focus lies in unsupervised community detection, a task that proves more challenging to train and evaluate. While existing reviews on community detection offer insights, they generally lack thorough evaluations. The absence of a standardized evaluation in this task has been acknowledged in discussions concerning non-neural methods [20,15], but this consideration does not currently extend to GNNs.
Su et al. [36] provide a taxonomy of existing node clustering methods, comparing representative works in the GNN space as well as deep non-negative matrix factorisation and sparse filtering-based methods, but do not empirically evaluate the methods discussed. Chunaev [5] provides an overview of community detection methods but omits mention of GNN methods. Tu et al. [41] provide an overview of network representation learning for community detection and propose a novel method to incorporate node connectivity and features, but do not compare GNN methods in the study. Ezugwu et al. [9] survey clustering algorithms with an in-depth discussion of all applications of clustering algorithms, but do not include an investigation into methods that perform clustering on graphs. There are many traditional community detection methods that are not based on neural networks. Louvain [3] is a classic community detection algorithm that uses two phases: in the modularity optimisation phase, nodes are randomly ordered then added and removed from communities until there is no significant change in modularity; in community aggregation, nodes with the same community are collected and represented as a single node. Leiden [39] is an extension of Louvain designed to fix the tendency to discover weakly connected communities. Leiden has some subtle differences, which include the addition of a partition refinement phase in between the modularity optimisation and community aggregation phases of Louvain. Rather than the greedy merging of Louvain based on the largest increase of modularity, the chance of merging increases in proportion to the modularity. Also, Leiden uses a 'fast local move procedure', whereby it only visits the nodes whose neighbourhoods have been changed in the last iteration of the modularity optimisation. Label Propagation [31] uses network structure to find communities of densely connected nodes. This works by initialising every node with a unique label; at every iteration, each node is reassigned the label that the majority of its neighbours have. These traditional methods are transductive [43] as they do not learn a function that can be applied to unseen data, whereas in this work we only consider inductive GNN methods, which produce a model that can predict community assignments for data that it has not been trained on.
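These traditional baselines are available in standard libraries; a minimal sketch using networkx (the toy barbell graph is our illustrative choice, not a dataset from this work):

```python
import networkx as nx

# Two 5-cliques joined by a single bridge edge: an easy community structure.
G = nx.barbell_graph(5, 0)

# Louvain: modularity optimisation followed by community aggregation.
louvain = nx.community.louvain_communities(G, seed=0)

# Label propagation: each node repeatedly adopts its neighbours' majority label.
label_prop = list(nx.community.asyn_lpa_communities(G, seed=0))

comm_of = {n: i for i, c in enumerate(louvain) for n in c}
print(len(louvain), sorted(len(c) for c in louvain))
```

On this graph both methods should recover the two cliques; note that, as the text says, these algorithms are transductive and yield a partition, not a reusable model.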
Various frameworks exist for evaluating performance, and the evaluation procedure employed significantly influences the performance of all algorithms [53]. Under consistent conditions, it has been demonstrated that simple models can exhibit improved performance with a thorough exploration of the hyperparameter space [34]. This improvement may be attributed to the impact of random initialisations on performance [8]. Importantly, relying on results from papers without conducting the same hyperparameter optimisation across all models introduces inconsistency and yields a misleading benchmark. The biased selection of random seeds, which can skew performance, is considered unfair. Furthermore, not training over the same number of epochs or neglecting model selection based on validation set results leads to unfair comparisons, potentially resulting in inaccurate conclusions about the effectiveness of models. Previous work in this space by Leeney and McConville [19] proposed a framework for consistent community detection with GNNs and for quantifying the randomness in these investigations. In this work, we expand upon this metric of randomness by considering the effect of ties in performance. We show the effect of ties on ranking randomness and propose two new metrics that improve upon the previous work, evaluating how sensitive each is to the scale of the investigation.

Methodology
This section details the procedure for evaluation: the problem to be solved; the hyperparameter optimisation and the resources allocated to this investigation; the algorithms that are being tested; and the metrics of performance and datasets used.
The current method of evaluating algorithms suffers from various shortcomings that impede the fairness and reliability of model comparisons. Establishing a consistent framework provides a transparent and objective basis for comparing models. A standardised benchmark practice contributes to transparency by thoroughly documenting the factors influencing performance, encouraging researchers to engage in fair comparisons. To leverage results from previous research, it is essential to follow the exact evaluation procedure, saving time and effort for practitioners. Establishing consistent practices is crucial, as there is currently no reason for confidence in performance claims without a trustworthy evaluation; consistency fosters a deeper understanding and facilitates progress in the field. For this reason, we follow the procedure established by Leeney and McConville [19].
None of the evaluated algorithms are deterministic, as each relies on randomness for initialising the network. Thus, the consistency of a framework can be assessed by considering the extent to which performance rankings change when different randomness is introduced across all tests within the framework. A test refers to a metric's performance on a specific dataset, and a ranking indicates an algorithm's placement relative to the others. In this context, different randomness means each distinct random seed used in evaluating the algorithms. To evaluate the consistency of results obtained by this framework, we compare the existing coefficient of randomness with two new metrics, and investigate the effect of performance ties on these metrics. In the existing metric, ties are dealt with by awarding each tied algorithm the lowest rank among those that share a rank. In this work, this is compared with the scenario of awarding the mean of the tied ranks. Ties are likely to occur under certain metrics such as conductance, where an algorithm scores the optimal value of 0; ties will also occur where algorithms run out of memory during computation. With this setup, the improved version of Kendall's W coefficient of concordance [10] that can account for ties is used to assess the consistency of rankings. In addition, we use the Wasserstein distance to create another metric for quantifying the difference in rankings due to random seeds. From Leeney and McConville [19], the W randomness coefficient is calculated using the number of algorithms a and random seeds n, along with the tests of performance that create rankings of algorithms, and is defined by Equation 1:

W = 1 - (1/|T|) Σ_{t∈T} [ 12 S_t / (n²(a³ - a)) ].    (1)

The sum of squared deviations S_t = Σ_{i=1}^{a} (R_i - R̄)² is calculated over all algorithms, where R_i is the sum of algorithm i's ranks over the n random seeds and R̄ is the mean of these rank sums. This is averaged over all metrics and datasets that make up the suite of tests T.
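The two tie-handling schemes can be reproduced with scipy's rank methods; a small sketch with invented scores (rank 1 is best, so the negated scores are ranked):

```python
import numpy as np
from scipy.stats import rankdata

# Scores of five algorithms on one test; two tie (e.g. both reach the optimal
# conductance of 0, or both ran out of memory and share a sentinel score).
scores = np.array([0.81, 0.74, 0.74, 0.90, 0.52])

lowest_shared = rankdata(-scores, method="min")      # tied algorithms share one rank
mean_of_ties = rankdata(-scores, method="average")   # tied algorithms get the mean rank

print(lowest_shared)  # [2 3 3 1 5]
print(mean_of_ties)   # [2.  3.5 3.5 1.  5. ]
```

The `method="average"` scheme is the mean-rank variant compared in this work; `method="min"` approximates the original shared-lowest-rank scheme.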
Subtracting from one means that if W is high then randomness has affected the rankings, whereas a consistent ranking procedure results in a lower number. By detailing the consistency of a framework across the randomness evaluated, the robustness of the framework can be maintained, allowing researchers to trust results and maintain the credibility of their publications. However, this metric does not account for ties in performance, and deals with them by assigning the lowest rank to all the algorithms that tie. An improvement is that, where there are ties, each algorithm is given the average of the ranks that would have been assigned had no ties occurred. Where there are a large number of ties this reduces the value of W, and allows us to compute the tie-corrected coefficient as Equation 2:

W_t = 1 - (1/|T|) Σ_{t∈T} [ (12 Σ_{i=1}^{a} R_i² - 3n²a(a+1)²) / (n²a(a²-1) - n Σ_{j=1}^{n} Σ_{g=1}^{g_j} (t_g³ - t_g)) ],    (2)

where R_i is the sum of the ranks for algorithm i, g_j is the number of groups of ties in the rankings under seed j, and t_g is the number of tied ranks in group g. The other metric that we propose calculates the overlap in ranking distributions. This is normalised by the maximum difference that would occur when there is no uncertainty in rank due to randomness, as there would then be no overlap between the ranking distributions. Formally, we calculate the normalised Wasserstein distance [42,35] between rank distributions over each of the test scenarios. Given the probability distribution of the rank of an algorithm j as f_j(r) over the discrete ranking space R, the cumulative distribution is denoted F_j(r), and the Wasserstein distance between two rank distributions is given by Equation 3:

W_1(f_j, f_k) = Σ_{r∈R} |F_j(r) - F_k(r)|.    (3)

This leads us to the definition of the W_w Wasserstein randomness, given by Equation 4:

W_w = 1 - (1/(|T| D)) Σ_{t∈T} (2/(a(a-1))) Σ_{j<k} W_1(f_j, f_k),    (4)

where D is the mean pairwise distance attained when rankings are identical across seeds, so that each rank distribution is a point mass and there is no overlap between them. We also need to compare the effectiveness of these coefficients in assessing the randomness present in the investigations. To do this, we repeatedly sample subsets of the 44 tests, ten times for each subset size. This allows us to see how the different coefficients converge to the true coefficient value found by computing the metric over all tests.
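A minimal numpy/scipy sketch of the tie-corrected coefficient for single tests; the score matrices are invented for illustration, and mean ranks are used for ties as described above:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w_tied(scores):
    # scores: (n_seeds, n_algorithms); each row is one seed's results on a test.
    n, a = scores.shape
    # Mean ranks for ties, rank 1 = best.
    ranks = np.array([rankdata(-row, method="average") for row in scores])
    R = ranks.sum(axis=0)  # rank sum per algorithm across seeds
    # Tie correction: sum of (t^3 - t) over every group of tied ranks per seed.
    T = 0.0
    for row in ranks:
        counts = np.unique(row, return_counts=True)[1]
        T += np.sum(counts**3 - counts)
    num = 12 * np.sum(R**2) - 3 * n**2 * a * (a + 1) ** 2
    den = n**2 * a * (a**2 - 1) - n * T
    return num / den

rng = np.random.default_rng(0)
identical = np.tile([0.9, 0.7, 0.5, 0.3], (5, 1))  # 5 seeds agree perfectly
noisy = rng.random((5, 4))                          # 5 seeds disagree at random

# The randomness coefficient averages 1 - W over the suite of tests.
w_randomness = 1 - np.mean([kendalls_w_tied(identical), kendalls_w_tied(noisy)])
print(kendalls_w_tied(identical))  # 1.0: perfect agreement across seeds
```

When every seed produces the same ranking the per-test W is 1 and the randomness coefficient contribution is 0; random rankings push it towards 1.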
To assess the quality of the results, we compare whether performance is better under the optimised hyperparameters or the defaults reported by the original implementations. The different parameter sets are given a rank by comparing the performance on every test. This is then averaged across every test to give the Framework Comparison Rank (FCR) [19]. Demonstrating that failing to optimise hyperparameters properly can result in sub-optimal performance means that models which could have performed better with proper tuning may appear inferior. This affects decision-making, potentially leading to the adoption of sub-optimal solutions. In the real world, this can have costly and damaging consequences, and is especially critical in domains where model predictions impact decisions, such as healthcare, finance, and autonomous vehicles.
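The FCR computation reduces to ranking the parameter sets against each other on every test and averaging; a sketch with invented scores:

```python
import numpy as np
from scipy.stats import rankdata

# Invented scores for the two parameter sets (HPO vs. defaults) on five tests.
hpo_scores = np.array([0.72, 0.65, 0.80, 0.55, 0.90])
default_scores = np.array([0.70, 0.68, 0.75, 0.50, 0.88])

# On every test, rank the two sets against each other (rank 1 = best),
# then average across tests to get each set's Framework Comparison Rank.
per_test_ranks = np.array(
    [rankdata([-h, -d]) for h, d in zip(hpo_scores, default_scores)]
)
fcr_hpo, fcr_default = per_test_ranks.mean(axis=0)
print(fcr_hpo, fcr_default)  # 1.2 1.8
```

Here the HPO set wins four of five tests, so its FCR is lower (better); the same recipe extends to more than two parameter sets.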

Problem Definition
The problem of community detection on attributed graphs is defined as follows. The graph, where N is the number of nodes, is represented as G = (A, X), with the relational information of nodes modelled by the adjacency matrix A ∈ R^{N×N}. Given a set of nodes V and a set of edges E, let e_{i,j} = (v_i, v_j) ∈ E denote the edge that points from v_j to v_i. The graph is considered weighted, so the adjacency matrix satisfies 0 < A_{i,j} ≤ 1 if e_{i,j} ∈ E and A_{i,j} = 0 if e_{i,j} ∉ E. Also given is a set of node features X ∈ R^{N×d}, where d represents the number of different node attributes (or feature dimensions). The objective is to partition the graph G into k clusters such that nodes in each partition, or cluster, generally have similar structure and feature values. The only information typically given to the algorithms at training time is the number of clusters k to partition the graph into. Hard clustering is assumed, where each community detection algorithm must assign each node a single community to which it belongs, such that P ∈ R^N, and we evaluate the clusters associated with each node using the labels given with each dataset, such that L ∈ R^N. Metrics that compare to the ground truth labels are allowed to be used for early stopping and for hyperparameter optimisation.
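The notation above can be made concrete with a toy instance (all values below are invented for illustration):

```python
import numpy as np

N, d, k = 6, 4, 2  # nodes, feature dimensions, clusters (toy sizes)
rng = np.random.default_rng(0)

# Weighted adjacency A: 0 < A[i, j] <= 1 if the edge exists, else 0.
A = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0

X = rng.random((N, d))             # node feature matrix, X in R^(N x d)
P = np.array([0, 0, 0, 1, 1, 1])   # hard assignment: one community per node
L = np.array([0, 0, 0, 1, 1, 1])   # dataset labels, used only for evaluation

print(A.shape, X.shape, P.shape)
```

A method receives (A, X) and k at training time and must produce P; L is reserved for the evaluation metrics.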

Models
We consider a representative suite of GNNs, selected based on factors such as code availability and reimplementation time. In addition to explicit community detection algorithms, we also consider those that can learn an unsupervised representation of data, as there is previous research that applies vector-based clustering algorithms to the representations [24]. The following are GNN architectures that learn representations of attributed graphs without a comparison to any labels during the training process. All these methods have hyperparameters which control trade-offs in optimisation.
Deep Attentional Embedded Graph Clustering (DAEGC) uses a k-means target to self-supervise the clustering module to iteratively refine the clustering of node embeddings [45]. Deep Modularity Networks (DMoN) uses GCNs to maximise a modularity-based clustering objective, optimising cluster assignments through a spectral relaxation of the problem [40]. Neighborhood Contrast Framework for Attributed Graph Clustering (CAGC) [46] is designed for attributed graph clustering with a contrastive self-expression loss that selects positive/negative pairs from the data and contrasts representations with their k-nearest neighbours. Deep Graph Infomax (DGI) maximises mutual information between patch representations of subgraphs and the corresponding high-level summaries [44]. GRAph Contrastive rEpresentation learning (GRACE) generates a corrupted view of the graph by removing edges and learns node representations by maximising agreement across the two views [52]. Contrastive Multi-View Representation Learning on Graphs (MVGRL) argues that the best employment of contrastive methods for graphs is achieved by contrasting encodings from first-order neighbours and a general graph diffusion [12]. Bootstrapped Graph Latents (BGRL) [38] uses a self-supervised bootstrap procedure by maintaining two graph encoders; the online encoder learns to predict the representations of the target encoder, which is itself updated by an exponential moving average of the online encoder. SelfGNN [16] also uses this principle but applies augmentations of the feature space to train the network. Towards Unsupervised Deep Graph Structure Learning (SUBLIME) [21] trains an encoder with the bootstrapping principle applied over the feature space as well as a contrastive scheme between nearest neighbours. Variational Graph AutoEncoder Reconstruction (VGAER) [30] reconstructs a modularity distribution using a cross-entropy-based decoder from the encoding of a VGAE [18].

Hyperparameter Optimisation Procedure
There are sweet spots of architecture combinations that are best for each dataset [1], and the effects of not selecting hyperparameters (HPs) have been well documented. Choosing too wide a HP interval or including uninformative HPs in the search space can have an adverse effect on tuning outcomes in the given budget [50]. Thus, a HPO is performed under feasible constraints in order to validate the hypothesis that HPO affects the comparison of methods. It has been shown that grid search is not suited to searching for HPs on a new dataset and that Bayesian approaches perform better than random search [1]. There are a variety of Bayesian methods that can be used for hyperparameter selection. One such method is the Tree-structured Parzen Estimator (TPE) [2], which can retain the conditionality of variables [50] and has been shown to be a good estimator given limited resources [51]. The multi-objective version of the TPE [27] is used to explore the multiple metrics of performance investigated. Given a limited budget, the TPE is well suited as it allows the efficient exploration of parameters.
In this framework, a modification to the nested cross-validation procedure is used to match reasonable computational budgets: hyperparameters are optimised on the first seed tested, and those hyperparameters are reused on the other seeds. Additionally, it is beneficial to establish a common resource allocation, such as the number of epochs allowed in training or the number of hyperparameter trials. Algorithms that are designed to benefit from a small number of HPs should perform better, as they can search more of the space within the given budget. All models are trained with 1x 2080 Ti GPU on a server with 12GB of RAM and a 16-core Xeon CPU. Ensuring the same resources are used in all investigations means that relatively underfunded researchers can participate in the field, democratising access to contribution. Conversely, this also means that highly funded researchers cannot bias their results by exploiting the resources they have available.

Table 1: Resources allocated to an investigation; those detailed are shared across all investigations.

Suite of Tests
A test of an algorithm in this framework is the performance under a metric on a dataset. Algorithms are ranked on each test for every random seed used. For evaluation purposes, some metrics require the ground truth and others do not, although regardless, this knowledge is not used during training itself. The macro F1-score (F1) is calculated to ensure that evaluations are not biased by the label distribution, as community sizes are often imbalanced. Normalised mutual information (NMI) is also used, which is the amount of information that can be extracted from one distribution with respect to a second. For unsupervised metrics, modularity and conductance are selected. Modularity quantifies the deviation of the clustering from what would be observed in expectation under a random graph. Conductance is the proportion of total edge volume that points outside the cluster. These two metrics are unsupervised as they are calculated using the predicted cluster labels and the adjacency matrix, without using any ground truth. Many metrics are used in the framework as they align with specific objectives, ensuring that evaluations reflect a clear and understandable assessment of performance.
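All four metrics have standard implementations; a sketch on an invented toy partition (not a result from the framework):

```python
import networkx as nx
import numpy as np
from sklearn.metrics import f1_score, normalized_mutual_info_score

# Toy graph: two 5-cliques joined by a bridge; node 9 is deliberately
# misassigned so the metrics are non-trivial.
G = nx.barbell_graph(5, 0)
truth = [0] * 5 + [1] * 5
pred = [0] * 5 + [1] * 4 + [0]

# Supervised metrics compare predictions with the dataset labels (assuming
# the predicted labels are already aligned with the ground-truth classes).
nmi = normalized_mutual_info_score(truth, pred)
f1 = f1_score(truth, pred, average="macro")

# Unsupervised metrics need only the predicted clusters and the adjacency.
clusters = [{n for n in G if pred[n] == c} for c in (0, 1)]
modularity = nx.community.modularity(G, clusters)
conductance = np.mean([nx.conductance(G, c) for c in clusters])

print(round(f1, 3), round(modularity, 3), round(conductance, 3))
```

Note that clustering labels are permutation-invariant, so in practice F1 is computed after matching predicted clusters to ground-truth classes; here the toy labels are already aligned.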
Generalisation of performance on one dataset is often not statistically valid and can lead to overfitting on a particular benchmark [32]; hence multiple datasets are used in this investigation. To fairly compare different GNN architectures, a range of graph topologies are used to fairly represent potential applications. Each dataset can be summarised by commonly used graph statistics: the average clustering coefficient [48] and closeness centrality [47]. The former is the proportion of all the connections that exist in a node's neighbourhood compared to a fully connected neighbourhood, averaged across all nodes. The latter is the reciprocal of the mean shortest-path distance from all other nodes in the graph. All datasets are publicly available and have been used previously in GNN research [22].
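Both summary statistics can be computed with networkx; a sketch on the karate club graph (our illustrative choice, not one of the benchmark datasets):

```python
import networkx as nx

G = nx.karate_club_graph()

# Average clustering coefficient: how close each node's neighbourhood is
# to a clique, averaged over all nodes.
avg_clustering = nx.average_clustering(G)

# Closeness centrality per node: reciprocal of the mean shortest-path
# distance to every other node; we report the graph-level mean.
closeness = nx.closeness_centrality(G)
mean_closeness = sum(closeness.values()) / len(closeness)

print(round(avg_clustering, 3), round(mean_closeness, 3))
```

High clustering with low closeness suggests tight local neighbourhoods in a sprawling graph; the benchmark datasets span different combinations of these regimes.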
Using many datasets for evaluation means that dataset bias is mitigated, so the framework is better at assessing the generalisation capability of models to different datasets. These datasets are detailed in Table 2; the following is a brief summary. Cora [23], CiteSeer [11] and DBLP [37] are graphs of academic publications from various sources, with the features coming from words in publications and connectivity from citations. AMAC and AMAP are extracted from the Amazon co-purchase graph [13]. Texas, Wisc and Cornell are extracted from web pages from computer science departments of various universities [6]. UAT, EAT and BAT contain airport activity data collected from the National Civil Aviation Agency, Statistical Office of the European Union and Bureau of Transportation Statistics [22].

Evaluation and Discussion
The Framework Comparison Rank is the average rank when comparing the performance of the parameters found through hyperparameter optimisation against the default values. From Table 3 it can be seen that the Framework Comparison Rank indicates that the optimised hyperparameters perform better on average. This validates the hypothesis that hyperparameter optimisation significantly impacts the evaluation of GNN-based approaches to community detection. The results of the hyperparameter optimisation and the default parameters are visualised in Figure 1. From this, we can see the difference in performance under both sets of hyperparameters. On some datasets the default parameters work better than those optimised, which means that under the reasonable budget assumed in this framework they are not reproducible. This adds strength to the claim that using the default parameters is not credible: without validation that these parameters can be recovered, results cannot be trusted and mistakes are harder to identify. On the other hand, the hyperparameter optimisation often performs better than the default values. Algorithms were published knowing these parameters can be tuned to better performance. With reasonable resources, the performance can be increased, which means that without the optimisation procedure, inaccurate or misleading comparisons are propagated. Reproducible findings are the solid foundation that allows us to build on previous work and are a necessity for scientific validity.
The W randomness coefficient quantifies the consistency of rankings over the different random seeds tested on, averaged over the suite of tests in the framework. With less deviation of prediction under the presence of randomness, an evaluation finds a more confident assessment of the best algorithm. A marginally higher W value using the optimised hyperparameters indicates that the default parameters are more consistent over randomness. There is little difference in the coefficients for the original W randomness coefficient, potentially because the default parameters have been evaluated with a consistent approach to model selection and a constant resource allocation for training time. However, when we account for the number of tied ranks, we find that the HPO has more tied ranks over the investigation, due to out-of-memory errors or optimal conductance values. This may be because the hyperparameter optimisation leads to better results, reducing the difference between algorithms. This is supported by the W randomness coefficients that account for ties, showing that the HPO is less consistent across randomness. By assessing the W randomness coefficient we can reduce the impact of biased evaluation procedures. However, it is important to explain the reasons why randomness might be affecting the results: the higher W for the hyperparameter analysis might be because algorithms increase performance under the HPO. Therefore, when all methods are fairly evaluated under a HPO, the spread of rankings may overlap more, which is not explicitly negative. With this coefficient, researchers can quantify how reliable their results are, and therefore their usability in real-world applications. This sets the baseline for consistency in evaluation procedure and allows a better understanding of relative method performance.
In Figure 2 we see the convergence of each metric as the number of tests increases. There is significant overlap between the default and HPO investigations. When we use the original W randomness coefficient but change how tied ranks are dealt with, by taking the mean of the ties, the two distributions overlap less. This demonstrates that how ties are dealt with does change the quantification of randomness. Comparing this to the W_t randomness coefficient, we can see that the tie correction does not affect the measurement of randomness at the scale of the investigation carried out herein. However, comparing with W_w, it is clear that this coefficient has significantly less spread for low numbers of tests and converges more quickly to the true value as measured by the full breadth of the investigation. Therefore, this metric can be used to provide a more accurate assessment of the amount of randomness in the investigation and the robustness of the methods evaluated.
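The pairwise distance underlying W_w can be sketched with scipy on invented rank samples (the normalising constant used below, the distance between point-mass rankings, is our assumption for the two-algorithm case):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical ranks of two algorithms over ten seeds on one test.
ranks_a = np.array([1, 1, 2, 1, 1, 2, 1, 1, 1, 2])
ranks_b = np.array([2, 2, 1, 2, 2, 1, 2, 2, 2, 1])

# Earth mover's distance between the two empirical rank distributions:
# the area between their CDFs over the discrete ranking space.
d = wasserstein_distance(ranks_a, ranks_b)

# With a = 2 algorithms and identical rankings on every seed, the rank
# distributions would be point masses at ranks 1 and 2, at distance 1.
normalised = d / 1.0
print(d)  # 0.4
```

The more the seeds shuffle the two algorithms, the more their rank distributions overlap and the smaller the distance, so one minus the normalised average distance rises with randomness.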
Given different computational resources, performance rankings will vary. Future iterations of the framework should experiment with the number of trials and the impact of over-reliance on specific seeds, or extend the hyperparameter options to a continuous distribution. Additionally, finding the best general algorithm will require a wider range of graph topologies and sizes than are considered here; we also do not explore other feature-space landscapes or class distributions.

Conclusion
In this work we demonstrate flaws in how GNN-based community detection methods are currently evaluated, leading to potentially misleading and confusing conclusions. To address this, a framework is applied that provides a more consistent and fair evaluation for GNN community detection. We provide further evidence that consistent HPO is key to this task by quantifying the difference in performance between HPO and reported values. Finally, different metrics for assessing the consistency of rankings under the presence of randomness are compared. It is found that the W_w Wasserstein randomness coefficient is the most robust to samples of the full test investigation.

Figure 1 :
Figure 1: The average performance and standard deviation of each metric, averaged over every seed tested on, for all methods on all datasets. The hyperparameter investigation under our framework is shown in colour, compared with the default hyperparameters in dashed boxes. Lower values of Conductance are better. Out-of-memory occurrences happened on the AMAC dataset with the following algorithms during HPO: DAEGC, SUBLIME, DGI, CAGC and VGAER, as well as CAGC under the default HPs.

Figure 2 :
Figure 2: Here the different metrics of quantifying randomness are compared against samples of the original testing space. The randomness in rankings over samples of experiments using the default hyperparameters is compared against the HPO results. The distinction between the original W Randomness Coefficient proposed by Leeney and McConville [19] and the other metrics of randomness is that ties are settled by taking the mean of the ranks rather than assigning the lowest rank to all algorithms.

Table 2 :
The datasets and associated statistics.

Table 3 :
Here we show the quantification of intra-framework consistency using the W Randomness Coefficient, W Randomness with Mean Ties, Tied W Randomness and W Wasserstein Randomness, and inter-framework disparity using the Framework Comparison Rank. Low values for the Framework Comparison Rank and all W Randomness Coefficients are preferred.