# Spikyball Sampling: Exploring Large Networks via an Inhomogeneous Filtered Diffusion


## Abstract


## 1. Introduction

- reduces the expansion to a subset of the possible neighbors,
- chooses this subset according to predefined rules.

## 2. Related Work

## 3. Proposed Method

Algorithm 1: Spikyball algorithm.

The function `GetNodesInfo` retrieves the information associated with the nodes within each layer ${\mathcal{L}}_{k}$. This information may be required when dealing with an attributed graph, in the case where the sampling is guided by the attributes. While the algorithm is written for the general case of graphs with attributes, it can be used without modification when no attributes are present: in that case, `GetNodesInfo` always returns an empty set. The function `GetNeighbors` collects the edges incident to the nodes in ${\mathcal{L}}_{k}$, as well as the data attached to these edges if available. Both functions (`GetNodesInfo` and `GetNeighbors`) make $\left|{\mathcal{L}}_{k}\right|$ requests, with $\left|{\mathcal{L}}_{k}\right|$ being the number of nodes in ${\mathcal{L}}_{k}$. In many cases, for graphs without attributes or for networks that allow combining node and edge requests, `GetNodesInfo` is not needed and the number of requests is divided by 2. We assume that no information about the neighbors is collected, except their node ids, which are encoded in the edges. Indeed, collecting information from the neighbors would require an additional query for each neighbor, which can quickly become prohibitive and may be limited in the case of a social network.

The function `FilterEdges` separates the edges connecting the source nodes to nodes already collected in previous layers, ${E}_{k}^{\left(\mathrm{in}\right)}$, from the edges pointing to new nodes, ${E}_{k}^{\left(\mathrm{out}\right)}$. Furthermore, this function can be used to remove edges according to some criterion, for example, if the weight of an edge is smaller than a value set by the user. The function `SampleEdges` selects the edges to follow from the set provided by `FilterEdges` and outputs the nodes that will form the new layer, ready for the next exploration step. The exploration rules are encoded inside this function and are explained in more detail in Section 3.1. To perform the sampling, `SampleEdges` can take into account the data collected from the nodes in ${\mathcal{L}}_{k}$ and the data associated with the edges. The last function, `AddToGraph`, adds the new nodes and their connections (possibly with their attributes) to ${G}_{s}$. Finally, the union of two sets is performed with `Union` to update the list of sampled nodes ${\mathcal{L}}_{T}$.
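Putting the functions above together, the exploration loop can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the adjacency dict stands in for the API requests made by `GetNeighbors`/`GetNodesInfo`, and a uniform edge selection stands in for the exploration rules of Section 3.1.

```python
import random

def spikyball_sample(graph, seeds, num_layers, ratio=0.1, rng=None):
    """Layer-by-layer exploration in the spirit of Algorithm 1.

    `graph` is a plain adjacency dict {node: set(neighbors)}; in the
    real setting, GetNeighbors/GetNodesInfo would query a remote API.
    """
    rng = rng or random.Random(0)
    sampled = set(seeds)        # L_T: all nodes collected so far
    layer = list(seeds)         # L_k: current layer
    edges = []                  # edges of the sampled graph G_s
    for _ in range(num_layers):
        # GetNeighbors: collect the edges incident to the current layer
        e_in, e_out = [], []
        for u in layer:
            for v in graph[u]:
                # FilterEdges: split edges toward already-sampled nodes
                # (E_k^in) from edges pointing to new nodes (E_k^out)
                (e_in if v in sampled else e_out).append((u, v))
        edges.extend(e_in)      # AddToGraph: keep intra-sample edges
        if not e_out:
            break
        # SampleEdges: keep a fraction of E_k^out (uniform stand-in
        # for the exploration rules of Section 3.1)
        k = max(1, int(ratio * len(e_out)))
        chosen = rng.sample(e_out, min(k, len(e_out)))
        edges.extend(chosen)
        # the targets of the chosen edges form the next layer L_{k+1}
        layer = list({v for _, v in chosen})
        sampled.update(layer)   # Union: L_T <- L_T ∪ L_{k+1}
    return sampled, edges
```

With `ratio=1.0` and no filtering, this loop degenerates into a plain snowball expansion, which is the limiting case discussed in Section 3.2.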

#### 3.1. Exploration Rules

The exploration rules are implemented in `SampleEdges`, which takes two input arguments. The first one is the list of edges, with their properties, ${E}_{k}^{\left(\mathrm{out}\right)}$, that connect nodes from ${\mathcal{L}}_{k}$ to their neighbors not already sampled (not in ${G}_{s}$). The second one is the data associated with the nodes in ${\mathcal{L}}_{k}$ that may influence the selection of the edges. Within this function, a selection of the edges to explore is performed among those in ${E}_{k}^{\left(\mathrm{out}\right)}$. The target nodes of the selected edges become the elements of ${\mathcal{L}}_{k+1}$, to be explored in the next step $k+1$. Several exploration schemes can be defined, with different sets of rules and different properties. The key element is a probability mass function ${p}_{k}$ associated with the set of edges ${E}_{k}^{\left(\mathrm{out}\right)}$ that guides the choice of edges to follow from layer $k$ to the next. This can also be seen as a conditional probability ${p}_{k}(j \mid i)$ of choosing $j$ at layer $k+1$ if $i$ has been collected at layer $k$.
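A generic `SampleEdges` driven by such a probability mass function can be sketched as follows. The `weight` callable is a hypothetical stand-in for whichever exploration rule is in use; drawing without replacement is one reasonable reading of "selecting the edges to follow", not necessarily the paper's exact procedure.

```python
import random

def sample_edges(e_out, weight, num_edges, rng=None):
    """Draw edges from E_k^(out) according to p_k(e) = weight(e) / Z,
    where Z is the sum of weights over E_k^(out), without replacement.

    `weight` encodes the exploration rule (e.g. a function of the
    source node's degree or of edge attributes).
    """
    rng = rng or random.Random(0)
    pool = [(e, weight(e)) for e in e_out]
    if sum(w for _, w in pool) == 0:
        return []
    chosen = []
    for _ in range(min(num_edges, len(pool))):
        # one weighted draw, then remove the chosen edge from the pool
        r = rng.uniform(0, sum(w for _, w in pool))
        acc = 0.0
        for i, (e, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(e)
                del pool[i]
                break
    return chosen
```

A uniform `weight` recovers an unbiased selection; non-uniform weights bias the diffusion, which is exactly the inhomogeneity the method's name refers to.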

**Spikyball:**

**Uni-edge ball:**

**Uni-node ball**or

**Fireball:**

**Hubball family:**

**Coreball family:**

**Remark:**
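The families named above are defined in the corresponding paragraphs of the paper. As a purely illustrative sketch (an assumption inferred from the α and γ parameters appearing in the Figure 3 caption, not the paper's exact formulas), degree-power weightings for two parametric families could look like:

```python
def hubball_weight(source_degree, alpha):
    """Hypothetical Hubball-style weight: edges leaving high-degree
    (hub) sources are favored when alpha > 0 and penalized when
    alpha < 0; alpha = 0 gives uniform weight 1 to every edge."""
    return source_degree ** alpha

def coreball_weight(target_degree, gamma):
    """Hypothetical Coreball-style weight: edges pointing to
    well-connected targets are favored when gamma > 0; gamma = 0 is
    again the uniform case (matching the Hubball with alpha = 0)."""
    return target_degree ** gamma
```

Either function can be plugged into a weighted `SampleEdges` selection as the probability mass function $p_k$, up to normalization.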

#### 3.2. Connection to Existing Exploration Samplings

**Snowball:**

**Forest Fire and Fireball:**

**Expander-Graph Ball:**

## 4. Theoretical Properties

**Property 1.**

#### 4.1. Hubball Family

**Theorem 1.**

**Proof.**

**Lemma 1.**

**Proof.**

**Theorem 2.**

**Proof.**

#### 4.2. Coreball Family

**Theorem 3.**

**Proof.**

## 5. Experimental Evaluation

#### 5.1. Comparing to the Literature

**Transitivity:**

**Pagerank ratio:**

**Density:**

**IVIP and degree distribution:**

#### 5.2. Influence of the Spikyball Parameters

#### 5.3. Probability to Visit Influencers

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Illenberger, J.; Flötteröd, G. Estimating network properties from snowball sampled data. Soc. Netw. 2012, 34, 701–711.
- Leskovec, J.; Kleinberg, J.; Faloutsos, C. Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, Chicago, IL, USA, 21–24 August 2005; pp. 177–187.
- Maiya, A.S.; Berger-Wolf, T.Y. Sampling community structure. In Proceedings of the 19th International Conference on World Wide Web, Raleigh, NC, USA, 26–30 April 2010; pp. 701–710.
- Doerr, C.; Blenn, N. Metric Convergence in Social Network Sampling. In Proceedings of the 5th ACM Workshop on HotPlanet’13, Hong Kong, China, 12–16 August 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 45–50.
- Wu, Y.; Cao, N.; Archambault, D.; Shen, Q.; Qu, H.; Cui, W. Evaluation of graph sampling: A visualization perspective. IEEE Trans. Vis. Comput. Graph. 2016, 23, 401–410.
- Gjoka, M.; Kurant, M.; Butts, C.T.; Markopoulou, A. Practical recommendations on crawling online social networks. IEEE J. Sel. Areas Commun. 2011, 29, 1872–1892.
- Zhou, Z.; Zhang, N.; Das, G. Leveraging History for Faster Sampling of Online Social Networks. arXiv 2015, arXiv:1505.00079.
- Li, Y.; Wu, Z.; Lin, S.; Xie, H.; Lv, M.; Xu, Y.; Lui, J.C.S. Walking with Perception: Efficient Random Walk Sampling via Common Neighbor Awareness. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering (ICDE), Macao, China, 8–11 April 2019; pp. 962–973.
- Ribeiro, B.; Towsley, D. Estimating and sampling graphs with multidimensional random walks. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, Melbourne, Australia, 1–3 November 2010; pp. 390–403.
- Goodman, L.A. Snowball sampling. Ann. Math. Stat. 1961, 32, 148–170.
- Leskovec, J.; Faloutsos, C. Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, 20–23 August 2006; pp. 631–636.
- Voudigari, E.; Salamanos, N.; Papageorgiou, T.; Yannakoudakis, E.J. Rank degree: An efficient algorithm for graph sampling. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, CA, USA, 18–21 August 2016; pp. 120–129.
- Hübler, C.; Kriegel, H.P.; Borgwardt, K.; Ghahramani, Z. Metropolis algorithms for representative subgraph sampling. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; pp. 283–292.
- Gjoka, M.; Kurant, M.; Butts, C.T.; Markopoulou, A. Walking in Facebook: A Case Study of Unbiased Sampling of OSNs. In Proceedings of the 2010 IEEE INFOCOM, San Diego, CA, USA, 14–19 March 2010; pp. 1–9.
- Loukas, A.; Vandergheynst, P. Spectrally approximating large graphs with smaller graphs. arXiv 2018, arXiv:1802.07510.
- Iwasaki, K.; Shudo, K. Comparing graph sampling methods based on the number of queries. In Proceedings of the 2018 IEEE International Conference on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), Melbourne, Australia, 11–13 December 2018; pp. 1136–1143.
- De Choudhury, M.; Lin, Y.R.; Sundaram, H.; Candan, K.S.; Xie, L.; Kelliher, A. How does the data sampling strategy impact the discovery of information diffusion in social media? In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media, Washington, DC, USA, 23–26 May 2010.
- Maiya, A.S.; Berger-Wolf, T.Y. Benefits of bias: Towards better characterization of network sampling. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 105–113.
- Rozemberczki, B.; Kiss, O.; Sarkar, R. Little Ball of Fur: A Python Library for Graph Sampling. arXiv 2020, arXiv:2006.04311.
- Rozemberczki, B.; Allen, C.; Sarkar, R. Multi-scale Attributed Node Embedding. arXiv 2019, arXiv:1909.13021.
- Rossi, R.A.; Ahmed, N.K. The Network Data Repository with Interactive Graph Analytics and Visualization. In Proceedings of the AAAI, Austin, TX, USA, 25–30 January 2015.

**Figure 1.** Snowball (**left**) and Spikyball (**right**) sampling example. The sampled nodes are colored in purple, pink, and green; the non-sampled ones are in grey. Starting from the central node (user defined), both samplings expand in successive hops, following neighbor connections (the purple circle is 1-hop, the pink circle 2-hop, and the green circle 3-hop). The size of the nodes symbolizes their importance, and the Spikyball focuses on collecting the important ones in priority. This importance can be related to their degree or to some other attribute associated with them.

**Figure 2.** Degree distribution for different sampled graphs on Barabási-Albert (left) and Erdős-Rényi (right) random graphs. The sampling reduces the graph to 10% of its original size (50 k nodes). The curve corresponding to the initial graph (not sampled) is in cyan. The different samplings provide close results. The slope is identical except for the Metropolis-Hastings one, which is steeper for B-A and reaches a maximum shifted toward high-degree nodes in E-R.

**Figure 3.** Degree distribution of the sampled nodes for Hubballs (left) and Coreballs (right) with different parameters, on the Facebook graph. For the sampled nodes, their degree in the sampled graph (top) and in the initial graph (bottom) lead to two different degree distributions. They differ because, in the sampled graph, edges to unsampled nodes are not present. The Hubball with α = 0 is equal to the Coreball with γ = 0, hence the plots can be compared.

**Figure 4.**Degree distribution (left) and density (right) of the collected nodes on the Facebook graph. The sampling reduces the graph to 20% of its original size. The curve corresponding to the initial graph is in cyan.

**Figure 5.** Probability to visit a node with respect to its degree on the Facebook graph for several sampling schemes. Fireball and Edgeball methods have been omitted from the graphs for clarity. The experiment starts from four randomly-selected initial nodes and is repeated 10 times. The Snowball result is plotted as an indicator of the value for the non-random case, with probability 1. Note that for the Snowball, although the propagation is non-random, the initial starting points are randomized between runs. (a) Ball with 2000 nodes (approx. 4 layers of Spikyball) and a selection of 10% of the nodes at each layer; (b) ball with 8000 nodes (approx. 8 layers) and the same selection rate. On the left, most of the nodes were visited only once during the 10 tests, except for the highest-degree ones (above 100 connections). The large 95% confidence interval indicates high variation in the results. On the right, the curve shows a more robust outcome, with lower fluctuations and a high probability of being visited when a node has a degree larger than 50 (more than 8 visits out of 10). Spikyball-based methods behave more stably than the alternative sampling schemes.

**Table 1.**Datasets used in numerical experiments. Only the largest connected component has been kept in case of a disconnected graph.

Dataset | $\left|\mathit{V}\right|$ | $\left|\mathit{E}\right|$
---|---|---
Facebook [20] | 22 k | 171 k
Github [20] | 37.7 k | 289 k
Google+ [21] | 202 k | 1.13 M
Youtube [21] | 495 k | 1.93 M

**Table 2.** Kolmogorov-Smirnov test for degree distribution similarity (lower is closer) for the Facebook and Github graphs sampled at 20%, and the Google+ and Youtube graphs sampled at 10%. The degree distribution is considered either fully (“full” columns) or restricted to high-degree nodes only (degree greater than the mean degree of all nodes in the graph).

Sampling Method | Facebook full | Facebook $\mathit{d}>$ mean | Github full | Github $\mathit{d}>$ mean | Google+ full | Google+ $\mathit{d}>$ mean | Youtube full | Youtube $\mathit{d}>$ mean
---|---|---|---|---|---|---|---|---
Metropolis-Hastings | 0.118 | 0.127 | 0.174 | 0.059 | 0.216 | 0.105 | 0.237 | 0.020
CNRW | 0.227 | 0.053 | 0.278 | 0.076 | 0.591 | 0.153 | 0.362 | 0.178
CNARW | 0.181 | 0.038 | 0.273 | 0.065 | 0.489 | 0.161 | 0.353 | 0.141
Forest Fire | 0.160 | 0.016 | 0.281 | 0.059 | 0.529 | 0.102 | 0.318 | 0.141
Fireball | 0.266 | 0.077 | 0.306 | 0.077 | 0.557 | 0.113 | 0.396 | 0.169
Edgeball | 0.268 | 0.084 | 0.211 | 0.035 | 0.638 | 0.164 | 0.342 | 0.169
Hubball | 0.243 | 0.096 | 0.112 | 0.027 | 0.573 | 0.140 | 0.279 | 0.099
Coreball | 0.537 | 0.183 | 0.520 | 0.056 | 0.702 | 0.188 | 0.499 | 0.223
Snowball | 0.224 | 0.082 | 0.073 | 0.022 | 0.482 | 0.052 | 0.279 | 0.125
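The Kolmogorov-Smirnov statistic reported in Table 2 is the maximum absolute difference between the two empirical cumulative distribution functions (here, of the sampled and original degree sequences). A plain-Python sketch (`scipy.stats.ks_2samp` computes the same statistic):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples.
    Lower means the distributions are closer, as in Table 2."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        # empirical CDF value at x = fraction of the sample <= x
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

Restricting both samples to degrees above the mean before calling this function gives the “$d >$ mean” columns.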

**Table 3.** Numerical results for the Facebook graph sampled at 10% and 20%. Full graph’s average clustering coefficient is 0.36, transitivity ratio is 0.23, density is 6.7 × 10^{−4}. Relative error for the average clustering coefficient and transitivity ratio is shown in parentheses.

Sampling Method | Rate | Avg. Clustering Coef. | Transitivity Ratio | PageRank Ratio | Density
---|---|---|---|---|---
Metropolis-Hastings | 10% | 0.420 (16.83%) | 0.405 (76.25%) | 1.490 | 4.91 × 10^{−4}
Metropolis-Hastings | 20% | 0.409 (12.57%) | 0.330 (42.15%) | 1.463 | 3.08 × 10^{−3}
CNRW | 10% | 0.464 (29.06%) | 0.337 (45.15%) | 2.007 | 9.52 × 10^{−3}
CNRW | 20% | 0.445 (23.78%) | 0.299 (28.66%) | 1.771 | 5.22 × 10^{−3}
CNARW | 10% | 0.369 (2.49%) | 0.287 (23.42%) | 2.057 | 7.99 × 10^{−3}
CNARW | 20% | 0.375 (4.19%) | 0.274 (17.77%) | 1.757 | 4.75 × 10^{−3}
Forest Fire | 10% | 0.320 (11.10%) | 0.291 (25.23%) | 1.773 | 8.49 × 10^{−3}
Forest Fire | 20% | 0.411 (14.37%) | 0.275 (18.42%) | 1.600 | 4.46 × 10^{−3}
Fireball | 10% | 0.413 (14.82%) | 0.303 (30.32%) | 1.887 | 1.09 × 10^{−2}
Fireball | 20% | 0.392 (8.95%) | 0.254 (9.12%) | 1.650 | 5.96 × 10^{−3}
Edgeball | 10% | 0.441 (22.59%) | 0.238 (2.44%) | 1.854 | 1.50 × 10^{−2}
Edgeball | 20% | 0.449 (24.73%) | 0.257 (10.53%) | 1.664 | 6.23 × 10^{−3}
Hubball | 10% | 0.484 (34.46%) | 0.220 (5.51%) | 1.805 | 1.41 × 10^{−2}
Hubball | 20% | 0.460 (27.94%) | 0.223 (4.02%) | 1.619 | 6.46 × 10^{−3}
Coreball | 10% | 0.476 (32.32%) | 0.291 (25.39%) | 2.030 | 1.97 × 10^{−2}
Coreball | 20% | 0.461 (28.23%) | 0.300 (29.20%) | 1.823 | 7.57 × 10^{−3}
Snowball | 10% | 0.373 (3.81%) | 0.245 (5.66%) | 1.750 | 1.22 × 10^{−2}
Snowball | 20% | 0.399 (10.87%) | 0.243 (4.48%) | 1.660 | 5.69 × 10^{−3}

**Table 4.** Numerical results for the Github graph sampled at 10% and 20%. Full graph’s average clustering coefficient is 0.168, transitivity ratio is 1.24 × 10^{−2}, density is 4.07 × 10^{−4}. Relative error for the average clustering coefficient and transitivity ratio is shown in parentheses.

Sampling Method | Rate | Avg. Clustering Coef. | Transitivity Ratio | PageRank Ratio | Density
---|---|---|---|---|---
Metropolis-Hastings | 10% | 0.110 (34.64%) | 0.056 (355.32%) | 2.070 | 2.56 × 10^{−3}
Metropolis-Hastings | 20% | 0.166 (1.22%) | 0.032 (160.00%) | 2.096 | 2.18 × 10^{−3}
CNRW | 10% | 0.237 (41.31%) | 0.061 (397.52%) | 3.763 | 6.74 × 10^{−3}
CNRW | 20% | 0.214 (27.73%) | 0.041 (227.87%) | 2.631 | 3.55 × 10^{−3}
CNARW | 10% | 0.211 (25.79%) | 0.061 (397.22%) | 3.778 | 6.75 × 10^{−3}
CNARW | 20% | 0.186 (10.72%) | 0.041 (232.39%) | 2.645 | 3.54 × 10^{−3}
Forest Fire | 10% | 0.238 (41.91%) | 0.063 (408.04%) | 3.656 | 6.30 × 10^{−3}
Forest Fire | 20% | 0.190 (13.41%) | 0.038 (207.28%) | 2.607 | 3.59 × 10^{−3}
Fireball | 10% | 0.246 (46.53%) | 0.048 (288.58%) | 3.243 | 5.45 × 10^{−3}
Fireball | 20% | 0.207 (23.26%) | 0.040 (225.95%) | 2.583 | 3.75 × 10^{−3}
Edgeball | 10% | 0.310 (85.18%) | 0.040 (223.09%) | 3.036 | 4.65 × 10^{−3}
Edgeball | 20% | 0.247 (47.28%) | 0.035 (184.11%) | 2.452 | 3.09 × 10^{−3}
Hubball | 10% | 0.373 (122.79%) | 0.025 (98.76%) | 2.594 | 3.94 × 10^{−3}
Hubball | 20% | 0.348 (107.42%) | 0.020 (61.37%) | 2.092 | 2.48 × 10^{−3}
Coreball | 10% | 0.252 (50.12%) | 0.054 (337.38%) | 4.247 | 1.05 × 10^{−2}
Coreball | 20% | 0.188 (12.06%) | 0.039 (214.62%) | 2.968 | 4.84 × 10^{−3}
Snowball | 10% | 0.435 (159.64%) | 0.023 (86.65%) | 2.987 | 4.28 × 10^{−3}
Snowball | 20% | 0.393 (134.28%) | 0.019 (49.94%) | 2.027 | 2.48 × 10^{−3}

**Table 5.** Numerical results for the Google+ graph sampled at 10%. Full graph’s average clustering coefficient is 0.148, transitivity ratio is 0.238, density is 5.57 × 10^{−5}. Relative error for the average clustering coefficient and transitivity ratio is shown in parentheses.

Sampling Method | Avg. Clustering Coef. | Transitivity Ratio | PageRank Ratio | Density
---|---|---|---|---
Metropolis-Hastings | 0.255 (72.46%) | 0.312 (30.50%) | 1.457 | 6.74 × 10^{−4}
CNRW | 0.397 (168.32%) | 0.308 (28.85%) | 1.910 | 2.71 × 10^{−3}
CNARW | 0.284 (92.07%) | 0.303 (26.89%) | 1.909 | 2.37 × 10^{−3}
Forest Fire | 0.317 (114.05%) | 0.298 (24.65%) | 1.859 | 2.10 × 10^{−3}
Fireball | 0.346 (133.83%) | 0.317 (32.72%) | 1.839 | 2.33 × 10^{−3}
Edgeball | 0.363 (145.33%) | 0.264 (10.37%) | 1.744 | 3.06 × 10^{−3}
Hubball | 0.397 (168.16%) | 0.264 (10.64%) | 1.545 | 2.79 × 10^{−3}
Coreball | 0.400 (170.69%) | 0.296 (23.83%) | 1.940 | 3.23 × 10^{−3}
Snowball | 0.360 (143.13%) | 0.285 (19.50%) | 1.576 | 1.97 × 10^{−3}

**Table 6.** Numerical results for the Youtube graph sampled at 10%. Full graph’s average clustering coefficient is 0.11, transitivity ratio is 8.8 × 10^{−3}, density is 1.57 × 10^{−5}. Relative error for the average clustering coefficient and transitivity ratio is shown in parentheses.

Sampling Method | Avg. Clustering Coef. | Transitivity Ratio | PageRank Ratio | Density
---|---|---|---|---
Metropolis-Hastings | 0.115 (4.24%) | 0.052 (487.46%) | 2.930 | 2.07 × 10^{−4}
CNRW | 0.153 (39.22%) | 0.040 (354.69%) | 4.201 | 4.65 × 10^{−4}
CNARW | 0.125 (13.55%) | 0.041 (364.51%) | 4.210 | 4.58 × 10^{−4}
Forest Fire | 0.146 (32.40%) | 0.035 (299.01%) | 4.050 | 4.05 × 10^{−4}
Fireball | 0.154 (40.13%) | 0.038 (332.71%) | 3.967 | 4.90 × 10^{−4}
Edgeball | 0.186 (69.20%) | 0.028 (221.57%) | 3.886 | 4.57 × 10^{−4}
Hubball | 0.306 (177.98%) | 0.009 (6.58%) | 3.219 | 3.74 × 10^{−4}
Coreball | 0.143 (30.13%) | 0.038 (331.17%) | 4.526 | 5.91 × 10^{−4}
Snowball | 0.294 (167.08%) | 0.009 (3.64%) | 3.620 | 3.81 × 10^{−4}

**Table 7.**IVIP score (higher is better) for the different datasets, averaged over 10 sampling runs for each dataset (standard deviation is shown between parentheses).

Sampling Method | Facebook 10% | Facebook 20% | Github 10% | Github 20% | Google+ 10% | Youtube 10%
---|---|---|---|---|---|---
Metropolis-Hastings | 0.200 (0.022) | 0.378 (0.028) | 0.299 (0.020) | 0.538 (0.019) | 0.291 (0.019) | 0.367 (0.009)
CNRW | 0.322 (0.009) | 0.531 (0.013) | 0.513 (0.004) | 0.704 (0.002) | 0.644 (0.004) | 0.573 (0.001)
CNARW | 0.325 (0.011) | 0.523 (0.010) | 0.517 (0.004) | 0.709 (0.003) | 0.613 (0.004) | 0.574 (0.001)
Forest Fire | 0.272 (0.026) | 0.451 (0.055) | 0.502 (0.009) | 0.698 (0.021) | 0.549 (0.034) | 0.540 (0.033)
Fireball | 0.278 (0.030) | 0.482 (0.025) | 0.476 (0.028) | 0.685 (0.019) | 0.580 (0.024) | 0.547 (0.022)
Edgeball | 0.353 (0.010) | 0.552 (0.009) | 0.430 (0.027) | 0.653 (0.011) | 0.658 (0.004) | 0.549 (0.006)
Hubball | 0.354 (0.010) | 0.557 (0.005) | 0.333 (0.049) | 0.555 (0.031) | 0.630 (0.008) | 0.450 (0.009)
Coreball | 0.413 (0.013) | 0.613 (0.013) | 0.584 (0.017) | 0.801 (0.005) | 0.698 (0.003) | 0.643 (0.002)
Snowball | 0.274 (0.053) | 0.471 (0.035) | 0.387 (0.066) | 0.632 (0.069) | 0.546 (0.052) | 0.449 (0.037)


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ricaud, B.; Aspert, N.; Miz, V.
Spikyball Sampling: Exploring Large Networks via an Inhomogeneous Filtered Diffusion. *Algorithms* **2020**, *13*, 275.
https://doi.org/10.3390/a13110275
