Filtering Statistics on Networks

We explored the statistics of filtering of simple patterns on a number of deterministic and random graphs as a tractable example of information processing in complex systems. In this problem, multiple inputs map to the same output, and the statistics of filtering are represented by the distribution of this degeneracy. For a few simple filter patterns on a ring we obtained an exact solution of the problem, and we described numerically more difficult filter setups. For each of the filter patterns and networks we found a few numbers essentially describing the statistics of filtering and compared them across networks. Our results for networks with diverse architectures appear to be essentially determined by two factors: whether the graph's structure is deterministic or random, and the vertex degree. We find that filtering in random graphs produces much richer statistics than in deterministic graphs. This statistical richness is reduced by increasing the graph's degree.


Introduction
Filtering is the processing of an input signal to produce an output signal according to some rule based on the content of the input. The filter does not add information: the number of possible outputs is less than (or at most equal to) the number of possible inputs. Thus, outputs are degenerate: multiple inputs map to the same output. Even very simple filters can produce a complex distribution of degeneracies [1]. This characteristic, a nontrivial mapping of a configuration space to a smaller set of final configurations, also appears in sampling, compression and more general information processing [2,3], and in numerous complex systems, including the basins of attraction of local minima in spin glasses, and deep learning neural networks [4][5][6]. Understanding the statistics of degeneracies can give important insight into these systems. In a previous work [1], we showed that a simple filtering problem produces behaviour of the degeneracy distribution analogous to that of these more complex systems, and that one can obtain exact results up to large system sizes that are simply not accessible in more complex problems.
Numerous studies have shown that the heterogeneous structure of interactions between elements of a complex system, usually represented as a complex network, can have a profound effect on the properties of the system [7]. Here we examine a simple filtering process on a network. The input consists of the binary states of nodes in a given network. The filter outputs a 1 for every instance of a particular pattern of states on a node and its immediate neighbours, and a 0 when the pattern is absent. This generalises the filtering problem examined in Ref. [1] for binary inputs in a cyclical string (ring). The process applied to a small graph is represented in Figure 1. We studied this problem on a variety of degree-regular graphs. We show that one may find the exact degeneracy distribution corresponding to the complete set of all possible inputs, up to relatively large system sizes, for any given graph. Just as in our previous study on rings, we show that the principal characteristics of the degeneracy distribution are described asymptotically by three key numbers. These numbers may be obtained exactly by simple arguments.

Figure 1. Application of different filters to a set of zeroes and ones placed on a graph. Each node of the input and output graphs is in one of two states, namely 0 (open circles) or 1 (closed circles). In the SR filter, an output node is one only when the corresponding input node is one and all of its neighbours are zero. In the WR filter, an output node is one when the corresponding input node is one and one or more of its neighbours are zero.
This problem serves as a tractable simple model to explore information processing in complex systems. In a graph, the connections between nodes create complex interactions between the filter output at each node. We show that the degeneracy distribution correctly captures this complexity. In particular, the entropy of the degeneracy distribution, called the relevance [8], is lower in deterministically constructed graphs and higher in random graphs. We show that the relevance is maximal when the graph degree takes its smallest value greater than two. We compared two different filters, and found that the stronger filter (detecting less easily satisfied conditions) is more informative, because it is more sensitive to the state of neighbouring nodes. Interestingly, as Figure 6 demonstrates, our results for regular graphs of diverse architectures essentially depend only on the vertex degree.

Filtering statistics on a ring
For orientation, we begin by studying nodes located on a ring. The input is a string of zeroes and ones {x_i}, x_i = 0, 1, of length n, with the periodic condition x_{n+1} = x_1. We consider the complete set of all possible unique inputs. Its size N is determined by the length n of the inputs, N = 2^n.
The filter works as follows: every instance of a specific pattern in the input (a short sequence of ones and zeroes) is marked by a one in the corresponding position in the output. All other positions are marked with zeroes. Multiple inputs correspond to the same output, creating a distribution of degeneracies of the outputs. We illustrate the results from a simple example filter pattern in Figure 2 (a) and (b). We observe complex degeneracy distributions reminiscent of those observed in, for example, Ref. [9].
The filter pattern may be arbitrary, but for illustrative purposes we will consider in particular a family of filters consisting of a string of ones with zeroes at either end: 010, 0110, 01110, etc. The length of the filter, w, can be used as a crude control parameter to observe the effects on resolution and relevance (see below). For convenience, we use the notation 1^l to indicate a chain of l ones. Thus the filter of length w is 01^{w−2}0. In principle, for each of the 2^n possible inputs we can obtain, one by one, an output numerically. In practice, we use a more efficient algorithm described in Ref. [1]. Other types of filter patterns on a ring may be analyzed using the same methods.
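The pattern-marking step can be sketched in a few lines of Python. This is a minimal brute-force version, not the efficient algorithm of Ref. [1]; marking the starting position of each occurrence is our own convention here, and any fixed marking position yields the same degeneracy statistics, since it merely shifts every output cyclically.

```python
def ring_filter(x, pattern):
    """Mark every occurrence of `pattern` in the cyclic string x.

    Output position i is 1 when the pattern begins at position i
    (read cyclically); all other positions are 0.
    """
    n, w = len(x), len(pattern)
    return [int(all(x[(i + j) % n] == pattern[j] for j in range(w)))
            for i in range(n)]

# The family 01^{w-2}0; for w = 3 the pattern is [0, 1, 0]:
print(ring_filter([0, 1, 0, 0], [0, 1, 0]))  # -> [1, 0, 0, 0]
```

Applying `ring_filter` to each of the 2^n inputs in turn yields the full set of outputs and their degeneracies.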
In particular, the total number of outputs is given by M(n) = N_cum(d_1). The cumulative degeneracy distribution is broad, but decays more rapidly than a power law.
The tail of the cumulative distribution has a notably complex structure resembling a staircase, with steep jumps between steps. The heights of these jumps are especially large in the region of high degeneracies. Similar structures may be observed in real systems, see for example Figure 3 of Ref. [9]. As shown in Ref. [1], when the number of ones in the output is few, and some or all of them are separated by large gaps, such outputs have very similar but not exactly equal degeneracies for finite n. These closely located degeneracies lead to the staircase structure observed in the cumulative distribution.
Let us consider the evolution of the degeneracy distribution (and cumulative distribution) with input size n. The largest degeneracy d_D(n) corresponds to the output with all zeroes and, for large n, grows as d_D(n) ≅ z_d^n, where the value of z_d depends on the specific filter. Naturally, N(d_D, n) = 1. The number of outputs with degeneracy 1 behaves as N(1, n) ≅ z_a^n. Meanwhile the total number of outputs, M(n), is asymptotically M(n) ≅ z_g^n. Together, these three key constants, z_d, z_g and z_a, delimit the asymptotic behaviour of the degeneracy distribution [1]. We list these numbers for a selection of short filter patterns on a ring in Table 1.
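These three quantities can be read off directly from a brute-force enumeration over all 2^n inputs. The sketch below (the helper names are our own) applies the pattern 010 with the output 1 marking the central node of the pattern; it is feasible only for modest n, unlike the efficient method of Ref. [1].

```python
from collections import Counter
from itertools import product

def sr_ring(x):
    """Pattern 010 on a ring: node i outputs 1 iff x_i = 1 and both of
    its cyclic neighbours are 0 (the output 1 marks the central node)."""
    n = len(x)
    return tuple(int(x[i] == 1 and x[i - 1] == 0 and x[(i + 1) % n] == 0)
                 for i in range(n))

def degeneracy_stats(n, filt):
    """Filter all 2^n inputs and extract M(n), d_D(n) and N(1, n)."""
    counts = Counter(filt(x) for x in product((0, 1), repeat=n))
    M = len(counts)                                 # total number of outputs
    d_D = max(counts.values())                      # largest degeneracy
    N1 = sum(1 for d in counts.values() if d == 1)  # outputs of degeneracy one
    return M, d_D, N1
```

For n = 6, for example, the largest degeneracy is that of the all-zero output, realised by the 29 inputs containing no isolated 1.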
Rather surprisingly, one may obtain these asymptotic behaviours, and exact expressions for the constants z_g, z_d and z_a, through simple arguments. Each output consists of isolated ones separated by strings of zeroes of various lengths. By careful consideration of how valid outputs for a larger n can be constructed by adding specific segments to shorter outputs, one may construct recursive relations for the key quantities M(n), d_D(n) and N(1, n), whose asymptotics are given by z_g, z_d and z_a. To demonstrate this, we focus on the particular family of filter patterns consisting of a chain of ones with a zero at each end. The shortest such pattern is 010. Each member of this set may be indexed by the length of the filter, w ≥ 3. The filter pattern length w determines the minimum number of zeroes, w − 2, between each one. We give the derivation of z_d, z_g and z_a for any w in Section 4.2 below.

Effect of filter length
In analogy with complex systems, we can consider each filter pattern as sampling the hidden state of a complex system [1]. The length of these filters acts as a crude control parameter of our sampling. Intuitively, we expect shorter filters to be more informative. The resolution of a sampling process, defined as the entropy of a sample,

H[y] = −∑_y p(y) ln p(y),   (1)

where p(y) = d(y)/2^n is the fraction of inputs mapping to output y, is a measure of the ability to distinguish, at the output, between different input states [8]. It takes its maximum value when there is a different output for each input. However, in this case all outputs are distinct, and so these filters are not informative about the system being sampled. As shown in Ref. [8], the informativeness of a sample is captured by a different entropy measure, the relevance, defined as

H[d] = −∑_d [d N(d, n)/2^n] ln[d N(d, n)/2^n].   (2)

Results for a variety of short filter patterns are given in Table 1. The family of filters composed of a string of ones with a zero at each end, 010, 0110, 01110, etc., is indicated in boldface in the Table. As can be seen in the Table, the relevance is greater for shorter filters, but is actually zero for the shortest possible filters 0 and 1. The filter pattern 1 trivially reproduces the input, while 0 reproduces its inverse, and all outputs have degeneracy one. Within the family of filters 01^{w−2}0, the relevance is maximised for w = 3.
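Both measures can be computed directly from the degeneracy counts. The sketch below assumes the standard definitions of Ref. [8]: the resolution is the entropy of the output probabilities p(y) = d(y)/2^n, and the relevance is the entropy of the fraction of inputs falling in each degeneracy class.

```python
import math
from collections import Counter

def resolution_relevance(degeneracies):
    """Return (H[y], H[d]) in nats, given the list of degeneracies d(y)
    over all distinct outputs y."""
    N = sum(degeneracies)  # total number of inputs, 2^n
    H_y = -sum(d / N * math.log(d / N) for d in degeneracies)
    N_d = Counter(degeneracies)  # N(d): number of outputs with degeneracy d
    H_d = -sum(d * m / N * math.log(d * m / N) for d, m in N_d.items())
    return H_y, H_d
```

When every output has degeneracy one, the resolution is maximal while the relevance vanishes, matching the discussion of the trivial filters 0 and 1 above.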
Filter patterns of length two begin to have nontrivial properties. For the pattern 01, the number of outputs with degeneracy 1, N(1, n), is either 0 (when n is odd) or 2 (when n is even), so z_a = 1. This is because the only outputs that have degeneracy one are periodic sequences of alternating 0's and 1's: there are two of these sequences when n is even, and none when n is odd. The maximum degeneracy d_D(n) for this pattern grows by an integer factor of 4 for an increment in n of 5. In fact it can be written explicitly as

d_D(n) = (3/4^{4/5})^{mod(−n,5)} 4^{n/5},

where the coefficient of 4^{n/5} equals (3/4^{4/5})^0 = 1, (3/4^{4/5})^4 = 0.959164, (3/4^{4/5})^3 = 0.969214, (3/4^{4/5})^2 = 0.979369, and (3/4^{4/5})^1 = 0.989631 for mod(n, 5) = 0, 1, 2, 3, and 4, respectively. As a result, the number z_d, which gives the asymptotic behaviour of the maximum degeneracy d_D, is equal to 4^{1/5}. As can be seen in Figure 5 (c), the degeneracy distribution of the filter 01 has neither the characteristic shape nor the broad-tailed cumulative distribution seen in other filters. The filter pattern 00 already produces more complexity, see Figure 5 (a). The degeneracy distribution and the cumulative distribution already have the shape and complexity seen in longer filters [1]. Curiously, N(1, n) = d_D(n) + i^n + (−i)^n (where i is the imaginary unit), where the last two terms give 0, 2, 0, −2, 0, 2, 0, −2, 0, . . . for n = 3, 4, 5, 6, . . . . This means that z_d = z_a ≈ 1.618.

Table 1. Values of the numbers z_g, z_d, and z_a for different filters. Note that we also included filter patterns consisting of all zeroes. For each filter we also give the relevance per node H[d]/n (in nats) calculated from the degeneracy distribution, and the resolution per node H[y]/n. For the sake of comparison, the standard entropy of inputs of this size is H/n = ln 2 = 0.69315. Finally, we include the number of distinct degeneracies D for each pattern.
Inputs of size n = 36 were used, except for filters 00 and 10, for which n = 34, and 000, for which n = 35. Values of D for these three filters were extrapolated to n = 36 for comparison with other results. The largest degeneracies behave as ≅ z_d^n for large n. The number z_d quickly approaches 2 as the filter pattern length increases. Since N = 2^n, this means that almost all inputs concentrate in a few outputs, and in the limit, in a single output, i.e. almost all outputs are the same and the filter patterns are not informative. For the shortest filter patterns, the value of z_d falls rapidly, while the relevance increases, indicating a transition to informative sampling. On the contrary, z_g, which gives the total number of outputs M(n), increases with decreasing filter length, as shorter filters have more possible outputs. Taken together, these results indicate that the maximally informative sampling for a given family of filters is the shortest pattern having length greater than 1. This behavior is analogous to the transition observed in more complex problems (see for example [10]).
Note that one may also consider filters constructed as logical combinations of more than one pattern. For example, there are 3 kinds of 'OR' filters of size 2 + 2 (in fact, there are (16 − 4)/2 = 6 combinations of different filters, but some are equivalent in terms of degeneracies). All of these OR filters have trivial degeneracies. The filter 01 OR 10 detects when the next digit is different from the current one. Given the output and the value of a single input digit we can completely reconstruct the input, and, since the first input digit has 2 possible values, each output has degeneracy 2. The only degeneracy in the spectrum is 2 and its frequency is 2^{n−1}, so z_d = 1 and z_g = 2. There are no outputs of degeneracy one, N(1, n) = 0. The filter 00 OR 11 detects when the next digit is the same as the current one; the same reasoning as for the filter 01 OR 10 applies here: we can reconstruct the input completely from the output if we know a single digit of the input. Finally, 11 OR 10 (which is the same as 11 OR 01, 00 OR 10 and 00 OR 01) is equivalent to the filter 1 of length 1.

Filtering on graphs
The process described in the previous Section may be generalised to an arbitrary graph as follows. The input consists of the binary state of each node in the graph. We filter for a particular condition on the state of a node and of its immediate neighbours. If the state of the node and its neighbours matches the filter pattern, the output for that node is 1; otherwise it is 0. We consider two examples. Firstly, we set the output to 1 if the selected node has state 1 and all of its neighbours have state 0 (we refer to this filter as the strong rule, or SR). This filter applied on a ring is equivalent to the pattern 010 discussed in the previous Section. Secondly, we apply a less selective filter, outputting 1 if a node is in state 1 and any of its neighbours has state 0 (we call this filter the weak rule, or WR). We illustrate the application of these two filter patterns to a small graph in Figure 1.
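The two rules can be sketched for an arbitrary graph given as an adjacency list (the function name and graph representation are our own choices for illustration):

```python
def apply_rule(x, adj, rule="SR"):
    """Apply the strong (SR) or weak (WR) rule to node states x.

    adj[i] lists the neighbours of node i.
    SR: y_i = 1 iff x_i = 1 and every neighbour of i is 0.
    WR: y_i = 1 iff x_i = 1 and at least one neighbour of i is 0.
    """
    if rule == "SR":
        return [int(x[i] == 1 and all(x[j] == 0 for j in adj[i]))
                for i in range(len(x))]
    return [int(x[i] == 1 and any(x[j] == 0 for j in adj[i]))
            for i in range(len(x))]

# A triangle: each node neighbours the other two.
triangle = [[1, 2], [0, 2], [0, 1]]
print(apply_rule([1, 1, 0], triangle, "SR"))  # -> [0, 0, 0]
print(apply_rule([1, 1, 0], triangle, "WR"))  # -> [1, 1, 0]
```

On a ring, where adj[i] = [i − 1, i + 1] cyclically, the SR rule reduces to the pattern 010 of the previous Section.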
These filters were applied to several families of degree-regular graphs, chosen to have a variety of structures and to vary in the degree of randomness in their construction, while being of comparable size and degree. We considered the following families of graphs: Small world graphs, created by placing all nodes in a ring and adding shortcuts between nodes to reach the desired degree, with the locations of the shortcuts chosen either at random (we use the code SW(q) for these graphs, where q is the graph degree) or in a deterministic way (SWB(q)); Random regular graphs (RRG); Tori, which are two-dimensional square lattices with cyclic boundary conditions; and Cages, which are graphs defined by two numbers, the degree q and the shortest cycle length g: a (q,g)-cage is the graph fulfilling these properties while having the smallest possible number n of nodes [11]. For each family of graphs we considered different sizes, up to at least n = 30, and, where possible, degrees from q = 2 up to q = 5. Finally, we investigated the second and third Apollonian networks (Apollonian 2 and 3), which are the only graphs here that are not degree regular. We give some examples of the resulting degeneracy distributions and cumulative degeneracy distributions for the SR filter in Figure 3, for random graphs, and Figure 4, for deterministically generated graphs. Note that the distributions for random graphs correspond to a single realization of the graph. We see that there is a dramatic difference in the distribution for random graphs between degree two and degree three. The degree two random regular graph necessarily consists of one or several closed rings, and the distribution differs little from that shown in Figure 2 (a). For degree three, there is a great deal of randomness in the formation of the graph, and this is reflected in the degeneracy distribution, which becomes much more dense, having a fine structure not observed in deterministic graphs.
For higher degrees, the distribution becomes less broad, and as we will discuss below, this corresponds to a reducing relevance with increasing degree.
We have not included examples of the distributions for the "small world" graphs. The deterministic small world graphs, SWB(q), produce distributions almost indistinguishable from those for other deterministic graphs of the same degree, while the random small world graphs, SW(q), generate degeneracy distributions very similar to those found for random regular graphs. For completeness, we give the degeneracy distributions and cumulative distributions for the same graphs using the WR filter in Figures A1 and A2 in Appendix A. We plot examples of less typical degeneracy distributions in Figure 5. These are the 00 and 10 filters applied on a ring, and the SR filter applied to Apollonian networks (which are not degree regular). In Figure 6 we represent various quantities of interest as a function of graph degree, for the different graph families studied. We see that there is a clear separation in results between the two filters.
The weak filter (WR) detects when a node has state 1 while having at least one immediate neighbour with state 0. This neighbour condition is more easily satisfied the larger the number of neighbours q. Thus, for large q, the number of possible outputs M(n) for the WR filter approaches the number of possible inputs, 2^n. We see in panel (e) that indeed the n-th root of M(n), which tends to z_g for large n, approaches 2 for large q. By the same token, most outputs have a degeneracy of one, so the number of outputs of degeneracy one, N(1, n), also approaches 2^n (z_a approaching 2) for large q [panel (f)], while the largest degeneracy d_D(n) (whose asymptotic behaviour is given by z_d) grows only slowly with n [panel (d)]. The resolution H[y] measures how well the filter distinguishes different inputs, and as we see in panel (b) of Figure 6, and in agreement with the above observations, the resolution for the weak filter is high. The maximum possible value of H[y] is n ln 2, corresponding to a value of H[y]/n = 0.693... in the figure. We see that the resolution is already close to this value at q = 5.
The correct measure of how informative a sample of the observable variables of a complex system is about the underlying system is the relevance [8], H[d]. Such sampling is represented in our problem as the filtering process, and the interactions of the system by the graph structure. The importance of the relevance is confirmed by our results, as shown in Figure 6 (a). A higher relevance is measured in graphs having some randomness in their structure, while deterministic and regular graphs have lower relevance. This is particularly true for the strong filter SR, which produces a significantly higher relevance for random regular graphs (RRG) and rings with random shortcuts (SW), compared with rings with deterministic shortcuts (SWB) and cages. The effect for the WR filter is much less pronounced.
The highest relevance occurs at degree q = 3. The explanation for this is clear. As shown above, and in [1], smaller filters generally produce higher relevance, as there are more outputs than for larger filters, except in the extreme limit of perfect reproduction of the input (maximum resolution). Thus we would expect lower values of q, which correspond to smaller SR filters, to have higher relevance. Meanwhile, opposing this trend, graphs of degree q = 2 are necessarily either rings or sets of rings, which thus have a (nearly) deterministic structure and suffer a penalty in relevance. Notice also the similarity of the degeneracy distributions for q = 2 in Figures 3 and A1. As can be seen in the figure, the reduction in relevance in moving from q = 3 to q = 2 due to this regularity outweighs the expected increase due to the filter being smaller. To put it another way, the maximum relevance occurs at the smallest value of q for which the graph is non-deterministic. This echoes our finding for filters on rings, for which the maximum relevance is found for the shortest filter which doesn't trivially reproduce the input [1]. We show the degeneracy distribution for q = 3 for a deterministic graph in Figure 4, and for a random graph in Figure 3.
For the SR filter, in contrast to the weak filter, there is significant degeneracy of the outputs. The number of outputs is significantly less than the number of inputs, as is the number of outputs with degeneracy one. Similarly, the resolution is small for the SR filter, for all graph families, and decreases with q. The largest degeneracy, d_D, on the other hand, does become very large. In the limit of large q, a large fraction of possible inputs give the same single output (all zeroes). In Figure 6 (c), the behaviour of the number of degeneracies, D(n), noticeably mirrors that of the relevance, H[d]. Note that data points for random graphs are averaged over several realisations of the graph.
In Table 2 we list the key degeneracy distribution statistics for the SR filter, for all families of graphs studied. Corresponding results for the WR filter may be found in Table A1. In addition to presenting the data highlighted in Figure 6 in quantitative form, these tables demonstrate the size effects, with exponentially rapid convergence to the infinite-n limit. In this work, we are mainly interested in regular graphs (graphs in which all nodes have the same degree), because we can better isolate the effects of varying the graph's degree. Nevertheless, for the sake of completeness, we also present results for a few examples of non-regular graphs, namely Apollonian networks. In Tables 2 and A1, each group of rows delimited by horizontal lines represents a different class of graphs. The four classes at the top of the tables, namely Apollonian networks, cage graphs, square lattices with periodic boundary conditions (torus), and rings with deterministic shortcuts, are deterministic graphs, while the two remaining classes represent random models, namely random regular graphs and rings with random shortcuts. The numbers presented for the random models result from averaging over 10 realizations sampled uniformly at random.
We include results for graphs of several sizes for each type of graph. This allows one to see the convergence of values with increasing n. Within the set of consecutive rows of each class, the graphs are ordered by ascending degree, then by ascending number of nodes. The exception to this organization is the first two rows, which are for the non-regular Apollonian networks. All of these numbers, as well as the number of degeneracies D, for n = 30, are plotted in Figure 6.

Table 2. Important values for the degeneracy distribution resulting from applying the strong rule (SR) filter to various graphs. The numbers M(n)^{1/n}, d_D(n)^{1/n} and N(1, n)^{1/n} approximate z_g, z_d and z_a, respectively. We also give the relevance per node H[d]/n and the resolution per node H[y]/n. Numbers for RRG(q) and SW(q) were obtained by averaging over 10 realizations.

For fully connected graphs, both the strong and the weak rules produce trivial output and degeneracy distributions. Using the strong rule, for an output node y_i to be 1, we must have x_i = 1 and all other inputs x_{j≠i} = 0. So, when there is a 1 in the output string, we have y_i = x_i. There are n of these outputs, and their degeneracy is 1. Since there can be no more than a single 1 in the output string, the only other possible output is a string of n zeroes, which has degeneracy 2^n − n. In this case there are only two degeneracies in the degeneracy distribution, d_1 = 1 and d_2 = 2^n − n, and their frequencies are N(d_1, n) = n and N(d_2, n) = 1, respectively.
On a fully connected graph under the weak rule, for an output node y_i to be 1, it is enough to have x_i = 1 and just one other input x_{j≠i} = 0. Therefore, when one or more of the inputs is 0, the output is equal to the input. The only situation in which the output does not match the input is an input string of all 1's, in which case the output is a string of 0's. The weak rule also produces only two degeneracies, d_1 = 1 and d_2 = 2, with frequencies N(d_1, n) = 2^n − 2 and N(d_2, n) = 1.
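These counting arguments are easy to check exhaustively. The sketch below (the helper names are ours) verifies both degeneracy spectra for a small complete graph:

```python
from collections import Counter
from itertools import product

def filter_graph(x, adj, strong):
    """SR (strong=True) or WR (strong=False) output for states x on a graph."""
    cond = all if strong else any
    return tuple(int(x[i] == 1 and cond(x[j] == 0 for j in adj[i]))
                 for i in range(len(x)))

n = 5
complete = [[j for j in range(n) if j != i] for i in range(n)]

sr = Counter(filter_graph(x, complete, True) for x in product((0, 1), repeat=n))
wr = Counter(filter_graph(x, complete, False) for x in product((0, 1), repeat=n))

# SR: n outputs of degeneracy 1 plus one output of degeneracy 2^n - n.
assert sorted(sr.values()) == [1] * n + [2 ** n - n]
# WR: 2^n - 2 outputs of degeneracy 1 plus one output of degeneracy 2.
assert sorted(wr.values()) == [1] * (2 ** n - 2) + [2]
```

The same brute-force check can be applied to any of the other graph families for small n.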
It is worth noticing the relation with the class of cage graphs, which we have studied here: (q, 3)-cage graphs are fully connected graphs with q + 1 nodes, while (q, 4)-cages are bipartite graphs with two fully connected layers of q nodes each. Bipartite graphs with two fully connected layers of the same size also result in trivial degeneracy distributions under both the strong and weak rules. With the strong rule applied to such a bipartite graph, for an output y_i to be 1 we must have x_i = 1 and all inputs x_j in the opposite layer equal to 0. Conversely, when any input of one of the layers is 1, all the outputs of the other layer are 0. So, when the input digits of one of the layers are all equal to 0, the outputs equal the inputs, y_i = x_i, and when there are 1's in both layers of the input, the output is all 0's. In this case the degeneracy distribution also contains just two degeneracies, d_1 = 1 and d_2 = 2^n − 2^{n/2+1} + 2, with frequencies N(d_1, n) = 2^{n/2+1} − 2 and N(d_2, n) = 1, respectively (notice there are n/2 nodes in each layer, and that the all-zero input also produces the all-zero output). With the weak rule applied to symmetrical fully connected bipartite graphs, for an output y_i to be 1 it is enough to have x_i = 1 and just one 0 in the opposite layer. Therefore, all inputs with at least one 0 in each layer produce an output y_i = x_i. All inputs with at least one 0 in layer α and only 1's in layer β produce an output consisting of all 0's in layer α and all 1's in layer β. Finally, if the input contains no 0's in either layer, the output is y_i = 0 for all i. Therefore, we have d_1 = 1, d_2 = 2, and d_3 = 2^{n/2} − 1, with frequencies N(d_1, n) = 2^n − 2^{n/2+1}, N(d_2, n) = 1, and N(d_3, n) = 2, respectively (the two outputs of degeneracy d_3 are those with all 1's in one layer and all 0's in the other).
From the trivial degeneracy distributions of these examples, i.e., fully connected and bipartite fully connected graphs, we see that the entropies approach trivial limits for large system sizes. Namely, for the strong rule, using Eqs. (1) and (2) for the output and degeneracy entropies, respectively, we see that in both types of graphs H[y] and H[d] both approach 0, since the distribution is dominated by a single degeneracy d ≅ 2^n with N(d, n) = 1. With the weak rule, the entropy H[y]/n approaches ln 2 = 0.693. . . and H[d] approaches 0. In general, we expect the entropies to approach these limits as we increase the degree of the graphs generated by any model. Interestingly, this effect is already quite visible in Tables 2 and A1, when we compare the values of the entropy for different degrees within each class of graphs, even for degrees up to only 5.

Discussion
In Ref. [1] we introduced a simple filtering problem which produces a rich and complex distribution of output degeneracies. The input is a cyclic sequence of zeroes and ones (a ring), and the process outputs a one in any position where a particular short pattern occurs, and a zero otherwise. The tractability of the problem means that we are able to give the complete degeneracy distribution, for the set of all possible inputs, up to relatively large system sizes.
In this paper, we have extended this problem to consider general graphs. The input is a digit 1 or 0 assigned to each node of the graph, and the output for each node is 1 if the state of the node and those of its immediate neighbours match a given filter pattern, and 0 otherwise. We demonstrate this process by calculating the full degeneracy distributions for various degree-regular graphs with 30 or more nodes, using two example filter patterns. The weak (WR) pattern registers a 1 if the corresponding node has state 1 and at least one of its neighbours has state 0. The strong (SR) pattern only registers 1 if the node is in state 1 and all of its neighbours are in state 0. We found degeneracy distributions having similar form and features to those seen in the simpler problem of filtering on a ring. We showed that three key features of the degeneracy distribution, the largest degeneracy d_D(n), the number of distinct outputs M(n) and the number of outputs having degeneracy one, N(1, n), behave as z_d^n, z_g^n and z_a^n, respectively, where the three numbers z_d, z_g and z_a take values from 1 to 2 depending on the graph and the filter. We find precise values for these three numbers for all the graphs studied.
The two filter examples used give quite different results, and have different behaviour with respect to graph degree. The key results are summarised by our main figure, Figure 6. The weak rule filter, WR, is only weakly sensitive to the neighbourhood of a node, and hence to the structure of the graph. For large degree, it almost always produces an output matching the input. Thus the WR filter produces large values of the output entropy, called the resolution, and small values of the degeneracy entropy, the relevance.
The strong rule filter, SR, on the other hand, imposes a condition on all the neighbours of the node where the filter is applied. This produces a much larger relevance (which is a measure of the informativeness of the filtering process) in random graphs, but much lower resolution, as the number of unique outputs is restricted. The relevance is largest for the smallest graph degree greater than two. Deterministically constructed graphs do not demonstrate the same peak in relevance, underlining the importance of this measure for detecting complexity. For larger degree, the condition becomes more restrictive, so the number of outputs is reduced. The resolution decreases with increasing q, but so does the relevance. The reason that the q = 2 graphs do not give the maximum relevance is that these graphs necessarily have a highly predictable structure: all nodes lie in one or at most a few rings. One may observe that the degeneracy distributions and corresponding statistics are very similar for all families of graphs studied when q = 2. The fact that results are largely determined by degree indicates that it should be possible to write a mean-field theory for the degeneracy distribution.
Similar complexity is observed in various complex systems, particularly with regard to information processing. In such systems, degeneracy distributions have been shown to be an important observable of the system. The entropy of this distribution, called the relevance, was shown [8] to be the relevant measure of complexity, and we showed that our simple problem reproduces many of the important qualitative phenomena observed in such systems. The filtering problem is therefore a highly tractable problem illuminating some of the key features of information processing in more complex systems. The extension of this problem to arbitrary graphs makes the interactions between nodes more complex, and the analogy with the complex interactions of real complex systems more explicit.

Calculation of degeneracy distributions
The distributions shown in Figures 2-5, A1 and A2, and the numbers presented in Tables 1, 2, and A1 and plotted in Figure 6, were experimentally obtained by considering all $2^n$ configurations of the $n$ input binary variables $x_i$ individually. For a specified filter, or rule, we obtain the output variables $y_i$ corresponding to each input. From the frequency with which each output configuration appears, we build the degeneracy distribution.
For simplicity of implementation of the computational experiments, we apply a basic indexing system to the output configurations. We start by initializing an array of $2^n$ positions populated with zeros, representing the frequency of observation of each output. Then, as we systematically run through all the possible inputs and calculate the corresponding outputs $\{y_i\}$, we increment by 1 the value at position $\sum_i y_i 2^i$ of the array, where $i = 0, 1, \ldots, n-1$. At the end of this process, each position of the array contains the frequency of its corresponding output. This method is memory intensive, and in some cases uses much more memory than strictly necessary, since most positions of the frequency array remain unchanged after initialization (corresponding to non-realizable, or unobserved, outputs). It is relatively simple to devise methods that require less memory; however, they would necessarily demand more CPU resources and have a larger time complexity. Note that our method's time complexity is linear in the number of input configurations, $2^n$. In the case of rings, a much more efficient algorithm may be used, as described in Ref. [1].
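As a concrete illustration, the indexing scheme above can be sketched in Python. Since the graph rules SR and WR are defined in the main text, this sketch instead uses the ring filter of the following section ($y_i = 1$ wherever the filter pattern, e.g. 010, occurs in the input, read cyclically); the function name is ours.

```python
from itertools import product

def degeneracy_distribution(n, pattern="010"):
    """Brute-force degeneracy count on a ring of n binary inputs:
    y_i = 1 iff `pattern` starts at position i (cyclically).
    Outputs are tallied in an array indexed by sum_i y_i 2^i."""
    w = len(pattern)
    pat = tuple(int(c) for c in pattern)
    freq = [0] * (2 ** n)  # frequency array, one slot per possible output
    for x in product((0, 1), repeat=n):
        y = [1 if tuple(x[(i + j) % n] for j in range(w)) == pat else 0
             for i in range(n)]
        freq[sum(y_i << i for i, y_i in enumerate(y))] += 1
    return freq

freq = degeneracy_distribution(10)
print(sum(freq))                      # all 2**10 inputs are accounted for
print(sum(1 for f in freq if f > 0))  # number of realizable outputs
```

Index 0 of `freq` corresponds to the all-zero output, which, as shown below, carries the largest degeneracy.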

Asymptotics of the degeneracy distribution on rings
Here we show how the asymptotic behaviour of the degeneracy distribution may be obtained. We focus on the particular family of filter patterns consisting of a chain of 1s with a 0 at each end. The shortest such pattern is 010. Each member of this set may be indexed by the length of the filter, w ≥ 3. Each output consists of isolated ones separated by strings of zeroes of various lengths. The filter pattern length w determines the minimum number of zeroes, w − 2, between each one.
For $w = 3$, chains of three or fewer zeroes in the output can only be produced in one way. Thus outputs containing only such chains of zeroes have degeneracy 1. Possible such output sequences can be built up out of three kinds of building blocks, 01, 001, and 0001, put together in a ring of length $n$. We can thus find the number of outputs of degeneracy 1, $N(1, n)$, by counting all possible ways of building a ring of length $n$ out of these blocks. We can do this recursively. For every configuration of length $n-2$, we can obtain a valid configuration of length $n$ by inserting the block 01 to the right, say, of a particular position $i$ in the ring. This gives all the configurations of length $n$ with the block 01 to the right of $i$. Doing the same with configurations of length $n-3$ and blocks 001, we get all configurations with a block 001 to the right of the block of $i$. Finally, repeating the procedure for configurations of length $n-4$ and blocks 0001 gives all configurations with a block 0001 to the right of the block of $i$. Since every block must be 01, 001, or 0001, the union of these three sets is the full set of configurations of degeneracy 1 in rings of $n$ digits. Thus, we can write
$$N(1, n) = N(1, n-2) + N(1, n-3) + N(1, n-4). \qquad (4)$$
Starting from the first few values we could build up the sequence and find $N(1, n)$ for any $n$. However, it is not necessary to iterate through all values of $n$.
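The recursion can be checked directly by enumeration. The Python sketch below (our naming) counts rings built from the blocks 01, 001 and 0001, i.e. rings with isolated ones whose cyclic zero-runs have length at most 3, and verifies that the counts satisfy $N(1,n) = N(1,n-2) + N(1,n-3) + N(1,n-4)$:

```python
from itertools import product

def count_block_rings(n):
    """Count binary rings of length n that decompose into blocks
    01, 001, 0001: isolated ones whose cyclic zero-runs all have
    length 1, 2 or 3 (outputs of degeneracy 1 for w = 3)."""
    count = 0
    for s in product((0, 1), repeat=n):
        if 1 not in s:
            continue  # an all-zero ring is not a concatenation of blocks
        if any(s[i] == 1 and s[(i + 1) % n] == 1 for i in range(n)):
            continue  # ones must be isolated
        # rotate so the ring starts just after a 1, then read off zero-runs
        k = s.index(1)
        r = s[k + 1:] + s[:k + 1]
        runs = [len(b) for b in "".join(map(str, r)).split("1") if b]
        if all(1 <= length <= 3 for length in runs):
            count += 1
    return count

N1 = {n: count_block_rings(n) for n in range(2, 17)}
assert all(N1[n] == N1[n - 2] + N1[n - 3] + N1[n - 4] for n in range(6, 17))
```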
The explicit solution of this linear difference equation (4) can be written in terms of the roots, $z_i$, of the characteristic equation $z^4 = z^2 + z + 1$:
$$N(1, n) = \sum_{i=1}^{4} z_i^n, \qquad (6)$$
where the coefficients of the powers of the roots $z_i$, all equal to one, are found from the initial condition, Eq. (5). The root $z_1 \equiv z_a = 1.46557\ldots$ determines the large-$n$ asymptotics of $N(1, n)$.
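The quoted value of $z_a$ is easy to reproduce numerically; a minimal bisection sketch for the unique root of $z^4 - z^2 - z - 1$ in $(1, 2)$ (no claims about the remaining roots):

```python
def largest_real_root(f, lo=1.0, hi=2.0, iters=100):
    """Bisection for the root of f in (lo, hi), assuming f(lo) < 0 < f(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# characteristic equation z^4 = z^2 + z + 1, rewritten as f(z) = 0
z_a = largest_real_root(lambda z: z ** 4 - z ** 2 - z - 1)
print(round(z_a, 5))  # ~1.46557
```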
For $w \ge 4$, it becomes possible for there to be chains of ones in the input that are shorter than the chain in the filter pattern. This means that only sequences of $w-2$ or $w-1$ zeroes in the output are not degenerate: any sequence of $w$ or more zeroes in the output can be produced in more than one way. One may therefore extend an output of degeneracy 1 only by inserting blocks of length $w-1$ and $w$. Hence the recursion for $N(1, n)$ becomes
$$N(1, n) = N(1, n-w+1) + N(1, n-w). \qquad (7)$$
The corresponding characteristic equation is
$$z^w = z + 1. \qquad (8)$$
For large $n$, then,
$$N(1, n) \propto z_a^n, \qquad (9)$$
where $z_a$ corresponds to the dominant solution of Eq. (8).
The total number of possible outputs may be derived in a similar way. The presence of a 1 at a given position in the output corresponds uniquely to $w$ fixed digits at the same position in the input. Any degeneracy therefore arises in the parts of the input corresponding to strings of zeroes in the output. The total number of possible outputs, $M(n)$, is then the number of ways of arranging isolated ones in a chain of length $n$, subject to this constraint. For every output of length $n-1$, we can create an output of length $n$ by inserting an additional 0. The same is not true for the digit 1, however: any 1 in the output must be accompanied by a sequence of $w-2$ zeroes. We can account for this condition precisely by inserting the sequence $10^{w-2}$ into any valid output of length $n-(w-1)$ in a position immediately following a sequence of $w-2$ zeroes (at least one such sequence must exist). Thus $M(n) = M(n-1) + M(n-w+1)$, with initial conditions $M(n=w) = 2$, $M(n<w) = 1$. The elements of the sequence may be written in terms of the roots of the characteristic equation [12-14]
$$z^{w-1} = z^{w-2} + 1. \qquad (10)$$
Then $z_g$ corresponds to the largest root of this equation, so that $M(n) \propto z_g^n$ for large $n$. We list values for various filter lengths (as well as for some other filter patterns) in Table 1.
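As a sketch of how $z_g$ emerges from the recursion, the Python fragment below iterates $M(n) = M(n-1) + M(n-w+1)$ with the stated initial conditions and checks that the ratio of successive terms approaches the largest root of $z^{w-1} = z^{w-2} + 1$; for $w = 3$ this is the Fibonacci recursion, so the ratio tends to the golden ratio:

```python
def M(n_max, w):
    """Number of possible outputs M(n) from the recursion
    M(n) = M(n-1) + M(n-w+1), with M(n) = 1 for n < w and M(w) = 2."""
    vals = {n: 1 for n in range(1, w)}
    vals[w] = 2
    for n in range(w + 1, n_max + 1):
        vals[n] = vals[n - 1] + vals[n - w + 1]
    return vals

vals = M(60, w=3)  # w = 3: M(n) = M(n-1) + M(n-2), the Fibonacci recursion
ratio = vals[60] / vals[59]
print(ratio)  # approaches z_g, the largest root of z^(w-1) = z^(w-2) + 1
```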
The entire degeneracy distribution may be built up by considering chains of zeroes of different lengths in the output, and the number of different possible corresponding sections of the input. Let an output with $m \ge 1$ ones contain $m$ strings of zeroes with lengths $\ell_1, \ell_2, \ldots, \ell_m$. Then the degeneracy of this output equals
$$d = \prod_{j=1}^{m} \tilde{d}(\ell_j). \qquad (11)$$
Here $\tilde{d}(\ell)$ is the number of input strings of length $\ell$, having the first and last digits 0, that generate an output string of zeroes. This number plays an important role in our problem, similar to prime numbers in number theory, so we call the $\tilde{d}(\ell)$ prime degeneracies. Suppose that the output contains $\mu_\ell$ strings of zeroes of length $\ell$, $\ell = w-2, w-1, w, \ldots$, where
$$\sum_{\ell} \mu_\ell = m. \qquad (12)$$
Then Eq. (11) may be rewritten as
$$d = \prod_{\ell} \tilde{d}(\ell)^{\mu_\ell} \qquad (13)$$
for $m \ge 1$.
The prime degeneracies $\tilde{d}(\ell)$ can be obtained recursively by taking into account three points: (i) Relevant input configurations of length $\ell$ are obtained by inserting 0 or 1 into each relevant configuration of length $\ell-1$ between the first and second positions of the sequence. (Recall that the first and last positions of the input sequence are fixed to 0.) (ii) Input strings of length $\ell$ beginning and/or ending with $01^{w-2}0$ are irrelevant, and so they should be removed from the set generated at the previous step. These configurations can be obtained by inserting the $w-1$ digits $1^{w-2}0$ into each relevant input string of length $\ell-w+1$ between its first and second positions.
(iii) Finally, there exist input strings, compatible with the output string of zeroes, that cannot be obtained by inserting a single digit into relevant input strings of length $\ell-1$ between their first and second positions. These are the input strings of length $\ell$ beginning with $01^{w-1}0$ (i.e., a string of ones one digit longer than in the filter). These inputs can be obtained by inserting $1^{w-1}0$ into each relevant input string of length $\ell-w$ between their first and second positions.
Following these rules, the degeneracy of a string of zeroes at the output, the prime degeneracy $\tilde{d}(\ell)$, can be written recursively as a linear difference equation:
$$\tilde{d}(\ell) = 2\tilde{d}(\ell-1) - \tilde{d}(\ell-w+1) + \tilde{d}(\ell-w), \qquad (14)$$
with the initial condition $\tilde{d}(1) = \tilde{d}(2) = 1$, $\tilde{d}(\ell) = 2^{\ell-2}$ for $3 \le \ell < w$, and $\tilde{d}(w) = 2^{w-2} - 1$. The solution of Eq. (14) may be explicitly expressed in terms of the complex roots of the characteristic equation
$$z^w = 2z^{w-1} - z + 1. \qquad (15)$$
The largest real root of Eq. (15), $z_1$, say, dominates for large $\ell$, and we identify it as $z_d$:
$$\tilde{d}(\ell) \propto z_d^\ell. \qquad (16)$$

The case of the periodic output of length $n$ with all digits 0 has to be considered separately. Consider one digit of the input, at an arbitrary position. The number of input configurations where this digit is 0 and the resulting output has only zeroes is given by $\tilde{d}(n+1)$, because the periodicity of the input means that this digit 0 plays the role of both the first and last digit of a string of $n+1$ digits. If the digit is 1, then the number of input configurations equals
$$1 + \sum_{i \neq w-2} i\,\tilde{d}(n-i), \qquad (17)$$
where the sum over $i$ accounts for the configurations where the digit is in a group of $i$ consecutive ones whose length is not $w-2$, plus one configuration with all input digits equal to 1. Thus the degeneracy of the output with all zeroes is given by
$$d_D(n) = \tilde{d}(n+1) + 1 + \sum_{i \neq w-2} i\,\tilde{d}(n-i), \qquad (18)$$
which is the largest possible degeneracy of an output of a given length. Applying the recursion relation for the prime degeneracies $\tilde{d}$, Eq. (14), to the terms on the right-hand side of Eq. (18), we find that the largest degeneracy $d_D(n)$ satisfies the same difference equation as Eq. (14), though with the different initial condition $d_D(n) = 2^n$ for $n < w$ and $d_D(w) = 2^w - w$. For large $n$, the solution is dominated by a single term,
$$d_D(n) \propto z_d^n. \qquad (19)$$

Author Contributions: All authors contributed equally and substantially to all parts and aspects of the work.
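The recursion (14) and its initial condition can be cross-checked by brute force. The sketch below assumes (our reading of the construction in points (i)-(iii)) that $\tilde d(\ell)$ counts the strings of length $\ell$ with first and last digits 0 that contain no occurrence of the filter pattern $01^{w-2}0$; the function names are ours.

```python
from itertools import product

def prime_degeneracy(l_max, w):
    """tilde_d(l) from the linear recursion
    d(l) = 2 d(l-1) - d(l-w+1) + d(l-w), with the stated initial values."""
    d = {1: 1, 2: 1}
    for l in range(3, w):
        d[l] = 2 ** (l - 2)
    d[w] = 2 ** (w - 2) - 1
    for l in range(w + 1, l_max + 1):
        d[l] = 2 * d[l - 1] - d[l - w + 1] + d[l - w]
    return d

def brute_force(l_max, w):
    """Direct count: strings of length l with first and last digit 0
    containing no occurrence of the filter pattern 0 1^(w-2) 0."""
    pat = "0" + "1" * (w - 2) + "0"
    counts = {}
    for l in range(2, l_max + 1):
        counts[l] = sum(pat not in "0" + "".join(mid) + "0"
                        for mid in product("01", repeat=l - 2))
    return counts

for w in (3, 4, 5):
    rec, brute = prime_degeneracy(14, w), brute_force(14, w)
    assert all(rec[l] == brute[l] for l in range(2, 15))
```

For $w = 3$ the counts grow with ratio $\tilde d(\ell+1)/\tilde d(\ell)$ approaching the largest real root of $z^3 = 2z^2 - z + 1$, as in Eq. (16).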

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A Further results for the weak rule filter
Here we plot degeneracy distributions and cumulative distributions, and tabulate measures, for the weak rule filter, WR, for comparison with those given for the strong rule, SR, in the main body of the text above.

Figure A2. Degeneracy distributions and cumulative degeneracy distributions for outputs of the WR filter on random regular graphs of degree 2 (a,b), 3 (c,d), and 4 (e,f).

Table A1. Important values for the degeneracy distribution resulting from applying the weak rule (WR) filter to various graphs. The numbers $M(n)^{1/n}$, $d_D(n)^{1/n}$ and $N(1, n)^{1/n}$ approximate $z_g$, $z_d$ and $z_a$, respectively. We also give the relevance per node, $H[d]/n$, and the resolution per node, $H[y]/n$. Numbers for RRG(q) and SW(q) were obtained by averaging over 10 random realizations.