Article

Symmetry in Complex Networks

Angel Garrido
Department of Fundamental Mathematics, Faculty of Sciences, UNED, Senda del Rey 9, 28040 Madrid, Spain
Symmetry 2011, 3(1), 1-15; https://doi.org/10.3390/sym3010001
Submission received: 16 November 2010 / Revised: 4 January 2011 / Accepted: 7 January 2011 / Published: 10 January 2011
(This article belongs to the Special Issue Symmetry Measures on Complex Networks)

Abstract

In this paper, we analyze several interrelated concepts about graphs, such as their degree, entropy, and symmetry/asymmetry levels. These concepts prove useful in the study of different types of systems, and particularly in the analysis of Complex Networks. A system can be defined as any set of components functioning together as a whole. A systemic point of view allows us to isolate a part of the world, so that we can focus on those aspects that interact more closely than others. Network Science analyzes the interconnections among diverse networks from different domains: physics, engineering, biology, semantics, and so on. Current developments in the quantitative analysis of Complex Networks, based on graph theory, have been rapidly translated to studies of brain network organization. The brain's systems have complex network features, such as small-world topology, highly connected hubs and modularity. These networks are not random. The topology of many different networks shows striking similarities, such as the scale-free structure, with the degree distribution following a Power Law. How can very different systems have the same underlying topological features? Modeling and characterizing these networks, and looking for their governing laws, are the current lines of research. So, we dedicate this Special Issue paper to showing measures of symmetry in Complex Networks, and to highlighting their close relation with measures of information and entropy.


1. Some Previous Concepts

A graph [1] may be defined as a pair, G = (V, E), where V = V(G) is the node set and E = E(G) is the edge set, i.e., a set of 2-element subsets of V.
Given an edge {i, j} ∈ E, we say that the nodes i and j are adjacent, and we denote this by i ~ j.
The neighborhood of i will be:
N(i) = {j ∈ V : j ~ i}
And the degree of i can be expressed as:
deg(i) = d(i) = card(N(i))
A graph, G, is finite, if the set of its nodes, V(G), is finite.
And it is locally finite, if all of its nodes have finite degrees.
Two very important results may be considered now:
Handshaking Lemma (or Theorem).
In any graph, the sum of the degrees of all nodes (or “total degree”)
is equal to twice the number of edges.
Degree Theorem.
In any graph there is an even number of nodes with an odd degree.
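To make both results concrete, here is a minimal sketch in plain Python (the four-node graph is a hypothetical example; no external libraries are needed):

```python
# A small hypothetical graph stored as adjacency sets, used to check the
# Handshaking Lemma and the Degree Theorem stated above.

from collections import defaultdict

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]   # example graph on 4 nodes

neighbors = defaultdict(set)
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)

deg = {v: len(nbrs) for v, nbrs in neighbors.items()}

# Handshaking Lemma: the total degree equals twice the number of edges.
assert sum(deg.values()) == 2 * len(edges)

# Degree Theorem: the number of odd-degree nodes is even.
odd = [v for v, d in deg.items() if d % 2 == 1]
assert len(odd) % 2 == 0
print(deg, odd)
```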
The adjacency matrix is a convenient representation of the interaction between nodes. Several Complex Networks measures can be defined over adjacency matrices; for instance: clustering coefficient (local or global), diameter, average degree of the network, and so on. All of them play a key role in network theory.
The distance between two nodes is defined as the length of the shortest path connecting them.
The diameter of a network is the maximal distance between any pair of its nodes.
The average path length is the average of the distance over all pairs of nodes. Thus, it determines the “size” of the network.
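These distance measures can be computed by breadth-first search. A minimal sketch, again on a hypothetical four-node graph:

```python
# Shortest-path distances via breadth-first search, then the diameter and
# the average path length defined above.

from collections import deque

def bfs_distances(neighbors, source):
    """Shortest-path distances (in edges) from source to all reachable nodes."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
all_dists = {u: bfs_distances(neighbors, u) for u in neighbors}

diameter = max(max(d.values()) for d in all_dists.values())
pair_dists = [d[v] for u, d in all_dists.items() for v in d if v != u]
avg_path_length = sum(pair_dists) / len(pair_dists)
print(diameter, avg_path_length)   # 2, and the mean over ordered pairs
```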
An automorphism of a graph, G, is any bijection:
a: V(G) → V(G)
that maps edges onto edges, and non-edges onto non-edges.
The set of all automorphisms of a graph, G, is denoted by Aut (G). It is the automorphism group of G. We will come back later to this concept.
Succinctly, the more symmetry a graph has, the larger its automorphism group will be, and vice versa.
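For very small graphs, Aut(G) can be enumerated by brute force, checking which node permutations map edges onto edges. A sketch, using a hypothetical path graph:

```python
# Brute-force enumeration of the automorphism group: keep the permutations
# that map the edge set onto itself (for a bijection, this also forces
# non-edges onto non-edges). Feasible only for very small graphs.

from itertools import permutations

nodes = [1, 2, 3, 4]
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}   # path 1-2-3-4

def is_automorphism(perm):
    mapping = dict(zip(nodes, perm))
    return {frozenset((mapping[i], mapping[j])) for i, j in edges} == edges

aut = [p for p in permutations(nodes) if is_automorphism(p)]
print(len(aut))   # 2: the identity and the reversal 1<->4, 2<->3
```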

2. Symmetry and Networks

Pierre Curie stated [2]:
It is asymmetry that creates a phenomenon.
Paul Renaud generalized Curie’s idea and stated [3]:
If an ensemble of causes is invariant with respect to any transformation,
the ensemble of their effects is invariant with respect to the same transformation.
Joe Rosen has stated the Symmetry Principle as [4]:
The symmetry group of the cause is a subgroup of the symmetry group of the effect.
Or less precisely:
The effect is at least as symmetric as the cause (and might be more symmetric).
Also from Joe Rosen is the quote:
Recognized causal relations in nature are expressed as laws.
Laws impose equivalence relations in the state sets of causes and of effects.
So,
Equivalent states of a cause are mapped to (i.e., correlated with)
equivalent states of its effect.
This is the Equivalence Principle.
Somewhat less precisely, this principle may be expressed as:
Equivalent causes are associated with equivalent effects.
Concerning the Equivalence Principle for Processes on isolated physical systems, we can say that:
Equivalent initial states must evolve into equivalent states
(while inequivalent states may evolve into equivalent states).
And the General Symmetry Evolution Principle:
The “initial” symmetry group is a subgroup of the “final” symmetry group.
This assertion can also be stated as:
For an isolated physical system the degree of symmetry
cannot decrease as the system evolves; instead,
it either remains constant or increases.
Finally, we have the Special Symmetry Evolution Principle:
As an isolated system evolves, the populations of the equivalence
classes of the sequence of states through which it passes cannot
decrease, but either remain constant or increase.
Equivalently,
The degree of symmetry of the state of an isolated system
cannot decrease during evolution; instead, it either
remains constant or increases.
As a further implication, Joe Rosen proposes this general theorem:
The degree of symmetry of a macrostate of stable equilibrium must be relatively high.
In the following, we will draw on the concepts and intuitions by Prof. Rosen, summarized in his paper “The Symmetry Principle” [4].
According to the traditional viewpoint, higher symmetry is related to higher order, less entropy and less stability.
In Prigogine's theory, symmetry has been regarded as order, or as a reduction of entropy. But this idea is incorrect: Rosen's Symmetry Principle points in exactly the opposite direction.
Shu-Kun Lin [5] has proved both the Symmetry Principle, based on a continuous higher-similarity/higher-entropy relation, and Rosen's Symmetry Principle, based on a higher-symmetry/higher-stability relation. He proposed that entropy is the degree of symmetry, and information is the degree of asymmetry, of a structure.
According to Shu-Kun Lin [5], "symmetry is in principle ugly, because it is related to entropy and information loss".
With the motto:
Ugly Symmetry-Beautiful Diversity
This contradicts the more usual and commonplace vision of symmetry as a concept equivalent to desirable beauty, proportion and harmony. This can be surprising, but Shu-Kun Lin's arguments are really strong and convincing: a symmetric structure is stable, but not necessarily beautiful. All spontaneous processes lead to the highest symmetry, which is equilibrium, or a state of "death".
“Life is beautiful but full of asymmetry”
Lin concludes [6] that
Beauty = Stability + Information
Intuitively, symmetry, like perfection or beauty, is precious up to a certain level; beyond that level (apart from being nonexistent in the real world) it would mean an end to the human condition, which is, by nature and fortunately, imperfect.

3. Symmetry as Invariance

Symmetry [1,6,7] in a system means invariance of its elements under a group of transformations; i.e., the mathematical definition of the symmetry of a graph is the set of transformations that leave the properties of the graph unchanged. When we focus on network structures, it means invariance of the adjacency of nodes under permutations of the node set [8,11].
A graph isomorphism is an equivalence relation (an "equality") on the set of graphs. Therefore, it partitions the class of all graphs into equivalence classes. The underlying idea of isomorphism is that some objects have the same structure if we omit some individual characteristics of their components. A set of graphs isomorphic to each other is called an isomorphism class of graphs [1,8,10].
An automorphism of a graph, G = (V, E), is an isomorphism from G onto itself. The family of all automorphisms of a graph G is a permutation group on V(G), whose inner operation is the composition of permutations. It is the well-known Automorphism Group of G, denoted by Aut(G). Conversely, every group may be represented as the automorphism group of some connected graph.
The automorphism group is an algebraic invariant of a graph. So, we can say that an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge-node connectivity. Such an automorphic tool may be applied both on Directed Graphs (DGs), and on Undirected Graphs (UGs), or Mixed Graphs.
Graphs are discrete mathematical constructs. They are also topological objects, not geometrical entities, and they may exhibit symmetries under transformations that are not node permutations: e.g., scale invariance on fractals [31].
Another interesting concept in mathematics, the word "genus", has different, but strongly related, meanings. In Topology, its meaning depends on whether we consider orientable or non-orientable surfaces. In the case of connected and orientable surfaces, it is an integer representing the maximum number of cuttings, along closed simple curves, that leave the resultant manifold connected. Visually, we can imagine it as the number of "handles" on the manifold. Usually, it is denoted by the letter g.
It is also definable through the Euler number, or Euler Characteristic, denoted χ. For closed surfaces, the relationship is χ = 2 − 2g. When the surface has b boundary components, this becomes χ = 2 − 2g − b, which obviously generalizes the preceding equation. For example, a sphere, an annulus, or a disc have genus g = 0; a torus, instead, has g = 1.
In the case of non-orientable surfaces, the genus of a closed and connected surface will be a positive integer, representing the number of cross-caps attached to a sphere.
Recall that a cross-cap is a two-dimensional surface that is topologically equivalent to a Möbius strip.
As in the preceding analysis, it can be expressed in terms of the Euler characteristic, by χ = 2 − k, where k is the non-orientable genus. For example, a projective plane has non-orientable genus k = 1, and a Klein bottle has non-orientable genus k = 2.
Turning to graphs [1], the corresponding genus will be the minimal integer, n, such that the graph can be drawn without crossing itself on a sphere with n handles. So, a planar graph has genus n = 0, because it can be drawn on a sphere without self-crossing.
In the non-orientable case, the genus will be the minimal integer, n, such that the graph can be drawn without crossing itself on a sphere with n cross-caps.
Moving on to topological graph theory, we will define as genus of a group, G, the minimum genus of any of the undirected and connected Cayley graphs for G.
From the viewpoint of Computational Complexity, the problem of “graph genus” is NP-complete. Recall that a problem, L, is NP-complete if it has two properties: It is in the set of NP (nondeterministic polynomial time) problems, i.e., any given solution to L can be verified quickly (in polynomial time); and it is also in the set of NP-hard problems, i.e., any NP problem can be converted into L by a transformation of the inputs in polynomial time.
A graph invariant, or graph property, is a property that depends only on the abstract structure of the graph, not on its representations, such as a particular labeling or drawing of the graph. So, we may define a graph property as any property that is preserved under all possible isomorphisms of the graph. Therefore, it is a property of the graph itself, independent of the representation of the graph.
The semantic difference between an invariant and a property lies in their quantitative or qualitative character. For instance, "the graph has no directed edges" is a property, because it is a qualitative statement; whereas "the number of nodes of degree two in the graph" is an invariant, because it is a quantitative statement.
From a strictly mathematical viewpoint, a graph property can be interpreted as a class of graphs: those that have in common the fulfillment of certain conditions. Hence, a graph property can also be defined as a function whose domain is the set of graphs and whose range is the two-valued set {true, false}, the value depending on whether the property is verified or violated by the graph.
A graph property is called hereditary, if it is inherited by its induced subgraphs.
And a graph property will be additive, if it is closed under disjoint union.
For example, the property of a graph being planar is both additive and hereditary. And the property of being connected is neither.
The computation of certain graph invariants is very useful to discriminate whether two graphs are isomorphic or non-isomorphic. For any particular invariant, two graphs with different values cannot be isomorphic. However, two graphs with the same invariant value may or may not be isomorphic.
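A sketch of this screening procedure, using the number of nodes, the number of edges and the sorted degree sequence as invariants (the two graphs are hypothetical examples):

```python
# Invariant-based screening: if any invariant differs, the graphs cannot be
# isomorphic; equal invariants are inconclusive.

def invariants(neighbors):
    degrees = sorted(len(nbrs) for nbrs in neighbors.values())
    n_nodes = len(neighbors)
    n_edges = sum(degrees) // 2          # by the Handshaking Lemma
    return (n_nodes, n_edges, tuple(degrees))

g1 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}   # triangle + isolated node
g2 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}        # path on 4 nodes

if invariants(g1) != invariants(g2):
    print("not isomorphic")              # degree sequences differ here
else:
    print("inconclusive: same invariants, may or may not be isomorphic")
```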
It is possible to prove that every group is the automorphism group of some graph (Frucht's theorem).
If the group is finite, the graph may be taken to be finite.
G. Pólya observed that not every group is the automorphism group of a tree.
Many reasons lie behind the current popularity of Complex Networks [13]; to cite but a few, their generality and flexibility for representing any natural structure, including structures that undergo dynamical changes of topology [11,14,20,22].
Before turning to Complex Networks, it is very convenient to introduce some concepts which are useful in understanding Networks, as measures of their principal characteristics [1,11].
The characteristic path length measures the typical distance from every node to every other node; it is calculated as the median of the shortest paths from each node to every other node. A related measure, the diameter, gives us the maximum possible distance between all pairs of reachable nodes.
Another commonly used value is the Clustering Coefficient, usually denoted C. It is the mean of the clustering indices of all the nodes in the graph. The clustering index tells us how well connected the neighborhood of a node is; that is, it answers the question: how close is the neighborhood of a node to being a clique (i.e., a complete subgraph)? To find C, we look for the neighbors of the corresponding node, and then find the number of existing edges between them. The ratio of the number of existing edges to the number of all possible edges is the clustering index of the node.
If the neighborhood is fully connected, then the clustering coefficient must be equal to one, C = 1. In the opposite situation, a value of C = 0 signifies that the neighborhood is fully disconnected. And any intermediate value is a measure of the graph’s degree of connectedness. Values close to zero mean that there are hardly any connections in the neighborhood.
This measure has been used to summarize features of undirected and unweighted networks in Complexity Science.
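A minimal sketch of the local clustering index and of the mean coefficient C just described, for a hypothetical graph stored as adjacency sets:

```python
# Local clustering index and average clustering coefficient C for an
# undirected graph given as adjacency sets.

def clustering_index(neighbors, v):
    nbrs = neighbors[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # count existing edges among the neighbors of v
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in neighbors[u])
    return links / (k * (k - 1) / 2)     # existing / possible neighbor edges

def clustering_coefficient(neighbors):
    return sum(clustering_index(neighbors, v) for v in neighbors) / len(neighbors)

g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(clustering_coefficient(g))         # indices 1, 1, 1/3, 0 -> C ≈ 0.583
```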
An interesting type of regular network is the complete graph, where each node is connected to all other nodes, i.e., the network is fully connected. Because of this structure, complete networks have the lowest path length (L) and the lowest diameter (D), with L = D = 1, and the highest clustering coefficient, C = 1.
Furthermore, they have the highest possible number of edges, given by
card(E) = n(n − 1)/2 ∼ n²/2

4. Random Graphs

In Random Graphs (RGs), each pair of nodes is connected with probability p. They have a low average path length [8,17], satisfying
L ∼ ln n / ln⟨k⟩ ∼ ln n, for n ≫ 1
Therefore, starting from any node, the whole network may be covered in about L steps, since
n ∼ ⟨k⟩^L
Moreover, Random Graphs possess a low clustering coefficient, when the graph is sparse. Thus,
C = p = ⟨k⟩/n ≪ 1
The reason is that the probability that two neighbors of a given node are themselves connected is precisely equal to p.
The Small-World effect is observed on a network when it has a low average path length:
L ≪ n, for n ≫ 1
Recall [15,21,24,25] the now very famous "six degrees of separation", also called the "small-world phenomenon". The underlying idea is that two arbitrarily selected people are connected, on average, by only about six degrees of separation, or six handshakes. Therefore, the diameter of the corresponding graph is not much larger than six.
The usual example is social connections. So, the Small-World property [11,15] can be interpreted as follows: despite the large size of the corresponding graph, the shortest path between two nodes remains small, as for example on the WWW, or on the Internet.
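As an illustrative check of the estimates above, the following sketch assumes the networkx library is available; the parameter values n = 2000 and ⟨k⟩ = 10 are arbitrary choices:

```python
# Empirical check of L ~ ln n / ln<k> and C = p on an Erdos-Renyi random graph.

import math
import networkx as nx

n, k_mean = 2000, 10
p = k_mean / n                           # so that C = p = <k>/n << 1
G = nx.erdos_renyi_graph(n, p, seed=42)

if nx.is_connected(G):
    L = nx.average_shortest_path_length(G)
    print("L =", round(L, 2),
          " ln n / ln<k> =", round(math.log(n) / math.log(k_mean), 2))
print("C =", round(nx.average_clustering(G), 4), " p =", p)
```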

5. Self-Similarity

Self-similarity of a network [11] indicates that the network is approximately similar to parts of itself, and therefore fractal. In many cases, real networks possess all of these properties, i.e., they are Fractal, Small-World, and Scale-Free.
Fractal dimensions describe the self-similarity of diverse phenomena: images, temporal signals, etc. Such a fractal dimension gives us an indication of how completely a fractal appears to fill space as one zooms down to finer and finer scales. It is thus a statistical measure.
The most important of such measures are Rényi dimension, Hausdorff dimension, and Packing dimension.
A Fuzzy set approach also may produce some very consistent models [26,27,28].

6. Small-World Model

The Watts-Strogatz Small-World Model, proposed in 1998, is a hybrid between a Random Graph and a Regular Lattice [14,20,22]. Small-World models share some common features with Random Graphs: a Poisson or Binomial degree distribution, close to a Uniform distribution; a fixed network size (it does not grow); and each node having approximately the same number of edges.
Therefore, the model is homogeneous in nature. (As for the fractal dimensions mentioned above, the most usual procedures to compute them, because of their ease of implementation, are the correlation dimension and box counting.)
Watts-Strogatz models show the low average path length typical of Random Graphs,
L ∼ ln n, for n ≫ 1
and also the high clustering coefficient typical of Regular Lattices,
C ≈ 0.75, for k ≫ 1
In consequence, WS models have a small-world structure and are well clustered. Random Graphs share the small-world structure, but are poorly clustered. The WS model has a peaked degree distribution, of Poisson type.
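The contrast between the two models can be checked numerically. The sketch below assumes networkx is available, with arbitrary parameter values:

```python
# Compare a Watts-Strogatz graph with a comparable Erdos-Renyi random graph:
# both should show small L, but the WS model stays much more clustered.

import networkx as nx

n, k, p_rewire = 1000, 10, 0.1
ws = nx.watts_strogatz_graph(n, k, p_rewire, seed=1)
rg = nx.erdos_renyi_graph(n, k / n, seed=1)

for name, G in [("WS", ws), ("random", rg)]:
    if nx.is_connected(G):
        print(name,
              "L =", round(nx.average_shortest_path_length(G), 2),
              "C =", round(nx.average_clustering(G), 3))
# Expected outcome: similar small L, but C(WS) >> C(random).
```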

7. Scale-Free Networks

The last model to be analyzed [16,17,23,24], the Scale-Free Network, appears when the degree distribution follows a Power Law:
P(k) ∼ k^(−γ)
In such a case, there exists a small number of highly connected nodes, called hubs, which form the tail of the distribution.
On the other hand, the great majority of nodes have few connections, forming the head of the distribution.
Such a model was introduced [8,14,16] by Albert-László Barabási and Réka Albert, in 1999.
Some of its essential features are these: a non-homogeneous nature, in the sense that a few nodes have many edges while the remaining nodes have only very few edges, or links; a network size that continuously grows; and a connectivity that obeys a Power-Law distribution.
Many massive graphs, such as the WWW graph, share certain characteristics described by the aforementioned Power Law.
Béla Bollobás and Oliver Riordan [9,11] consider a random graph process in which nodes are added to the graph one at a time, each joined to a fixed number m of earlier nodes, chosen with probability proportional to their degree. After n steps, the resulting graph has diameter approximately equal to log n. This holds for m = 1; for m ≥ 2, the diameter converges asymptotically to (log n)/log(log n).
Another very interesting mechanism is the so-called Preferential Attachment (PA) process. This is any of a class of processes in which some quantity is distributed among a number of objects or individuals according to how much they already have, so that, intuitively, "the rich get richer" (the better connected nodes gain more new connections than the rest).
The principal scientific interest in PA processes is that they may produce power-law distributions.
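A minimal sketch of such a process (the "repeated nodes" trick below is one common way to sample proportionally to degree; it illustrates the mechanism, and is not the exact Barabási-Albert algorithm):

```python
# Preferential attachment: each new node links to m earlier nodes chosen
# with probability proportional to their current degree.

import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    rng = random.Random(seed)
    # "repeated nodes" trick: a node appears in the pool roughly once per
    # unit of degree, so a uniform choice from the pool is degree-proportional
    degree_pool = list(range(m))          # small initial core of m nodes
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(degree_pool))
        for t in chosen:
            degree_pool.extend([new, t])  # each new edge raises both degrees
    return Counter(degree_pool)           # node -> degree (plus seed offset)

degrees = preferential_attachment(10000)
histogram = Counter(degrees.values())     # degree -> number of nodes
print(sorted(histogram.items())[:10])     # heavy tail, roughly P(k) ~ k^(-3)
```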
A very notable example of a Scale-Free Network is the World Wide Web (WWW). As we know [23,25], it is a collection of many, possibly very different, sub-networks. Among the characteristics of the Web graph, Scale Invariance is especially important [10].

8. Diameter of the Web

Another interesting feature is the possibility of measuring the World Wide Web: its "diameter", understood here as the average shortest-path distance between pairs of nodes in the system, or at least some adequate bound or mean value for it [18,19].
The WWW is represented by a very large digraph, whose nodes are documents and whose edges are links (URLs) pointing from one document to another [14,20,22].
Réka Albert et al. [19] found that the average shortest path between two nodes is
⟨d⟩ = 0.35 + 2.06 log N
where N is the number of nodes in the Random Graph considered. This shows that the WWW is a Small-World network.
In particular, if we take
N = 8 × 10⁸
we will obtain
⟨d_Web⟩ = 18.59
This important result signifies that two randomly chosen nodes (documents) on the graph representing the WWW are, on average, only about 19 clicks (or steps in the WWW graph) away from each other.
For a given number of nodes, N, the distribution of d is of Gaussian (Normal) type. The logarithmic dependence of this diameter on N is also very remarkable. In this sense, R. Albert et al. indicate that, with the future growth of the WWW, ⟨d⟩ would change from 19 to only about 21.
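A quick arithmetic check of the fitted formula (note that the rounded coefficients 0.35 and 2.06 give ≈18.7, close to the published 18.59):

```python
# Evaluate <d> = 0.35 + 2.06 log10(N) for the Web sizes discussed above.

import math

def d_web(N):
    return 0.35 + 2.06 * math.log10(N)

print(round(d_web(8e8), 2))   # ~18.69, i.e., about 19 clicks
print(round(d_web(8e9), 2))   # ~20.75: tenfold growth adds only ~2 clicks
```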

9. Community Structure

Community Structure, often quantified through Modularity, is a very frequent characteristic of many real networks. Therefore, it has become a key problem in the study of networked systems [11,12,21].
Giving a precise, deterministic definition of community structure is nontrivial because of the complexity of networks. The concept of modularity (Q) can be used as a valid measure of community structure.
Some current models propose to capture the basic topological evolution of Complex Networks through the hypothesis that highly connected nodes increase their connectivity faster than their less connected peers, a phenomenon denoted as preferential attachment (PA). So, we can find a class of models that view networks as evolving dynamical systems, rather than as static graphs.
Most evolving network models are based on two essential hypotheses, growth and preferential attachment.
Growth suggests that networks continuously expand through the addition of new nodes and of links between nodes. Preferential attachment states that the rate at which a node with k links acquires new links is a monotonically increasing function of k.
We can consider an undirected n-node graph, or network, G, with adjacency matrix A = (a_ij), where a_ij = 1 if nodes i and j are connected, and a_ij = 0 otherwise. Then the modularity function, denoted by Q, will be defined as:
Q(P_k) = Σ_{j=1}^{k} [L(V_j, V_j)/L(V, V) − (L(V_j, V)/L(V, V))²]
where P_k is a partition of the nodes into k groups V_1, ..., V_k, and:
L(V′, V″) = Σ_{i∈V′} Σ_{j∈V″} a_ij
The modularity function Q provides a way to determine whether a partition is valid for deciphering the community structure of a network. Maximizing the modularity function over all possible partitions of a network is indeed a highly effective method.
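A minimal sketch of this modularity function for a graph given by its adjacency matrix, evaluated on a hypothetical example of two triangles joined by a single edge:

```python
# Modularity Q of a partition, following the definition above.

def L_sum(A, S, T):
    """L(S, T) = sum of a_ij over i in S, j in T."""
    return sum(A[i][j] for i in S for j in T)

def modularity(A, partition):
    V = range(len(A))
    total = L_sum(A, V, V)               # equals twice the number of edges
    return sum(L_sum(A, Vj, Vj) / total - (L_sum(A, Vj, V) / total) ** 2
               for Vj in partition)

A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
print(modularity(A, [[0, 1, 2], [3, 4, 5]]))   # ~0.357: strong communities
```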
An important case in community detection is that some nodes may not belong to a single community, so that placing them into more than one group may be much more reasonable. Such nodes admit a "fuzzy" categorization [25], and hence they may take on a special role, such as signal transduction in biological networks.

10. Fuzzy Symmetry

Recall that according to Klaus Mainzer, “Symmetry and Complexity determine the spirit of nonlinear science”. And “the universal evolution is caused by symmetry break, generating diversity, increasing complexity and energy” [26].
Graph theory has emerged as a primary tool for detecting numerous hidden structures in various information networks, including Internet graphs, social networks, biological networks, or, more generally, any graph representing relations in massive data sets. Analyzing these structures is very useful for introducing concepts such as Graph Entropy and Graph Symmetry.
We consider a function on a graph, G = (V, E), with P a probability distribution on its node set, V. The mathematical construct called Graph Entropy will be denoted by H(G, P). It will be defined as
H(G, P) = min_{a∈VP(G)} Σ_i p_i log(1/a_i)
where the minimum is taken over the vertex packing polytope VP(G) of the graph.
Observe that such a function is convex. It tends to +∞ on the boundary of the non-negative orthant of Rⁿ, and monotonically to −∞ along rays from the origin. So, the minimum is always achieved, and it is finite.
The entropy of a system represents the amount of uncertainty an observer has about the state of the system. The simplest example of a system is a random variable, which can be represented by a node in the graph, with edges representing the mutual relationships between such variables. Information measures the amount of correlation between two systems, and it reduces to a mere difference of entropies. So, the entropy of a graph is a measure of graph structure, or of its lack. It may therefore be interpreted as the amount of Information, or the degree of "surprise", communicated by a message. And since the basic unit of Information is the bit, entropy may also be viewed as the number of bits of "randomness" in the graph: the higher the entropy, the more random the graph.
It is possible to introduce some new asymmetry and symmetry level measures, as in [27,28]. Note that our results may also be applied to some different classes of spaces.
Recall some very useful definitions from Fuzzy Measure Theory.
Definition 1:
Let U be the universe of discourse, with ℘ a σ-algebra on U. Then, given a function
m: ℘→[0,1]
we describe m as a Fuzzy Measure, if it verifies:
I)
m (∅) = 0;
II)
m (U) =1;
III)
If A, B ∈ ℘, with A ⊆ B, then m(A) ≤ m(B) [monotonicity].
When we take the Entropy concept, we attempt to measure the fuzziness, i.e., the degree of being fuzzy for each element in ℘.
Definition 2:
The Entropy measure can be defined as a function
H: ℘ → [0,1]
verifying:
I)
If A is a crisp set, then H (A) = 0;
II)
If the membership degree is 1/2 for each x ∈ A, then H(A) is maximal (total uncertainty);
III)
If A is less fuzzy than B, then H(A) ≤ H(B);
IV)
H (A) = H (U\A).
Definition 3:
The Specificity Measure will be introduced as a measure of the tranquility we have when making a decision. Such a Specificity Measure (denoted by Sp) will be a function:
Sp: [0,1]U → [0,1]
where
I)
Sp (∅) = 0;
II)
Sp (k) = 1 if and only if k is a unitary set (singleton);
III)
If V and W are normal fuzzy sets in U, with V ⊆ W, then Sp(V) ≥ Sp(W).
Note: [0,1]^U denotes the class of fuzzy sets in U. Now, let (E, d) be a fuzzy metric space.
We proceed to define our new fuzzy measures. Such functions will be of the type
{L_i}, i ∈ {s, a}
where s denotes symmetry, and a denotes asymmetry.
From here on, we denote by c(A) the cardinality of a fuzzy set A, by H(A) its entropy measure, and by Sp(A) its corresponding specificity measure.
Theorem 1.
Let (E, d) be a fuzzy metric space, with A a subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then the first function, operating on A, may be defined as
L_s(A) = Sp(A) · (1 − c(A))/(1 + c(A)) + 1/(1 + H(A))
and it will also be a fuzzy measure. This measure is called the Symmetry Level Function.
Theorem 2.
Let (E, d) be a fuzzy metric space, with A any subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then the function
L_a(A) = 1 − [Sp(A) · (1 − c(A))/(1 + c(A)) + 1/(1 + H(A))] = 1 − L_s(A)
is also a fuzzy measure. This measure is called the Asymmetry Level Function.
Corollary 1.
Under the same hypotheses, the Symmetry Level Function is a Normal Fuzzy Measure.
Corollary 2.
Under the same conditions, the Asymmetry Level Function will also be a Normal Fuzzy Measure.
Recall that the values of the specificity measure, Sp, decrease as the size of the considered set increases, and that the range of the Specificity Measure Sp is [0,1].
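Purely as an illustration of Theorems 1 and 2, the sketch below instantiates L_s and L_a with simple standard choices for c (sigma-count), H (normalized fuzziness) and Sp (a Yager-style specificity); these concrete choices are assumptions of this sketch, not necessarily the measures of [27,28]:

```python
# Illustrative Symmetry/Asymmetry Level Functions. The fuzzy set is a dict
# of membership grades; c, H and Sp below are simple standard choices
# assumed here for illustration only.

import math

def c(A):                                 # sigma-count cardinality
    return sum(A.values())

def H(A):                                 # normalized fuzziness in [0, 1]
    def h(m):
        if m in (0.0, 1.0):
            return 0.0
        return -(m * math.log2(m) + (1 - m) * math.log2(1 - m))
    return sum(h(m) for m in A.values()) / len(A)

def Sp(A):                                # 1 for a crisp singleton, else smaller
    grades = sorted(A.values(), reverse=True)
    return grades[0] - sum(grades[1:]) / max(len(grades) - 1, 1)

def Ls(A):                                # Symmetry Level Function (Theorem 1)
    return Sp(A) * ((1 - c(A)) / (1 + c(A))) + 1 / (1 + H(A))

def La(A):                                # Asymmetry Level Function = 1 - Ls
    return 1 - Ls(A)

A = {"x1": 1.0, "x2": 0.3, "x3": 0.1}     # hypothetical fuzzy set
print(round(Ls(A), 3), round(La(A), 3))
```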

11. New Lines of Research

An important fact, though commonly forgotten, is that an element can belong to more than one fuzzy set at the same time. This admits new generalizations of the theoretical basis of important topics [30], such as Clustering and Community structures.
And recall the line opened by the Three Laws of Similarity of Shu-Kun Lin [30], according to which, in parallel with the first and second laws of thermodynamics, we have:
i)
The first law of information theory. The logarithmic function L = ln w, or the sum of entropy and information, L = S + I, of an isolated system remains unchanged, where S denotes the entropy and I the information content of the system.
ii)
The second law of information theory. Information of an isolated system decreases to a minimum at equilibrium.
iii)
The third law of information theory. For a perfect crystal (at zero absolute thermodynamic temperature), the information is zero and the static entropy is at the maximum. Or in a more general form, “for a perfect symmetric static structure, the information is zero and the static entropy is the maximum”.
Analyzing the Gibbs paradox, Dr. Lin arrives at his well-known:
iv)
Similarity principle. The higher the similarity among the components is, the higher the value of entropy will be and the higher the stability will be.
Through these three laws and this principle, Dr. Lin has clarified the relation of symmetry to several other concepts: higher symmetry, higher similarity, higher entropy, less information and less diversity are all related to higher stability. Upon these deep foundations, the mutual relationships between such fuzzy measures (Symmetry, Entropy, Similarity, and so on) can be traced, which may lead in the future to advances in innovative fields connected to them.
The paper of Prof. Joel Ratsaby is also very inspiring [31]. He introduces an algorithmic complexity framework for representing Lin's concepts of static entropy and stability, and their connection to the second law of thermodynamics. According to Ratsaby, the Kolmogorov complexity of a static structure, rather than static entropy, may be the proper measure of disorder. Consider a static structure in a surrounding, perfectly random universe, in which it acts as an interfering entity that introduces a local disruption of randomness; this is modeled by a selection rule, R. In this framework, we may clearly explain why more complex static structures are less stable. Continuing this promising line of investigation may be very interesting in the future.
According to Garlaschelli et al. [32], “while special types of symmetries (e.g., automorphisms) are studied in detail within discrete mathematics for particular classes of deterministic graphs, the analysis of more general symmetries in real Complex Networks is far less developed”.
They argued that real networks, as any entity characterized by imperfections or errors, necessarily require a stochastic notion of invariance. So, they propose a definition of stochastic symmetry based on graph ensembles.
But we suggest that, in addition, they can and must try theoretical approaches from the field of fuzzy measures, since the measures of symmetry and of entropy are closely interrelated. Fuzzy measures would allow us to regulate mathematically, by modulation, the diverse degrees to which these characteristics appear in real networks and systems.

12. Conclusions

Our initial purpose was to provide a comprehensive vision of the principal aspects and essential properties of Complex Networks, from a new mathematical point of view, and in particular to show the promise of the new Symmetry/Asymmetry Level functions.
The essential idea was to obtain as wide a perspective as possible on certain aspects of Complex Networks, as well as on the fuzzy measures acting on them. With the new results and the lines of advance pointed out, we think it will be possible to penetrate the aforementioned problems and reach a deeper comprehension of symmetry, entropy and other similar fuzzy measures, interesting not only from a theoretical viewpoint, but also promising for many scientific applications.

Acknowledgments

I wish to express my gratitude to Joel Ratsaby, from the Ariel University Center (Israel), who proposed that I take charge of the Special Issue of the journal Symmetry to which this paper belongs, and to Shu-Kun Lin, who later requested this paper. I also acknowledge the support received from the Editorial Board for this publication, and very especially from Cathy Wang, for her assistance. Finally, I am grateful to the anonymous referees for their wise advice on my paper.

References and Notes

  1. Bornholdt, S.; Schuster, H.G. Handbook of Graphs and Networks: From the Genome to the Internet; Wiley: Weinheim, Germany, 2003. [Google Scholar]
  2. Curie, P. Symmetry in Physics; Rosen, J., Ed.; American Society of Physics Teachers Series 3; American Institute of Physics: Melville, NY, USA, 1982; pp. 17–25. [Google Scholar]
  3. Renaud, P. Symmetry in Physics; Rosen, J., Ed.; American Society of Physics Teachers; American Institute of Physics: Melville, NY, USA, 1982; p. 26. [Google Scholar]
  4. Rosen, J. The Symmetry Principle. Entropy 2005, 7, 308–314; see also: Rosen, J. Symmetry in Science: An Introduction to the General Theory; Springer-Verlag: New York, NY, USA, 1995. [Google Scholar]
  5. Lin, S.K. Correlation of Entropy with Similarity and Symmetry. J. Chem. Inf. Comput. Sci. 1996, 36, 367–376. [Google Scholar] [CrossRef]
  6. Lin, S.K. Ugly Symmetry. In Proceedings of the 218th ACS National Meeting, Division of Organic Chemistry, Tetrahedral Carbon's 125th Anniversary Symposium, New Orleans, LA, USA, 1999. [Google Scholar]
  7. Weyl, H. Symmetry; Princeton University Press: Princeton, NJ, USA, 1983. [Google Scholar]
  8. Barabási, A.-L. Linked: How Everything Is Connected to Everything Else; Plume: New York, NY, USA, 2004. Available online: http://www.nd.edu/~alb/. [Google Scholar]
  9. Bollobás, B. Random Graphs; Cambridge Studies in Advanced Mathematics 73; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  10. Bollobás, B. Modern Graph Theory; Springer Verlag: Berlin, Germany, 1998. [Google Scholar]
  11. Bollobás, B. Handbook of Large-Scale Random Networks; Springer Verlag: Berlin, Germany, 2009. [Google Scholar]
  12. Newman, M. The structure and function of Complex Networks. SIAM Rev. 2003, 45, 167–256. [Google Scholar] [CrossRef]
  13. Newman, M. The structure and Dynamics of Complex Networks; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
  14. Albert, R.; Barabási, A.L. Statistical Mechanics of Complex Networks. Rev. Mod. Phys. 2002, 74, 47–72. [Google Scholar] [CrossRef]
  15. Watts, D.J. Six Degrees: The Science of a Connected Age; W. W. Norton and Company: New York, NY, USA, 2003. [Google Scholar]
  16. Barabási, A.-L.; Bonabeau, E. Scale-Free Networks. Sci. Am. 2003, 288, 50–59. [Google Scholar]
  17. Caldarelli, G. Scale-Free Networks; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  18. Bollobás, B.; Riordan, O. The diameter of a scale-free random graph. Combinatorica 2004, 24, 5–34. [Google Scholar]
  19. Albert, R.; Jeong, H.; Barabási, A.L. Diameter of the World-Wide Web. Nature 1999, 401, 129–130. [Google Scholar] [CrossRef]
  20. Barrat, A. Dynamical processes in Complex Networks; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  21. Strogatz, S.H. Exploring Complex Networks. Nature 2001, 410, 268–276. [Google Scholar] [CrossRef] [PubMed]
  22. Boccaletti, S. Complex Networks: Structure and Dynamics. Phys. Rep. 2006, 424, 175–308. [Google Scholar] [CrossRef]
  23. Dorogovtsev, S.N.; Mendes, J.F.F. Evolution of Networks. Adv. Phys. 2002, 51, 1079–1094. [Google Scholar] [CrossRef]
  24. Dorogovtsev, S.N.; Mendes, J.F.F. Evolution of Networks: From Biological Networks to the Internet and WWW; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  25. Dorogovtsev, S.N.; Goltsev, A.V.; Mendes, J.F.F. Critical phenomena in Complex Networks. Rev. Mod. Phys. 2008, 80, 1275–1284. [Google Scholar] [CrossRef]
  26. Mainzer, K. Symmetry and Complexity: The Spirit and Beauty of Nonlinear Science; World Scientific: Singapore, 2005. [Google Scholar]
  27. Garrido, A. Asymmetry and Symmetry Level Measures. Symmetry 2010, 2, 707–721. [Google Scholar] [CrossRef]
  28. Garrido, A. Asymmetry level as a fuzzy measure. Acta Univ. Apulensis Math. Inf. 2009, 18, 11–18. [Google Scholar]
  29. Garrido, A. Entropy, Genus and Symmetry on Networks. ROMAI J. 2010, 6, 23–38. [Google Scholar]
  30. Lin, S.K. The Nature of the Chemical Process. 1. Symmetry Evolution—Revised Information Theory, Similarity Principle and Ugly Symmetry. Int. J. Mol. Sci. 2001, 2, 10–39. [Google Scholar] [CrossRef]
  31. Ratsaby, J. An algorithmic complexity interpretation of Lin's third law of information theory. Entropy 2008, 10, 6–14. [Google Scholar] [CrossRef]
  32. Garlaschelli, D.; Ruzzenenti, F.; Basosi, R. Complex Networks and Symmetry I: A Review. Symmetry 2010, 2, 1683–1709. [Google Scholar] [CrossRef]
