1. Symmetry Measures
Symmetry is a fundamental concept and also a useful tool in almost every scientific and artistic field [1]. For instance, it is a cornerstone not only of Modern Physics, but also of apparently less related areas, such as Music. In fact, these two particular fields intersect in the Physics of Sound.
If we assume that physical systems have a high degree of (at least approximate) symmetry, then it is possible to simplify the equations describing them. Also, in the search for a unified description of elementary particles, the key lies in the equivalence between the valid theory and the most symmetrical of the possible theories.
Usually, Symmetry and, in parallel, Asymmetry are considered as two faces of the same object [2]. So the object is either totally symmetric or totally asymmetric, relative to a pattern; no intermediate situations of partial symmetry or partial asymmetry are considered. But this dichotomous classification is too simple, and lacks the necessary, realistic gradation. So, defining symmetry as a continuous feature, we arrive at a more complex definition, but one more useful in many essential fields, such as Computer Vision. Its interest is therefore not only theoretical, but also applied, in Artificial Intelligence.
When we consider an isolated physical system, its symmetry properties are closely related to the conservation laws which characterize such a system.
Emmy Noether gave a clear description of this relation in two theorems, establishing (first theorem) that “each symmetry of a physical system implies that some physical property of that system is conserved”, and conversely (second theorem) that “each conserved quantity (in a system) has a corresponding symmetry”.
Naturally, a definition of symmetry as a continuous feature is much more complex than the discrete one. We may attempt three ways of climbing to the summit of the symmetry/asymmetry measure. First, the geometrical characterization of Symmetry through group-theoretical tools [2]. Second, by statistical machinery, through distribution or density functions, or also by characteristic functions, for instance, measuring the symmetry degree and the skewness of different probability distributions. And third, by applying Measure Theory, in its more recent fuzzy version [3,4,5,6,7,8]; in this way, we may quantify the distance of departure from Symmetry in shape as a continuous feature, instead of a discrete one. Hence, we look not for full coincidence or absolute difference, but for the gradual coincidence of the shape with its symmetrical counterpart.
Concerning the concept of Symmetry, its applications and consequences, the research by Shu-Kun Lin [17,18,19,20,21] is very remarkable. Quoting his work,
- “symmetry is in principle ugly, because it is related to entropy and information loss”,
- “the highest level of symmetry is total chaos”,
- “a gas has more symmetry than a liquid and a liquid more symmetry than a solid”.
Although these are reasonable affirmations, they are not the kind that appears in the usual textbooks.
Shu-Kun Lin also proposes the Similarity Principle, according to which: “If all the other conditions remain constant, the higher the similarity among the components of an ensemble (or a considered system) is, the higher value of entropy of the mixture (for fluid phases) or the assemblage (for a static structure or a system of solid phase) or any other structure (such as an ensemble of quantum states in quantum mechanics) will be, the more stable the mixture or the assemblage will be, and the more spontaneous the process leading to such a mixture or assemblage will be.” This proposal [9] is very useful for characterizing structural stability and process spontaneity.
Shu-Kun Lin also defines Information as the amount of data after data compression, because the more usual definition of entropy as a measure of information may be confusing.
Lin also proposes three Information Theory Laws, based on the mutual relationship between the entropy and information measures:
First Law: the total amount of data of an isolated system remains unchanged.
Second Law: the information of an isolated system decreases to a minimum at equilibrium.
Third Law: for a solid structure of perfect symmetry (e. g., a perfect crystal), the information is null, and the entropy is at the maximum.
2. Symmetry and Causality
David Kellogg Lewis (1941–2001) was a prominent mathematical logician and analytical philosopher. He worked in a number of fields: Modal Logic, the plausibility of a multiplicity of possible worlds and, with greater success, the development of the Counterfactual Theory [10].
Counterfactual Theory has an early origin in the work of David Hume (1711–1776), who said in 1748: “We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by an object similar to the second. Or, in other words, where, if the first object had not been, the second never had existed” [11].
The first sentence reflects the Regularity Criterion, and the second the well-known Criterion of Counterfactuals. Such initial Counterfactual Theory was taken up again by John Stuart Mill: “… we conclude [that] because a resembles b in one or more properties, that it does so in a certain other property” (1874).
But criticism appeared against the explanation given by Lewis, as in Horwich [12] and Hausman [13]. Recall now the Properties of the Causality Relation, or simply Causation. Suppose A, B and C are three different events in a world, W. We have
- Transitivity: if A is the cause of B, and B is the cause of C, then A is the cause of C.
- Asymmetry or Anti-Symmetry: if A is the cause of B, then B cannot be the cause of A.
- Irreflexivity or Anti-Reflexivity: A cannot possibly be (ever) its own cause.
One of the main arguments of the critics is based on supposing that Lewis’ explanation suffers from a certain psychological implausibility. This can be found in Horwich [12].
D. K. Lewis [10] admits that this asymmetry is possibly a contingent characteristic of the actual world, not present in other worlds. So, in a world populated by only one atom, such asymmetry of over-determination does not hold. For this reason, there exists a possible discontinuity problem at the boundary: if we consider a contractive sequence of sub-worlds, each of them asymmetric, converging to the monatomic world, denoted by W, where the asymmetry no longer holds, we would have a possible weakness in the theory.
3. Our Geometrical Construction
Mathematically [7], the situation (relative to the symmetric character) should depart from a contractive set, or decreasing collection, of sub-worlds, each one inserted into the precedent, where every one but the last shows asymmetries, whereas in the limit, finally, the symmetry appears. To solve the problem, we can admit that symmetry is a discontinuous function, and accept the resulting jump in the limit. Or we may assign a certain value, Ls, as a level of symmetry (or asymmetry), with a definition suggested by the degree of belonging of elements to fuzzy sets, or as a level of satisfaction of some condition. So defined, in the limit case it is possible to obtain a state of complete symmetry.
For instance, the contractive condition may be taken from the concept of cardinality, here denoted by c: we can suppose that each world has a cardinal one less than that of its precedent world, c(An+1) = c(An) − 1. Once classified in decreasing order, reaching some degree of homogeneity among their elements, it is possible to introduce the functions “symmetry level” and “asymmetry level”, denoted by Ls and La.
This yields an increasing sequence of values, dependent on the cardinality of the world selected at each step, converging to one from the left; the limit situation gets closer to A at every step, hence {An}n∈N → A.
Frequently, the causal relation is taken to be intrinsically asymmetric, because in the world of our experience it is so. However, the fundamental physical laws are symmetric. Any other temporal asymmetries are accounted for in terms of the Principle of the Common Cause (PCC), due to Hans Reichenbach, which says: “If an improbable coincidence has occurred, there exists a common cause”.
Through such Principle, it is possible to explain the arrows (of entropy, experience and so on) by Causal Theory. And at the same time, the PCC results as Corollary of the Probabilistic Theory of Causation.
The Entropic Theory works in two phases: first, reducing any other arrow (causation, radiation, experience…) to the entropic arrow; and second, explaining entropic asymmetry in terms of boundary conditions on the universe.
Leyton [14,15] investigated the psychological relationships between shape and time, arguing that shape is used by the mind to recover the past, and that it forms a basis of memory. And then, symmetry is the means by which shape is transformed into memory.
Symmetry is an intrinsic property, which causes the object to remain invariant under some classes of transformations, such as Rotation, Reflection, Inversion, or more abstract mathematical operations. For instance, it can be represented in the form of coefficients of equations.
We start from an object, shape or form, F, where generally we refer to its boundary when it is a 3-D construct. We know that symmetry is never perfect in the real world. Therefore, perfect symmetry is imaginary, an ideal reference, only a product of mathematically creative minds [16,17]. So, we consider the actual symmetry, Ga, corresponding to an imperfect form, Fa, as opposed to the ideal symmetry, Gi, associated with its “perfect” form, Fi. In fact, Ga is a subgroup of Gi. When we say that “the form F has symmetry G”, we are expressing that the form F belongs to the set S(G), which contains all the shapes invariant under transformations of the symmetry group G. This can be denoted by F ∈ S(G).
We may define a space of all the possible objects, or shapes, denoted by X = {Xi}i∈N. So, we can assign to each element of X a crisp set containing all objects which fulfill all the conditions of G. We have the mapping G → S(G). For this purpose, we may introduce a membership function,
μG : X → [0, 1],
which characterizes the membership degree of the shape X in the set S(G), i.e. its degree of fulfillment of the symmetry requirements that G contains. Hence, we have some different situations,
- full membership, when μG (X) = 1;
- null membership (no membership at all), if μG (X) = 0;
- partial membership, when 0 < μG (X) < 1.
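As a toy illustration of such a graded membership (a minimal sketch, not the construction used later in the paper; the exponential decay exp(−mismatch) and the nearest-point matching are assumptions chosen for illustration), the following Python fragment assigns a degree in [0, 1] to a planar point set under one symmetry transformation:

```python
import numpy as np

def mu_G(points: np.ndarray, transform) -> float:
    """Membership degree of a point set under one symmetry transformation.

    The image of the set is matched point-by-point to its nearest original
    point; the mean squared mismatch is mapped to [0, 1] via exp(-mismatch).
    """
    imgs = transform(points)
    d2 = ((imgs[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    mismatch = d2.min(axis=1).mean()
    return float(np.exp(-mismatch))

# Reflection about the y-axis: a square is fully symmetric, mu = 1;
# perturbing one vertex yields partial membership, 0 < mu < 1.
reflect = lambda P: P * np.array([-1.0, 1.0])
square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
print(mu_G(square, reflect))   # 1.0 (full membership)
square[0] += 0.3
print(mu_G(square, reflect))   # ~ 0.91 (partial membership)
```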
In the 2-D case, and also in 3-D and higher dimensions, we may consider the forms and their boundaries as closed surfaces in R3. Therefore, it is feasible to describe them by selecting a convenient coordinate system.
Given an object O, we can define, for each ε > 0, the set of shapes whose Symmetry Distance to O does not exceed ε,
Oε = {Oi : SD(Oi, O) ≤ ε}.
By this, we obtain a new collection of nearest shapes, Θ = {Oε}ε > 0. This is the set of nearest neighboring shapes to the symmetrical O, relative to the Symmetry Distance (SD) of the shape Oi to its reference pattern, O. Note that if 0 < ε ≤ ε′, then Oε ⊆ Oε′, because if Oi ∈ Oε, then SD(Oi, O) ≤ ε ≤ ε′. So, we are now quantifying the distance of departure from Symmetry in shape as a continuous feature, instead of a discrete one. We no longer consider only the total coincidence or the absolute difference, but the gradual “similarity” of an object to its Symmetrical shape.
This Distance from Symmetry in shape is defined as the minimum mean squared distance required to displace the points of the original shape in order to obtain a symmetrical shape. So, SD is the minimum effort required to turn a given shape into a symmetrical one.
Every pair of such shapes (V and W, for instance) will be represented by their respective sequences of points, V = {vi}i=1,…,n and W = {wi}i=1,…,n. Then the aforementioned metric, m, will be defined by
m(V, W) = (1/n) ∑i ‖vi − wi‖².
Also, we will define the Symmetric Transform of V, denoted ST(V), as the closest symmetric shape to V relative to this metric.
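The following sketch illustrates SD and ST(V) in the simplest setting, mirror symmetry in the plane, assuming the point correspondence between a shape and its reflected partner is already fixed (here, point i is paired with point n−1−i; both the pairing and the chosen mirror are illustrative assumptions, not the general optimization over all symmetric shapes):

```python
import numpy as np

def symmetric_transform(V: np.ndarray) -> np.ndarray:
    """Closest mirror-symmetric shape about the y-axis, pairing point i
    with point n-1-i as its mirror partner (an illustrative convention)."""
    mirror = V[::-1] * np.array([-1.0, 1.0])   # reflected partner points
    return 0.5 * (V + mirror)                  # average point with partner

def symmetry_distance(V: np.ndarray) -> float:
    """Mean squared displacement needed to reach the symmetric transform."""
    W = symmetric_transform(V)
    return float(np.mean(np.sum((V - W) ** 2, axis=1)))

V = np.array([[1.0, 0.0], [0.1, 1.0], [-0.1, 1.0], [-1.2, 0.0]])
print(symmetry_distance(V))   # 0.005: V is nearly mirror-symmetric
```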
For each vertex or node, representing a random variable in the graph, we have the probability distribution value associated with its position. So, each possible situation of such a node, in the corresponding slice, must possess a numerical image of the random variable which, jointly with the value of the symmetry distance to the corresponding node in the pattern object, O, provides us with a pair,
(pi, SD(Oi, O)),
describing probabilistically its position and how far it is from its symmetrical final place. For we do not know in advance the exact position of each node of the graph in each slice, as we advance through the evolving structure; what we do know is the probability distribution, pi, associated with the position, that is, with the non-deterministic value with which such a node will fill a certain place.
4. Describing a Markov Process
It is possible to define a Markov Decision Process from this model, as a sequential chain of steps [8]. In this randomized Markov process, each node depends only on the corresponding node belonging to shapes in the same or the nearest slice (according to the Markov property).
We can take as Total Expectancy Reward (TER), for the minimization process, the previously defined Symmetry Distance (SD) between the successive shapes. Also, it is possible to introduce a new reward function, inversely proportional to such SD translated by a value equal to one,
R = 1/(1 + SD).
In such a case, it would be natural to apply a maximization procedure, avoiding the final problem of discontinuity.
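A minimal numerical sketch of this translated reward (assuming the form R = 1/(1 + SD) read from the description above): maximizing R is equivalent to minimizing SD, but R stays bounded in (0, 1] and remains continuous at SD = 0.

```python
def reward(sd: float) -> float:
    """Reward bounded in (0, 1]: maximal for a perfectly symmetric shape."""
    return 1.0 / (1.0 + sd)

assert reward(0.0) == 1.0    # SD = 0: perfect symmetry, maximal reward
assert reward(4.0) == 0.2    # larger departure from symmetry, lower reward
```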
Since the system states are observable, we construct a FOMDP (Fully Observable Markov Decision Process), described without hidden variables. Associated with each step of this process, we have the “transition probabilities”. At the temporal instant t, the system will be in the state Si, after taking the action, or decision, ai: do(X = xi), when it was in the state Si−1. The transition probability will be expressed as Pt (Si | Si−1, ai). But omitting the typical restriction of Markov Processes, we arrive at Bayesian Nets (BNs). These can be expanded to Dynamic Bayesian Nets (DBNs) by the explicit modeling of time. So, they generalize many other models, such as Hidden Markov Models (HMMs).
The essential idea is the replication of shapes on a sequence of temporal points, because, starting from the random variables, we may produce successive shapes as the process evolves. Then we can reach a Foliation of Bayesian Nets, F, where each BN belongs to a temporal slice, and so the total construct will be a Dynamic Bayesian Net containing the corresponding slices. So, we can consider each shape immersed in its parallel plate (in the particular 2-D case), within the global Foliation defined on BNs. So, this is a Dynamic Model, composed of a sequence of temporal BNs. Note that we allow the possible existence of arcs between the nodes of different slices, as temporal edges. Such slices are not necessarily connected only to the nearest one (as is the case in first-order Markov chains). Another type of arc is also possible; namely, the classical synchronal arcs, connecting nodes of BNs that belong to the same slice. We must also remark that such directed edges will never point to the past, because of their dynamical character.
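The sketch below captures only the structure just described (slices, synchronal arcs within a slice, temporal arcs that may skip slices but never point to the past); the class names and representation are illustrative assumptions, not an implementation from the cited works:

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    t: int                     # temporal index of this slice
    nodes: list                # node names of this slice's Bayesian net
    sync_edges: list = field(default_factory=list)   # (u, v) within slice

@dataclass
class DynamicBN:
    slices: list = field(default_factory=list)
    temporal_edges: list = field(default_factory=list)  # ((t1, u), (t2, v))

    def add_temporal_edge(self, t1, u, t2, v):
        if t2 <= t1:
            raise ValueError("temporal edges must never point to the past")
        self.temporal_edges.append(((t1, u), (t2, v)))

dbn = DynamicBN([Slice(0, ["A", "B"]), Slice(1, ["A", "B"]), Slice(3, ["A", "B"])])
dbn.add_temporal_edge(0, "A", 3, "A")   # slices need not be adjacent
```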
5. Shape Measures
Our purpose is to open the way to introduce some measures of asymmetry and skewness. Our aim is to classify, within a determinate standard distribution, its variations with respect to the model selected as totally symmetrical.
We analyze Symmetry as related to the more general case, i.e. multivariate probability distributions [16,17]. So, the univariate case results as a mere simplification.
Let X = (X1, X2, …, Xn)T be a random vector. And let α be the usual representation of the mean, mode or median, the well-known centralization measures of the distribution. So, there are at least three such n-dimensional vectors, corresponding to the aforementioned three measures.
There exist many examples of multivariate symmetry, according to the invariance of such a “centered” random vector, X − α, under an appropriate family of transformations. For instance, in increasing order of generality: spherical, elliptical, central and angular symmetry.
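In standard notation (an addition here, not quoted verbatim from the cited sources), writing =d for equality in distribution, these four notions can be stated as:
- spherical: X − α =d Q(X − α), for every orthogonal Q;
- elliptical: X = α + AY, with Y spherically symmetric and A a fixed matrix;
- central: X − α =d −(X − α);
- angular: (X − α)/‖X − α‖ =d −(X − α)/‖X − α‖.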
A random vector, X, is said to be symmetric of degree m if there exist a vector α and an orthogonal transformation, T (a product of m reflections in mutually orthogonal hyperplanes), such that
X − α =d T(X − α).
This means that the distribution shows symmetries about m mutually orthogonal (n−1)-dimensional hyperplanes. Therefore, they will show up about their (n−m)-dimensional intersections. So, the distribution shows m orthogonal directions of symmetry.
6. Shape Parameters
The Shape parameters (denoted SP) are a class of numerical parameters corresponding to a parametric family of probability distributions (PD). So, an SP is any parameter of a PD that is neither a location parameter nor a scale parameter. Such a parameter must affect the shape, rather than simply shifting or stretching the distribution.
Some distributions have shape parameters, as for instance the Γ distribution, the β distribution… But many others do not have such an SP, as the Normal, Exponential or Uniform distributions. For these continuous distributions with no SP, the shape is fixed; therefore, only location and/or scale can change. The Skewness and Kurtosis of such distributions remain constant, because they are independent of the location and scale parameters.
It is interesting to characterize the Skewness, or departure from symmetry. One approach is to model the skewness parametrically. Various extensions to the multivariate case have been proposed so far.
Skewness is a measure of the asymmetry of the probability distribution of a random variable with values in the real line. We can classify the shapes according to the sign of their measure of Skewness, Sk. How is it possible to measure this feature? The answer is through the third standardized moment, or third moment about the mean,
Sk = E[((X − μ)/σ)³] = E[(X − μ)³]/σ³.
For n-valued samples, we express this as
Sk = [(1/n) ∑i (xi − x̄)³] / [(1/n) ∑i (xi − x̄)²]^(3/2),
with x̄ the sample mean.
If Y is the sum of n independent random variables, {Xi}i=1,2,…,n, i.e. Y = ∑ Xi, all of them with the same distribution as X, then
Sk(Y) = Sk(X)/√n.
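These formulas can be checked numerically; a brief sketch (the Exp(1) example and the sample sizes are arbitrary illustrative choices):

```python
import numpy as np

def skewness(x: np.ndarray) -> float:
    """Sample third standardized moment."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.mean(d ** 3) / np.mean(d ** 2) ** 1.5)

rng = np.random.default_rng(0)
x = rng.exponential(size=200_000)                    # Exp(1): skewness = 2
y = rng.exponential(size=(200_000, 4)).sum(axis=1)   # sum of n = 4 copies
print(skewness(x))   # ~ 2.0
print(skewness(y))   # ~ 2 / sqrt(4) = 1.0
```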
The most usual asymmetry indices are due to Pearson and Fisher.
The Index of Pearson, denoted here by AP, is based on the relation between the mean (x̄) and the mode (Mo). It is defined by
AP = (x̄ − Mo)/σ.
When the distribution is symmetric, AP = 0. There is positive asymmetry when AP > 0, and negative asymmetry when AP < 0.
Nevertheless, despite being easier to calculate than the Fisher index, it is quite unusual in practice, because it is only reliable when the distribution shows certain features, such as a unimodal character, a bell shape and only a slightly asymmetric form.
The Index of Fisher, denoted here by AF, is based on the differences of the data relative to the mean. It is defined by
AF = [(1/n) ∑i (xi − x̄)³]/σ³.
But it shows some disadvantages, because it is strongly influenced by atypical values.
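Both indices are easy to sketch numerically; note that estimating the mode of a continuous sample requires a convention, and the histogram-peak rule used below is only one of several possibilities (an assumption of this sketch, not of the indices themselves):

```python
import numpy as np

def pearson_index(x: np.ndarray, bins: int = 50) -> float:
    """AP = (mean - mode) / std, with the mode approximated by the
    midpoint of the fullest histogram bin."""
    counts, edges = np.histogram(x, bins=bins)
    k = counts.argmax()
    mode = 0.5 * (edges[k] + edges[k + 1])
    return float((x.mean() - mode) / x.std())

def fisher_index(x: np.ndarray) -> float:
    """AF: standardized third moment about the mean."""
    d = x - x.mean()
    return float(np.mean(d ** 3) / np.mean(d ** 2) ** 1.5)

rng = np.random.default_rng(1)
sample = rng.gamma(shape=4.0, size=100_000)   # right-skewed: both indices > 0
print(pearson_index(sample), fisher_index(sample))
```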
In the case of Kurtosis (Kurt), it will be expressed as the fourth central moment divided by the fourth power of the standard deviation (i.e. the square of the variance),
Kurt = μ4/σ⁴.
But it is more useful to introduce the Coefficient of Kurtosis (cK), by subtracting three units from the precedent value,
cK = μ4/σ⁴ − 3.
This arithmetic operation (subtracting 3) is based on the fact that the Kurtosis value of the Gaussian distribution is three; so we are measuring the deviation with respect to the Normal, i.e. the “anti-gaussianity degree”. Therefore, for the Gaussian distribution the Kurtosis coefficient is null, i.e.
cK(Normal) = 0.
According to the sign of this coefficient, we can classify the distributions as
- Mesokurtic, if cK = 0;
- Leptokurtic, if cK > 0;
- Platykurtic, if cK < 0.
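A short numerical check of this classification (the distribution choices are illustrative):

```python
import numpy as np

def kurtosis_coefficient(x: np.ndarray) -> float:
    """Excess kurtosis c_K = mu4 / sigma^4 - 3 (0 for the Gaussian)."""
    d = x - x.mean()
    return float(np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0)

rng = np.random.default_rng(2)
print(kurtosis_coefficient(rng.normal(size=100_000)))    # ~ 0    (mesokurtic)
print(kurtosis_coefficient(rng.laplace(size=100_000)))   # ~ +3   (leptokurtic)
print(kurtosis_coefficient(rng.uniform(size=100_000)))   # ~ -1.2 (platykurtic)
```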
This coefficient thus describes the degree of concentration of the distribution around the central values of the variable. When the data distribution is symmetric, the Mean, Mode and Median coincide, and the distribution presents the same shape to the right and to the left of the center.
Therefore, the features of the shape can be analyzed by these shape statistics,
- Skewness, describing the amount of asymmetry;
- Kurtosis, measuring the concentration of data around the “peak” of the distribution and in its tails, versus the concentration in its flanks.
7. Chirality Measure
The first question coming to mind is about its name: what is Chirality? Let us start with a well-known quotation from Lord Kelvin [18]: “I call any geometrical figure, or group of points, chiral, and say that it has chirality, if its image, in a plane mirror, ideally realized, cannot be brought to coincide with itself”.
This opinion is supported by the classic and dichotomous division, totally symmetric versus totally asymmetric, without intermediate terms, in Euclidean sets.
A system is called chiral if it differs from its mirror image, such that the mirror image cannot be superposed on the original system. It is the famous case of our hands, our ears, and so on: it is impossible to superpose our left hand on our right hand, since each is the mirror image of the other. For this simple reason, we need two different gloves in order to cover our hands. Therefore, we say that an object is Chiral when it is non-isomorphic to its mirror image. Its symmetry group contains only pure translations, pure rotations, and screw rotations.
When a system or object is not chiral, we say that it is achiral (or also amphichiral). For instance, the Helix and the Möbius strip are 3-D chiral objects. Many other familiar objects exhibit the same chiral character, such as the human body. For more details on Chirality, and on Symmetry in general, see the books and papers by Petitjean [19,20,21] and Rosen [1].
Both elements of the pair (the original chiral object and its mirror image) are denominated mutually Enantiomorphs, from the old Greek for “opposite forms”. Their mutual relationship is named an Enantiomorphism. When it refers to molecules, we say Enantiomers.
The degree of such a feature is measured by the Chiral Index (here denoted Chi, or simply by the symbol χ). In the univariate case, it is expressed from the lower bound, ρmin, of the correlation coefficient (ρ) between the distribution and itself. Its mathematical expression will be
χ = (1 + ρmin)/2.
As a previous step, we must suppose the existence of two statistical parameters, the variance and the mean.
Obviously, if the object is Achiral (A), then its chiral index will be null, i.e. χ(A) = 0.
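For samples, the antitone pairing (ascending order against descending order) attains the lower bound of the correlation between two variables sharing the sample's distribution, which suggests the following plug-in estimator (a sketch written for this text, not code from the cited references):

```python
import numpy as np

def chiral_index(x: np.ndarray) -> float:
    """chi = (1 + rho_min) / 2, with rho_min estimated by the antitone
    pairing of the sorted sample against its own reversal."""
    a = np.sort(np.asarray(x, dtype=float))
    rho_min = np.corrcoef(a, a[::-1])[0, 1]
    return float(0.5 * (1.0 + rho_min))

rng = np.random.default_rng(3)
print(chiral_index(rng.normal(size=100_000)))       # ~ 0: symmetric, achiral
print(chiral_index(rng.exponential(size=100_000)))  # ~ 0.18: skewed, chiral
```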
This property is very important in many fundamental scientific fields, as for instance the study of the geometry of molecular structures in chemical compounds. It is possible to define such a Chirality Measure for a space of any dimension, for which probability distributions may be very useful. Recall that, in the n-dimensional Euclidean space, a finite number of equally weighted points can be considered as an n-dimensional distribution.
From a geometrical viewpoint, a figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry.
Recall that any isometry can be written, in Euclidean geometry, as
f(x) = Ax + b,
with A an orthogonal matrix and b a vector. If det(A) = 1, then the isometry is orientation-preserving. Otherwise, if det(A) = −1, then the isometry is orientation-reversing.
In 2-D, every figure which has an axis of symmetry is achiral, and every bounded achiral figure must have an axis of symmetry. In 3-D, every figure (solid) that possesses a center of symmetry, or a plane of symmetry, is achiral.
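A short numerical check of the determinant criterion (the rotation angle and the chosen mirror are arbitrary):

```python
import numpy as np

theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])   # mirror about the x-axis

for A in (rotation, reflection):
    det = np.linalg.det(A)
    kind = "preserving" if np.isclose(det, 1.0) else "reversing"
    print(f"det(A) = {det:+.0f} -> orientation-{kind}")
```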
Two more basic aspects are necessary. First, the Chiral Index must be invariant under isometric transformations applied to the probability distribution. And second, it must be independent of which particular mirror we have selected. The Chiral Index is defined for multivariate distributions, being derived from a probability metric, and has formal relations with the Monge-Kantorovich transportation problem.
The upper bound of χ for a multivariate distribution lies in the interval [0.5, 1], for any dimension n. When n = 2, it lies in the interval [1 − (1/π), 1 − (1/2π)]. But in general, the Chiral Index of a distribution is a real number in the closed unit interval [0, 1]. The value zero characterizes an achiral distribution. The value χ of the distribution of a random vector is indeed a measure of its degree of Skewness.
An achiral object may be superimposed on its mirror image; its symmetry group then possesses certain operations reversing its orientation, such as glide reflections, which are not realizable by a direct rigid-body motion.
Note: The first to observe the importance of Chirality in Chemistry was Louis Pasteur (1822–1895). Also, it is worth mentioning J. B. Biot (1774–1862), who found the connection between the chirality of crystals and the rotation of the plane of polarization of light passing through them.
8. Fuzzy Measure Theory
Recall some necessary definitions; see [3,4,5,6,7,8] for more details on definitions, results and proofs in this very important new mathematical theory.
Def. 1: Let U be the universe of discourse, with ℘ a σ-algebra on U. Given a function
m : ℘ → [0, 1],
we describe m as a Fuzzy Measure when it verifies
- I) m (∅) = 0;
- II) m (U) = 1;
- III) if A, B ∈ ℘ with A ⊆ B, then m (A) ≤ m (B) [monotonicity].
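On a finite universe, these axioms can be verified exhaustively; the following sketch (with an illustrative two-element universe and a deliberately non-additive m, both assumptions of this example) checks Definition 1:

```python
from itertools import chain, combinations

def powerset(u):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(u, r) for r in range(len(u) + 1))]

def is_fuzzy_measure(m, universe) -> bool:
    subsets = powerset(universe)
    if m[frozenset()] != 0 or m[frozenset(universe)] != 1:
        return False                      # axioms I and II
    return all(m[a] <= m[b]               # axiom III: monotonicity
               for a in subsets for b in subsets if a <= b)

U = {"x", "y"}
m = {frozenset(): 0.0, frozenset({"x"}): 0.3,
     frozenset({"y"}): 0.6, frozenset(U): 1.0}
print(is_fuzzy_measure(m, U))   # True: a (non-additive) fuzzy measure
```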
When we take the Entropy concept, we attempt to measure fuzziness, that is, the degree of being fuzzy for each element of ℘.
Def. 2: The Entropy can be defined as a function
H : ℘ → [0, 1],
verifying
- I) if A is a crisp set, then H (A) = 0;
- II) if μA (x) = 1/2 for all x ∈ U, then H (A) is maximal (total uncertainty);
- III) if A is less fuzzified than B, then H (A) ≤ H (B);
- IV) H (A) = H (U∖A).
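One classical function satisfying I)–IV) is the normalized De Luca–Termini entropy; we sketch it below as an illustrative instance only, without claiming it is the entropy used later in the paper:

```python
import numpy as np

def fuzzy_entropy(mu) -> float:
    """Normalized De Luca-Termini entropy of a finite fuzzy set, where `mu`
    holds the membership degrees in [0, 1]."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1.0 - 1e-12)
    h = -(mu * np.log2(mu) + (1.0 - mu) * np.log2(1.0 - mu))
    return float(h.mean())

print(fuzzy_entropy([0.0, 1.0, 1.0]))   # ~ 0: crisp set         (axiom I)
print(fuzzy_entropy([0.5, 0.5, 0.5]))   # 1.0: total uncertainty (axiom II)
print(fuzzy_entropy([0.2, 0.8, 0.6])
      == fuzzy_entropy([0.8, 0.2, 0.4]))  # True: complement rule (axiom IV)
```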
Note: it is also possible to define certain Upper and Lower Entropies, following the paper by Torra and Narukawa [22].
As an illustrative example of the usefulness of the Fuzzy Entropy (FE) concept, many situations may be mentioned. For instance, we can use FE as a Cost Function in Image Processing: for this purpose, Pasha et al. [23] introduced a threshold value in the image denoising problem. FE may also be applied in Fuzzy Regression Analysis, using fuzzy linear models with symmetric triangular fuzzy numbers, as introduced by Tanaka et al. [24].
Def. 3: The Specificity Measure is introduced as a measure of the confidence with which we take decisions. Such a Specificity Measure is a function
Sp : [0, 1]^U → [0, 1],
where
- I) Sp (∅) = 0;
- II) Sp (ϰ) = 1 ⇔ ϰ is a unitary set (singleton);
- III) if ς and τ are normal fuzzy sets in U with ς ⊂ τ, then Sp (ς) ≥ Sp (τ).
Remark: [0, 1]^U denotes the class of all the fuzzy sets on U.
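As an illustrative instance of Definition 3 (an assumption following Yager's linear measure of specificity, not necessarily the paper's choice): take the largest membership degree minus the average of the remaining ones.

```python
import numpy as np

def specificity(mu) -> float:
    """Yager-style linear specificity of a finite fuzzy set `mu`."""
    a = np.sort(np.asarray(mu, dtype=float))[::-1]   # degrees, descending
    if a.size == 0 or a[0] == 0.0:
        return 0.0                                   # empty set: Sp = 0
    rest = a[1:].mean() if a.size > 1 else 0.0
    return float(a[0] - rest)

print(specificity([0, 1, 0, 0]))        # 1.0: a singleton (axiom II)
print(specificity([1, 0.9, 0.8, 0.7]))  # 0.2: membership spread widely
```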
9. Asymmetry and Symmetry Level Functions
Let (E, d) be a fuzzy metric space. Nevertheless, our results [3,4,5,6,7,8] may be generalized to some different spaces. We define a new fuzzy measure. Such a function will be of the kind Li, with i ∈ {a, s}, where s denotes symmetry and a denotes asymmetry. Suppose, from here on, that we denote by c (A) the cardinal of a fuzzy set, A.
Theorem 1: Let (E, d) be a fuzzy metric space, A a subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then the function Ls, operating on A as a suitable combination of H and Sp, is a fuzzy measure.
Such a measure will be called the Symmetry Level Function.
Dually,
Theorem 2: Let (E, d) be a fuzzy metric space, A a subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then the function La, acting dually on A, is also a fuzzy measure.
This measure will be called the Asymmetry Level Function.
Corollary 1: Under the above hypotheses, the Symmetry Level Function is a Normal Fuzzy Measure.
Corollary 2: Under the above hypotheses, the Asymmetry Level Function is a Normal Fuzzy Measure.
Recall that it is possible to introduce the “integer part” function, denoted by INT, with INT(x) = ⌊x⌋. The values of the fuzzy measure Sp decrease as the size of the considered set increases. Also recall that the range of such a Specificity Measure, Sp, is the closed unit interval, [0, 1].
Corollary 3: Let (E, d) be a fuzzy metric space, and let {Ai}i=1,2,…,n be a contractive chain of nested subsets, sub-worlds of the universe U, all of them containing the fuzzy set A, i.e.
A ⊆ An ⊆ … ⊆ A2 ⊆ A1 = U.
Corollary 4: Under the same above-mentioned hypotheses, we may form the composition of the initial asymmetry level with the integer part function (INT), obtaining INT ∘ La or, dually, INT ∘ Ls.
10. Conclusions
We now have at our disposal a new measure quantifying the asymmetry level of shapes, useful for fuzzy sets. For this, we need to use a combination of fuzzy measures, derived from related functions, such as the Entropy and Specificity Measures. Hence, the fundamental direction when working on Symmetry and its properties may be geometrical, on problems from different fields.
Let us mention, as an example, the analysis of crystalline structures by the Crystallographic Planar or Spatial Groups. A direct application of classical Group Theory to physical problems is also possible: Quantum Mechanics, Penrose tiles, Fractals, Chaos Theory, and so on. And, closer to Computer Science, it is related to Artificial Vision, Pattern Recognition, and the analysis of symmetrical structures in Computational Linguistics or similar tasks in AI.
Basically, the precedent work related to these aspects was on Symmetry Groups, with the papers of Hermann Weyl and his very famous book, Symmetry [2]. Regarding its application to Pattern Recognition, Artificial Vision and so on, the papers and presentations of Y. Liu on Computational Symmetry [25] are recommended. In his paper, Liu says that “symmetry is an essential mathematical concept, as well as a ubiquitous, observable phenomenon in nature, science and art. Either by evolution or by design, symmetry implies a potential structural efficiency gain that makes it universally appealing to computational science. Recognition and categorization of both, symmetry and regularity, may be the first step towards capturing the essential skeleton of a real world problem, while at the same time minimizing computational redundancy” [25].
We have also considered the question of Symmetrical Patterns. Future research needs to focus on questions derived from the versatility of the real world, surpassing the relatively coarse and rigid old geometry (group theory included), which only permits a first approximation to the more difficult problems of Artificial Intelligence.