# Cellular Automata and Artificial Brain Dynamics



## 1. Introduction

The GoL has been proposed as an example of self-organized criticality (SOC), with avalanche distributions following a power law with exponent ≈ −1.6 [27]. More recently, Hemmingsson [28] suggested that the GoL does not converge to a SOC state, but is in fact subcritical with a long length scale. Additional evidence has been put forward by Nordfalk et al. [29]. This near-critical behavior of the GoL suggests a connection to brain dynamics.

## 2. Game of Life: A Brief Introduction

- Any active cell with fewer than two active neighbors becomes inactive (under-population).
- Any active cell with two or three active neighbors lives on to the next generation.
- Any active cell with more than three active neighbors becomes inactive (over-population).
- Any inactive cell with exactly three active neighbors becomes active (reproduction).
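These rules translate almost line-for-line into code. A minimal NumPy sketch of one synchronous update (toroidal boundaries are assumed here for simplicity; other boundary choices are possible):

```python
import numpy as np

def gol_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of Conway's rules on a toroidal grid."""
    # Count the eight Moore neighbors of every cell via periodic shifts.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    survive = (grid == 1) & ((n == 2) | (n == 3))   # survival rule
    born = (grid == 0) & (n == 3)                   # reproduction rule
    return (survive | born).astype(int)             # everything else dies

# A blinker is a period-two oscillator: two updates restore it.
g = np.zeros((5, 5), dtype=int)
g[2, 1:4] = 1
assert np.array_equal(gol_step(gol_step(g)), g)
```

All four rules collapse into the two boolean masks above, since both under- and over-population simply mean "not selected by either mask."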

The probability of appearance of a given structure decays exponentially with l, where l is the size of the minimal distinct configuration which evolves to the required structure in one time step [43]. When N is large, the time needed for the system to reach equilibrium is approximately equal to N, N×N being the size of the box. This is not true for special small configurations, as we will discuss later. The final density of life is small and independent of the initial one, unless the initial density is very low or very high (in which case life disappears completely) [44].

In the asynchronous GoL, where each cell is updated with probability α at every timestep, a phase transition has been found at α_c ∼ 0.9. The critical threshold α_c separates two well-distinguished macroscopic behaviors or phases. The phase α > α_c is the frozen phase, in which the system evolves with low-density patterns and quickly stabilizes to a fixed point. The second phase (α < α_c) is the labyrinth phase, characterized by a steady state with higher density and the absence of stabilization at a fixed point [53]. Further research on the behavior of the GoL under different rules, i.e., different intervals of survival and fertility, can be found in Reference [55]. However, all this complexity must not be confused with chaos. The GoL is not a chaotic system; it is deterministic but extremely complex. Life is not at the "border of chaos," but thrives on the "border of extinction" [56].
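The α-asynchronous scheme behind this transition can be sketched as follows: at every timestep each cell applies Conway's rules with probability α and otherwise keeps its state, so α = 1 recovers the ordinary synchronous GoL. This parameterization is our reading of the update studied in Reference [53], given as an illustration rather than a verbatim reproduction:

```python
import numpy as np

rng = np.random.default_rng(0)

def async_gol_step(grid: np.ndarray, alpha: float) -> np.ndarray:
    """alpha-asynchronous GoL update: each cell applies the rule with
    probability alpha and otherwise keeps its current state."""
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    updated = (((grid == 1) & ((n == 2) | (n == 3))) |
               ((grid == 0) & (n == 3))).astype(int)
    mask = rng.random(grid.shape) < alpha   # which cells actually update
    return np.where(mask, updated, grid)

# alpha = 1 is fully synchronous: a blinker returns after two updates.
b = np.zeros((5, 5), dtype=int)
b[2, 1:4] = 1
assert np.array_equal(async_gol_step(async_gol_step(b, 1.0), 1.0), b)
```

Sweeping α downward through α_c in such a simulation is the experiment that distinguishes the frozen from the labyrinth phase.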

## 3. The Basic Idea of the Model

The human brain contains about 10^{10}–10^{11} individual neurons, and the total number of connections is even larger, because every cell projects its synapses to a total of 10^{4}–10^{5} different neurons [59]. With such an immense number of connections, our model is not intended to accurately describe the human brain or any other real nervous system, but rather to examine the overall properties of a big connected group of discrete "neuron-like" elements. As a first approximation, we can assume that live cells in the GoL correspond to active neurons and dead cells to inactive neurons. If the cells in the 2D grid are assumed to be single neurons, the model may be considered unrealistic, since a single neuron is usually connected to about 10,000 other neurons on average, and not only eight. This can be modified easily: counting second NN cells (the second ring around the cell) would give 24 connections, counting third NN cells would give 48, and so on, but that would make the model much more expensive, computationally speaking. To reach roughly 1000 connections, it would be necessary to consider up to 15 NNs; for 10,000 connections, 49 NNs.

We define the Activated Area ratio A_A/A_0, where A_A is the area in the simulation that has been active at least once, and A_0 is the initial active area (the initial number of live cells). The Activated Area grows faster for high densities (initial density ρ_i ≥ 15%) only during the first stages of the simulation (for times ≤ 500). After this first stage the slope decreases, whilst for ρ_i = 10%, A_A continues growing with a similar slope. Hence, for the activated area, two stages can be differentiated in the evolution of the system: initially, the area increases at a fast rate, and then, after some time τ(ρ_i), the rate of increase becomes slower. It is important to note that the ratio A_A/A_0 is > 1, but that does not mean multi-counting. When the activated area is computed, we count the cells that have been alive at least once; once a cell has been counted, it is not counted again. That is to say, if a cell is alive at time 0 (or time x), it is not counted again if it comes back to life at time n (> 0, or > x). The idea is very much like the random-walk study presented in Reference [60]. The reason the ratio exceeds 1 is that we start, for example, with an initial density of 10%, i.e., 10% of the initial area is active. In a closed simulation, the ratio can then be as high as 10 at a certain time (and is unbounded in an open (infinite) simulation). As for the reason to define the activated area in our model, there are studies where the spatial extension of brain activity is examined [61,62].

The activated area can be assumed to grow as A_A(t) ∝ r^2(t) (r being the distance from the center of the box), hence we can write the velocity of the spreading signal as:

v(t) = dr(t)/dt ∝ d√(A_A(t))/dt

In the first steps of a simulation, the activated area increases from A_0 to 2A_0, 3A_0, etc., in only a few steps. At the same time, as has already been discussed, in a few steps the density decreases, and this process is fast as well (see top panel of Figure 2), so that after a few steps the increase of the initial active area is no longer "explosive."

Overall, A_A increases from A_0 to 10 to 15 times A_0, which means that the initial square is filled (i.e., A_A is multiplied by a factor 100/ρ_i) and then little spread is observed. In terms of the initial size, N×N, the final activated area (or sampled area) would be less than 2N × 2N, so that the signal (neglecting the propagation due to gliders, which will continue forever) does not spread more than N cells from the borders of the initial distribution. However, this may vary from one simulation to the next, and it is certainly not true for special configurations such as the "Methuselahs," as we will discuss later.

The velocity of cortical spreading depression (CSD) is of the order of 10^{−4} mm/ms. If we calculate the velocity of growth of the active area during the first 100 steps, when A_A grows fast, it is found that V = 10^{−3}–10^{−4} cells/step. If we assume that the timestep is in ms, in accordance with Reference [68], and that the cell size is of the order of mm, then V is predicted to be higher than that of the CSD by one order of magnitude. To make this comparison, we have assumed a one-to-one correspondence between neurons and cells in our model. However, since the number of actual neurons represented by each CA cell is a user-defined parameter, the CA model is flexible in that sense and can achieve better agreement.
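The conversion behind this comparison is simple arithmetic; a sketch, where the ~1 mm cell size and ~1 ms timestep are the assumed scales from the text, not measured quantities:

```python
CELL_MM = 1.0   # assumed linear size of one CA cell, in mm
STEP_MS = 1.0   # assumed duration of one timestep, in ms

def to_mm_per_ms(v_cells_per_step: float) -> float:
    """Convert a lattice velocity (cells/step) to physical units (mm/ms)."""
    return v_cells_per_step * CELL_MM / STEP_MS

v_model_fast = 1e-3   # cells/step, upper end of the measured range
v_csd = 1e-4          # mm/ms, order of magnitude of CSD propagation
print(to_mm_per_ms(v_model_fast) / v_csd)  # 10.0 -> one order of magnitude
```

Changing the assumed number of neurons per cell rescales `CELL_MM`, which is the flexibility referred to above.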

## 4. Spikes in the GoL Model

A spiked area A_S may be defined at each timestep as the total number of cells that have spiked at least once (obviously, A_S, like A_A, can never decrease). It is a well-known fact in neurophysiology that the same neuron can become inactive and active (firing) many times per second [72,73]. Nevertheless, real neurons have a refractory period after which they can fire again. This period lasts about 15 milliseconds, and in our model we assume that one discrete timestep corresponds to this unit of time.

For a random initial configuration (ρ_i = 20%), at the very first stages the number of spikes was found to be small in terms of the initial area (or initial population p_i), and it grew, though not by much, within a few steps (fewer than 10). Then, after this initial reorganization, an almost flat curve was observed, especially for rules R4, R5, and R6. The average value over the first 200 steps after the aforementioned reorganization (from step 20 to step 220) was 5.15, 1.48, 0.80, and 0.22 for R3, R4, R5, and R6, respectively (see bottom panel of Figure 7).

The ratio A_A/A_S converged fast to an almost constant value for R3, R4, and R5, but not as fast for R6. For R3 and R4 the ratio was almost 1, i.e., the spiked area became almost equal to the active area. However, the number of spikes with the R3 rule was more than 5 times larger than with R4. For R5 that ratio was 1.5, and for R6 approximately 2.5 (see top panel of Figure 7).

Figure 5 shows the spiked area A_S at timestep 2000, for three simulations with a random initial configuration of density 20% in a 1000 × 1000 grid and spiking rules R3, R4, and R5, respectively. The top panel presents the respective active areas (as has already been discussed); for the sake of clarity, the color scale has been limited to between 0 and 20. The shape and size of both areas (active and spiked) vary significantly between different simulations, a fact arising from the different initial random configurations. However, the ratio A_A/A_S does not change and remains almost constant at 1 for R3 and R4 and 1.5 for R5, as discussed in the previous paragraph.

## 5. Defects and Percolation

Figure 15 presents the mixed density vs. time for two communities with initial density ρ_i = 20% each, for different densities of defects in the middle region (i.e., the same configuration as described for the simulations presented in Figure 13). These results are indicative rather than absolute, in two ways. First, they correspond to single simulations, not averaged curves; for the same conditions, the results changed significantly from one simulation to another, and in some cases the two communities would not interact at all. The idea is to demonstrate that the lower the number of defects, the easier it becomes for the two regions (upper and lower) to interact (i.e., less time is needed for the mixed density to become different from zero). Moreover, the interaction becomes stronger, if we take the area under the density curve as a measure of strength (see Video 4 in the supplementary material).
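A sketch of this two-community experiment, with defects modeled as cells that can never become active (our assumption of what a defect is; the closed boundaries, sizes, and densities are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbors(grid):
    """Moore-neighbor counts with closed (dead) boundaries."""
    p = np.pad(grid, 1)
    N, M = grid.shape
    return sum(p[1 + dy:1 + dy + N, 1 + dx:1 + dx + M]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))

def step_with_defects(grid, defect_mask):
    """One GoL update in which defect cells can never become active."""
    n = neighbors(grid)
    new = (((grid == 1) & ((n == 2) | (n == 3))) |
           ((grid == 0) & (n == 3))).astype(int)
    new[defect_mask] = 0          # defects are clamped to the inactive state
    return new

# Two random communities (20% density) separated by an empty middle band
# holding 2% defects; watch for the first activity inside the band.
N = 60
band = slice(N // 3, 2 * N // 3)
grid = (rng.random((N, N)) < 0.20).astype(int)
grid[band] = 0                    # empty gap between the two communities
defects = np.zeros((N, N), dtype=bool)
defects[band] = rng.random((N // 3, N)) < 0.02

first_contact = None
for t in range(300):
    grid = step_with_defects(grid, defects)
    if first_contact is None and grid[band].any():
        first_contact = t + 1
print("first activity inside the gap at step:", first_contact)
```

Sweeping the defect fraction in the band and recording `first_contact` over many random seeds is the averaged version of the measurement discussed above.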

## 6. Extensions of Game of Life

## 7. Conclusions

It would be interesting to repeat these studies in much larger systems (10^{8} or 10^{9} neurons). As Anderson said, "More is different" [85], so the results regarding the defects can be very size-dependent.

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460.
2. Sarkar, P. A brief history of cellular automata. ACM Comput. Surv. 2000, 32, 80–107.
3. Ilachinski, A. Cellular Automata; World Scientific Publishing: Singapore, 2001.
4. Bard-Ermentrout, G.; Edelstein-Keshet, L. Cellular Automata Approaches to Biological Modeling. J. Theor. Biol. 1993, 160, 97–133.
5. Boccara, N.; Roblin, O.; Roger, M. Automata network predator-prey model with pursuit and evasion. Phys. Rev. E 1994, 50, 4531.
6. Gerhardt, M.; Schuster, H. A cellular automaton describing the formation of spatially ordered structures in chemical systems. Phys. D 1989, 36, 209–221.
7. Zhu, M.F.; Lee, S.Y.; Hong, C.P. Modified cellular automaton model for the prediction of dendritic growth with melt convection. Phys. Rev. E 2004, 69, 061610.
8. Bard Ermentrout, G.; Edelstein-Keshet, L. Cellular automata approaches to biological modeling. J. Theor. Biol. 1993, 160, 97–133.
9. Kansal, A.R.; Torquato, S. Simulated Brain Tumor Growth Dynamics Using a Three-Dimensional Cellular Automaton. J. Theor. Biol. 2000, 203, 367–382.
10. Hoffman, M.I. A Cellular Automaton Model Based on Cortical Physiology. Complex Syst. 1987, 1, 187–202.
11. Hopfield, J. Neural networks and physical systems with emergent collective computational abilities. PNAS 1982, 79, 2554.
12. Tsoutsouras, V.; Sirakoulis, G.C.; Pavlos, G.P.; Iliopoulos, A.C. Simulation of healthy and epileptiform brain activity using cellular automata. Int. J. Bifurc. Chaos 2012, 22, 9.
13. Acedo, L.; Lamprianidou, E.; Moraño, J.-A.; Villanueva-Oller, J.; Villanueva, R.-J. Firing patterns in a random network cellular automata model of the brain. Phys. A 2015, 435, 111–119.
14. Wolfram, S. A New Kind of Science; Wolfram Media Inc.: Champaign, IL, USA, 2002.
15. Chialvo, D.R. Critical brain dynamics at large scale. In Criticality in Neural Systems; Niebur, E., Plenz, D., Schuster, H.G., Eds.; John Wiley: Hoboken, NJ, USA, 2014.
16. Chialvo, D.R. Emergent complex neural dynamics. Nat. Phys. 2010, 6, 744.
17. Haimovici, A.; Tagliazucchi, E.; Balenzuela, P.; Chialvo, D.R. Brain organization into resting state networks emerges at criticality on a model of the human connectome. Phys. Rev. Lett. 2013, 110, 178101.
18. Luković, M.; Vanni, F.; Svenkeson, A.; Grigolini, P. Transmission of information at criticality. Phys. A 2014, 416, 430–438.
19. Priesemann, V.; Valderrama, M. Neuronal Avalanches Differ from Wakefulness to Deep Sleep: Evidence from Intracranial Depth Recordings in Humans. PLoS Comput. Biol. 2013.
20. Priesemann, V.; Wibral, M.; Valderrama, M.; Pröpper, R.; Le Van Quyen, M.; Geisel, T.; Munk, M.H. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front. Syst. Neurosci. 2014, 8, 108.
21. Wibral, M.; Lizier, J.; Vögler, S.; Priesemann, V.; Galuske, R. Local active information storage as a tool to understand distributed neural information processing. Front. Neuroinf. 2014, 8, 1.
22. Langton, C.G. Computation at the edge of chaos: Phase transitions and emergent computation. Phys. D 1990, 42, 12–37.
23. Beggs, J.M.; Timme, N. Being critical of criticality in the brain. Front. Physiol. 2012, 3, 163.
24. Friedman, N.; Ito, S.; Brinkman, B.A.; Shimono, M.; DeVille, R.L.; Dahmen, K.A.; Butler, T.C. Universal Critical Dynamics in High Resolution Neuronal Avalanche Data. Phys. Rev. Lett. 2012, 108, 208102.
25. Kello, C.T. Critical Branching Neural Networks. Psychol. Rev. 2013, 120, 230–254.
26. Werner, G. Metastability, Criticality and Phase Transitions in brain and its models. Biosystems 2007, 90, 496–508.
27. Bak, P.; Chen, K.; Creutz, M. Self-organized criticality in the 'Game of Life'. Nature 1989, 342, 780.
28. Hemmingsson, J. Consistent results on 'Life'. Phys. D 1995, 80, 151.
29. Nordfalk, J.; Alstrøm, P. Phase transitions near the "game of Life". Phys. Rev. E 1996, 54, R1025.
30. Ninagawa, S.; Yoneda, M.; Hirose, S. 1/f fluctuation in the 'Game of Life'. Phys. D 1998, 118, 49–52.
31. Allegrini, P.; Menicucci, D.; Bedini, R.; Fronzoni, L.; Gemignani, A.; Grigolini, P.; Paradisi, P. Spontaneous brain activity as a source of ideal 1/f noise. Phys. Rev. E 2009, 80, 061914.
32. Fox, M.D.; Raichle, M.E. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 2007, 8, 700–711.
33. Linkenkaer-Hansen, K.; Nikouline, V.V.; Palva, J.M.; Ilmoniemi, R.J. Long-range temporal correlations and scaling behavior in human brain oscillations. J. Neurosci. 2001, 21, 1370–1377.
34. Gilden, D.L.; Thornton, T.; Mallon, M.W. 1/f Noise in Human Cognition. Science 1995, 267, 1837–1839.
35. Bédard, C.; Kröger, H.; Destexhe, A. Does the 1/f Frequency Scaling of Brain Signals Reflect Self-Organized Critical States? Phys. Rev. Lett. 2006, 97, 118102.
36. Wolfram, S. Statistical mechanics of cellular automata. Rev. Mod. Phys. 1983, 55, 601.
37. Chapman, P. "Life Universal Computer". Available online: http://www.igblan.free-online.co.uk/igblan/ca/ (accessed on 15 November 2018).
38. Berlekamp, E.R.; Conway, J.H.; Guy, R.K. Winning Ways for Your Mathematical Plays, 2nd ed.; A. K. Peters Ltd.: Natick, MA, USA, 2004.
39. Rendell, P. Turing universality of the game of life. In Collision-Based Computation; Adamatzky, A., Ed.; Springer: London, UK, 2002; p. 513.
40. Rennard, J.P. Implementation of logic functions in the game of life. In Collision-Based Computation; Adamatzky, A., Ed.; Springer: London, UK, 2002; p. 491.
41. Koslow, S.H.; Huerta, M.F. Neuroinformatics: An Overview of the Human Brain Project; Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 1997.
42. Bagnoli, F.; Rechtman, R.; Ruffo, S. Some facts of life. Phys. A 1991, 171, 249–264.
43. Buckingham, D.J. Some facts of life. Byte 1978, 3, 54.
44. Monetti, R.A.; Albano, E.V. Critical edge between frozen extinction and chaotic life. Phys. Rev. E 1995, 52, 5825.
45. Schulman, L.S.; Seiden, P.E. Statistical Mechanics of a Dynamical System Based on Conway's Game of Life. J. Stat. Phys. 1978, 19, 3.
46. Garcia, J.B.C.; Gomes, M.A.F.; Jyh, T.I.; Ren, T.I.; Sales, T.R.M. Nonlinear dynamics of the cellular-automaton "game of Life". Phys. Rev. E 1993, 48, 3345.
47. Kayama, Y. Complex networks derived from cellular automata. arXiv 2010, arXiv:1009.4509.
48. Kayama, Y. Network representation of cellular automata. In Proceedings of the 2011 IEEE Symposium on Artificial Life, Paris, France, 11–15 April 2011; pp. 194–202.
49. Kayama, Y.; Imamura, Y. Network representation of the game of life. J. Artif. Intell. Soft Comput. Res. 2011, 1, 233–240.
50. Huang, S.-Y.; Zou, X.-W.; Tan, Z.J.; Jin, Z.Z. Network-induced nonequilibrium phase transition in the "game of Life". Phys. Rev. E 2003, 67, 026107.
51. Fates, N.; Morvan, M. Perturbing the Topology of the Game of Life Increases Its Robustness to Asynchrony. In Proceedings of the International Conference on Cellular Automata, Amsterdam, The Netherlands, 25–28 October 2004; pp. 111–120.
52. Lee, J.; Adachia, S.; Peper, F.; Morita, K. Asynchronous game of life. Phys. D 2004, 194, 369–384.
53. Blok, H.J.; Bergersen, B. Synchronous versus asynchronous updating in the "game of Life". Phys. Rev. E 1999, 59, 3876–3879.
54. Schönfisch, B.; de Roos, A. Synchronous and asynchronous updating in cellular automata. BioSystems 1999, 51, 123–143.
55. Reia, S.M.; Kinouchi, O. Conway's game of life is a near-critical metastable state in the multiverse of cellular automata. Phys. Rev. E 2014, 89, 052123.
56. De la Torre, A.C.; Mártin, H.O. A survey of cellular automata like the "game of life". Phys. A 1997, 240, 560–570.
57. Beer, R.D. Autopoiesis and Cognition in the Game of Life. Artif. Life 2004, 10, 309–326.
58. Beer, R.D. The Cognitive Domain of a Glider in the Game of Life. Artif. Life 2014, 20, 183–206.
59. Braitenberg, V.; Schüz, A. Statistics and Geometry of Neuronal Connectivity; Springer-Verlag: Berlin, Germany, 1998.
60. Yuste, S.B.; Acedo, L. Number of distinct sites visited by N random walkers on a Euclidean lattice. Phys. Rev. E 2000, 61, 2340.
61. Lachaux, J.P.; Pezard, L.; Garnero, L.; Pelte, C.; Renault, B.; Varela, F.J.; Martinerie, J. Spatial extension of brain activity fools the single-channel reconstruction of EEG dynamics. Hum. Brain Mapp. 1997, 5, 26–47.
62. McDowell, J.E.; Kissler, J.M.; Berg, P.; Dyckman, K.A.; Gao, Y.; Rockstroh, B.; Clementz, B.A. Electroencephalography/magnetoencephalography study of cortical activities preceding prosaccades and antisaccades. Neuroreport 2005, 16, 663–668.
63. Holsheimer, J.; Feenstra, B.W.A. Volume conduction and EEG measurements within the brain: A quantitative approach to the influence of electrical spread on the linear relationship of activity measured at different locations. Electroencephalogr. Clin. Neurophysiol. 1977, 43, 52–58.
64. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544.
65. Leao, A.P.P. Pial circulation and spreading depression of activity in the cerebral cortex. J. Neurophysiol. 1944, 7, 391–396.
66. Porooshani, H.; Porooshani, G.H.; Gannon, L.; Kyle, G.M. Speed of progression of migrainous visual aura measured by sequential field assessment. Neuro-Ophthalmology 2004, 28, 101–105.
67. Ayata, C.; Lauritzen, M. Spreading Depression, Spreading Depolarizations, and the Cerebral Vasculature. Physiol. Rev. 2015, 95, 953–993.
68. Acedo, L.; Moraño, J.-A. Brain oscillations in a random neural network. Math. Comput. Model. 2013, 57, 1768–1772.
69. Hutsler, J.J. The specialized structure of human language cortex: Pyramidal cell size asymmetries within auditory and language-associated regions of the temporal lobes. Brain Lang. 2003, 86, 226–242.
70. Shusterman, V.; Troy, W.C. From baseline to epileptiform activity: A path to synchronized rhythmicity in large-scale neural networks. Phys. Rev. E 2008, 77, 061911.
71. Towle, V.L.; Ahmad, F.; Kohrman, M.; Hecox, K.; Chkhenkeli, S. Electrocorticographic Coherence Patterns of Epileptic Seizures. In Epilepsy as a Dynamic Disease; Milton, J., Jung, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2003.
72. Wilson, H.R.; Cowan, J.D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 1972, 12, 1–24.
73. Wilson, H.R.; Cowan, J.D. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13, 55–80.
74. Conway's Game of Life. Examples of patterns. Available online: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Examples_of_patterns (accessed on 15 November 2018).
75. Wass, S. Distortions and disconnections: Disrupted brain connectivity in autism. Brain Cognit. 2011, 75, 18–28.
76. Zhang, H.-Y.; Wang, S.-J.; Liu, B.; Ma, Z.L.; Yang, M.; Zhang, Z.J.; Teng, G.J. Resting Brain Connectivity: Changes during the Progress of Alzheimer Disease. Radiology 2010, 256, 2.
77. Supekar, K.; Menon, V.; Rubin, D.; Musen, M.; Greicius, M.D. Network Analysis of Intrinsic Functional Brain Connectivity in Alzheimer's Disease. PLoS Comput. Biol. 2011, 4, e1000100.
78. Vissers, M.E.; Cohen, M.X.; Geurts, H.M. Brain connectivity and high functioning autism: A promising path of research that needs refined models, methodological convergence, and stronger behavioral links. Neurosci. Biobehav. Rev. 2012, 36, 604–625.
79. Gardner, M. The fantastic combinations of John Conway's new solitaire game 'life'. Sci. Am. 1970, 223, 120–123.
80. Packard, N.H.; Wolfram, S. Two-Dimensional Cellular Automata. J. Stat. Phys. 1985, 38, 901–946.
81. Benayoun, M.; Cowan, J.D.; van Drongelen, W.; Wallace, E. Avalanches in a stochastic model of spiking neurons. PLoS Comput. Biol. 2010, 6, e1000846.
82. Nunomura, A.; Perry, G.; Aliev, G.; Hirai, K.; Takeda, A.; Balraj, E.K.; Jones, P.K.; Ghanbari, H.; Wataya, T.; Shimohama, S.; et al. Oxidative damage is the earliest event of Alzheimer's disease. J. Neuropathol. Exp. Neurol. 2001, 6, 759–767.
83. Bays, C. Candidates for the Game of Life in Three Dimensions. Complex Syst. 1987, 1, 373–400.
84. Kitamura, T.; Ogawa, S.K.; Roy, D.S.; Okuyama, T.; Morrissey, M.D.; Smith, L.M.; Redondo, R.L.; Tonegawa, S. Engrams and circuits crucial for systems consolidation of a memory. Science 2017, 356, 73–78.
85. Anderson, P.W. More is Different. Science 1972, 177, 393–396.

**Figure 1.**(Color online) Example of the most common stable clusters in the GoL (blinkers, blocks, beehives, etc.) and an example of a not-so-common cluster (pulsar). Green cells are active, blue cells are inactive, and red ones represent defects (see Section 5). Numbers stand for the probability of appearance [42]. Inset: glider and its movement. Active cells are represented by a black square, while inactive cells are empty. This glider moves diagonally downward and to the right by one cell every four updates.

**Figure 2.**(Color online) Density (top panel) and Activated Area (bottom panel) versus time for five different initial densities: 10% (black), 15% (red), 20% (green), 25% (blue), and 30% (magenta).

**Figure 3.**(Color online) Density × Active area vs. time for five different initial densities, from 10 to 30%, as labeled.

**Figure 4.**(Color online) Activity (as defined in Equation (1)) vs. timestep for three different initial densities (20, 30, and 40%), as labelled. Note the logarithmic scale on both axes.

**Figure 5.**(Color online) Three different simulations with random configurations of equal initial density (20%). Top panel: Active area A_A. The straight lines correspond to the traces left by gliders. Bottom panel: Spiked area A_S, i.e., neurons that have spiked at least once. From left to right, Spiking Rules R3, R4, and R5, respectively (see Section 4). Color scale in both panels is between 0 (dark blue) and 20 (red). All results correspond to time 2000.

**Figure 6.**(Color online). Velocity vs. time for five different initial densities as labelled. The green dashed curve shows the result assuming a simple random walk. Note the logarithmic scale on both axes.

**Figure 7.**(Color online) Top panel: Ratio between the activated area and the spiked area, A_A/A_S, vs. time. Bottom panel: Number of spikes divided by the total initial population (i.e., 200,000), as a percentage.

**Figure 8.**Some well-known Methuselahs. Left to right: (a) acorn, (b) r-pentomino, (c) diehard, as shown in Reference [74].

**Figure 9.**(Color online) Acorn simulation and spikes. (See Video 2). Cells that spiked at least once after 2000 steps. Color scale, as previously discussed, is limited between 0 (dark blue) and 20 (red). From left to right, Rules 3, 4, 5, and 6, respectively.

**Figure 10.**(Color online) The acorn simulation. Spiked area (top panel) and total number of spikes (bottom panel) vs. time (see Video 2) with different spiking rules, as labeled. The dashed black line on top represents the total active area A_A.

**Figure 11.**(Color online) Total number of spikes (Top panel) and averaged number of spikes per neuron (i.e., total number of spikes divided by total population, Bottom panel) vs. time for five different initial densities as labelled: 10 (black), 15 (red), 20 (green), 25 (dark blue), and 30% (magenta).

**Figure 12.**(Color online) Example of the mixing interaction of the two separated communities (up and down). The color scale corresponds to the number of timesteps that the cells have been active. See Video 3 in the supplementary material.

**Figure 13.**(Color online) Density (top panel) and Active Area A_A divided by initial area A_0 (bottom panel) vs. time for four different simulations adding randomly distributed defects to the lattice, from 0% to 3% as labelled.

**Figure 14.**(Color online) Top panel: Activated Area after 1000 steps for initial density = 15%. Bottom panel: Spiked Area applying Rule 4 (scale goes from 0 (dark blue) to 20 (red)). From left to right, the percentage of defects is 0%, 1%, 2%, and 3% as labelled.

**Figure 15.**(Color online) Mixed density vs. time for five different percentages of defects as labelled. As can be seen, the lower the number of defects, the easier it is for the two blocks to interact.

**Figure 16.**(Color online) Effect of the defects in the “transmission efficiency” of signals. Left figure shows the number of timesteps needed for the two communities to interact depending on the percentage of defects, for two different gaps, N/3, N being either 100 (blue squares) or 60 (red circles). (See videos 3 and 4). Right figure shows the time needed for the “transmission of the signal” between the regions “upper” and “lower,” depending on the initial density (in%) of the regions.

**Figure 17.**(Color online) Dependence of several properties on the percentage of defects, for initial density ρ_i = 15%. Top left: Active area A_A divided by the initial area A_0. Top right: Spiked area A_S divided by A_0. Bottom left: Equilibration time. Bottom right: Ratio #blocks/#blinkers. The straight line represents the linear fit of both results. Red squares and blue triangles are the results using open and closed boundaries, respectively.

**Figure 18.**Most common clusters in the equilibrium state of the extended GoL B789/S69. (**a**) Orthogonal glider: moves one cell forward in 5 steps, parallel or perpendicular to the grid. (**b**) Diagonal glider: moves one cell in four steps. (**c**), (**d**), and (**e**): three different stable clusters. (**f**) Blinker, period two.

**Figure 19.**(Color online) Top panel: Activated area vs. time for different initial densities as labelled. Bottom panel: Density vs. time for different initial densities. Final density is approximately 0.9%. Second NN configuration, 69/789 rule applied.

**Table 1.**The most common clusters found in the GoL. Columns: Name of object, Size (number of live cells per object), Period (0 means no period exists), Heat (number of changing cells), Influence area (all cells which are in the Moore-neighborhood of each live cell of an object).

| Name | Size | Period | Heat | Influence |
|---|---|---|---|---|
| Blinker | 3 | 2 | 4 | 21 |
| Block | 4 | 1 | - | 16 |
| Beehive | 6 | 1 | - | 26 |
| Loaf | 7 | 1 | - | 30 |
| Glider | 5 | 4 | 5 | 22 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Fraile, A.; Panagiotakis, E.; Christakis, N.; Acedo, L. Cellular Automata and Artificial Brain Dynamics. *Math. Comput. Appl.* **2018**, *23*, 75.
https://doi.org/10.3390/mca23040075
