Proceeding Paper

Interpreting the High Energy Consumption of the Brain at Rest †

by
Alejandro Chinea Manrique de Lara
Departamento de Fisica Fundamental, Facultad de Ciencias de la UNED, Paseo Senda del Rey No. 9, 28040 Madrid, Spain
Presented at the 5th International Electronic Conference on Entropy and Its Applications, 18–30 November 2019; Available online: https://ecea-5.sciforum.net/.
Proceedings 2020, 46(1), 30; https://doi.org/10.3390/ecea-5-06694
Published: 18 November 2019

Abstract

The notion that the brain has a resting-state mode of functioning has received increasing attention in recent years. The idea derives from experimental observations showing a relatively spatially and temporally uniform high level of neuronal activity when no explicit task is being performed. Surprisingly, the total energy consumption supporting neuronal firing in this conscious, awake baseline state is orders of magnitude larger than the energy changes during stimulation. This paper presents a novel and counterintuitive explanation of the high energy consumption of the brain at rest, obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally; this principle serves as the foundation from which intelligence can emerge and accounts for the efficiency of the brain’s computations. The high energy consumption of the brain at rest is shown to be the result of the energetics associated with the most probable state of a statistical physics model aimed at capturing the behavior of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands.

1. Introduction

In the last few years, there has been a growing interest from the neuroscience community in understanding spontaneous brain activity and its relation to cognition and behavior [1]. The principal idea derives from functional brain imaging observations in humans, which revealed a consistent network of brain regions showing high levels of activity when no explicit task was being performed and participants were asked simply to rest. This network of regions active during rest states was denoted the default network, and the activity within this network during rest states the default activity [2,3].
Furthermore, the claim regarding the importance of this brain “baseline” or “default” state was based on the fact that brain activity in this state (i.e., the default network) is quantitatively different from the states evoked by goal-oriented activity [4]. This state is characterized by a relatively spatially and temporally uniform high level of neuronal activity; in other words, the default network is more active at rest than it is in a range of explicit tasks. However, the most striking result is the fact that the energy consumption supporting neuronal firing in the baseline state is one to two orders of magnitude larger than the energy changes during stimulation [5]. This fact is of particular interest considering that the computations performed by brains are limited by power consumption, and that all the metabolic transformations taking place in the brain obey the laws of thermodynamics. The second law of thermodynamics, in particular, limits the thermodynamic efficiency with which information can be processed.
Having said this, it is important to emphasize that, from a thermodynamics point of view, neurons are isothermal systems that function essentially at constant temperature and pressure. In every metabolic process that takes place in neurons, there is a loss of useful energy (free energy) and an inevitable increase in the amount of unusable energy (heat and entropy). Neurons acquire energy from nutrient molecules and transform this free energy into ATP and other energy-rich compounds capable of providing energy for biological work at constant temperature (i.e., neural computations), since heat flow is not a source of energy for neurons (heat can do work only as it passes to a zone or object at a lower temperature) [6]. Neuronal computation is energetically expensive. Consequently, the brain’s limited energy supply imposes constraints on its information-processing capability. Surprisingly, naturally evolved brains are impressively efficient (e.g., the human brain is about 7 or 8 orders of magnitude more power efficient than the best silicon chips) [7]. A direct consequence of their energy efficiency is that brains can perform a huge number of operations per second. For example, the brain of the common housefly performs about $10^{11}$ operations per second when merely resting [8].
The interpretation of the greater activity of the default network during passive task states (i.e., brain activity not clearly associated with any particular stimulus or behavior) has long been a challenge for neuroscientists. For example, the high energy consumption of the brain at rest has been correlated with the state of consciousness [5], with neural resources dedicated to surviving future moments (i.e., consolidating the past, stabilizing brain ensembles, and preparing us for the future) [2,9], or simply with higher mental faculties that are unique to humans [10,11]. Nowadays, it is commonly believed that the baseline brain energy evidences a more fundamental or intrinsic property of functional brain organization that might help in understanding not only normal and pathological brain function but also differences within and across species from a comparative anatomy point of view [2,11]; however, this fundamental brain principle is still unknown.
In this paper, the focus is placed on the analysis of the high energy consumption of the brain at rest from the perspective of the intelligence and embodiment hypothesis [12,13], in order to reveal further insights that are of relevance to this intrinsic property of brain functional organization. The hypothesis postulates the existence of a common information-processing principle that is exploited in naturally evolved nervous systems and that serves as the founding pillar of intelligence. The efficiency of brain computations is explained in terms of the concept of logical reversibility, that is, the idea of adiabatic computations, since naturally evolved brains are intensively memory-based structures and memory is a necessary but not sufficient condition for the existence of logical reversibility. From an information-processing point of view, an operation is said to be logically reversible if its inputs can always be deduced from its outputs; in other words, it is an operation whose inverse is unique. A mathematical model grounded in statistical mechanics [14,15] that is suited for making quantitative predictions about the cognitive capacities of species for comparative purposes is also provided.
The rest of the paper is organized as follows. In the next section, the principal ideas behind the intelligence and embodiment hypothesis are reviewed, together with the energy model associated with the statistical mechanics formulation of the hypothesis. The emphasis is put on two properties of the model: firstly, the random structures associated with the cognitive architecture of the hypothesis model, with a certain level of abstraction, the patterns of functional connectivity that are observed between regions of the cerebral cortex and its homologues; secondly, the observed energetic efficiency of naturally evolved brains is explicitly captured in the energy model of the hypothesis. Afterwards, taking these considerations into account, the properties of these random structures are studied using the connectivity between brain representations as a parameter. This analysis, in combination with basic probability inequalities, is devoted to the precise characterization of properties of these random structures that hold with high probability, such as their most probable state. Finally, a summary of the present study with some concluding remarks is presented.

2. Materials and Methods

2.1. Hypothesis Background

The intelligence and embodiment hypothesis [12,13] subscribes to the idea of cognition as an evolutionary adaptational redeployment of movement control [16]. This term is used to express, from an evolutionary perspective, the idea that the extensive neuronal machinery developed to control animal movement was expanded to control new brain structures instead of muscles. This information-processing principle, common to the nervous systems that evolved in nature, is postulated to be the basis for the emergence of intelligence. Furthermore, the hypothesis associates this information-processing principle with the cerebral cortex and its homologues (i.e., functionally similar cerebral structures in other species), in other words, the neurobiological structures that emerged as a result of the aforementioned evolutionary adaptational redeployment of movement control.
From a computational point of view, the hypothesis describes this information-processing principle in terms of combinatorial operations that are carried out over brain representations (Figure 1 and Figure 2). Brain representations are assumed to be physically encoded by neuron assemblies (i.e., as states of attractor networks); in other words, a piece of information is represented as the simultaneous activity of a group of neurons [16]. This neural computational mechanism is denoted as movement primitives to emphasize the fact that it is inherited from primitive animals as a result of the phylogenetic conservation principle. These movement primitives are postulated as the underlying substrate that is common to any cognitive function. They put forward the idea of an information-processing mechanism designed to combine sensorimotor representations into rule-conforming higher-order brain representations. Such higher-order, more complex representations are postulated to be the basis for context-independent representations in a hierarchically organized information-processing system.
A cognitive architecture is defined to capture the aforementioned principles (Figure 1), as well as some aspects of the structural (e.g., physiologic) and functional (e.g., dynamic) characteristics of these cerebral structures with a certain level of abstraction. The cognitive architecture consists of a hierarchical symbol-based structure of varying size and complexity, where the symbols act as embodiments of brain representations. These hierarchical symbol-based structures are used to account for certain neurobiological differences between species, and especially to model the fact that progressive functional abstraction over network depth is a fundamental feature of brains [17]; that is, sensorimotor representations are assumed to be encoded by leaf symbols, whereas deep knowledge representations are encoded by symbols belonging to levels close to the root of the tree. Furthermore, the interactions between symbols in the cognitive architecture are grounded in the concept of movement primitives. Specifically, they are modeled in terms of permutation operations between symbols (circular arrows in Figure 1 and Figure 2).
The hypothesis is accompanied by a mathematical model grounded in statistical mechanics that is used to analyze the information-processing capacities of the described architecture. The energy model takes into consideration that computations performed by nervous systems are limited by power consumption and that naturally evolved brains are impressively efficient. Furthermore, the main innovative contribution of the model is the analysis of the cognitive architecture in terms of its free energy and entropy. The expectation of the entropy is used as an approximate measure of the concept of intelligence, whereas the expectation of the free energy is used to analyze the computational properties that emerge at a macroscopic scale as a result of the defined interactions between brain representations, providing, at the same time, a quantitative measure of the metabolic costs incurred by cognitive operations in the cerebral structures studied.
In summary, the hypothesis provides some answers to the efficiency of the brain’s computations and to the quantitative discontinuities in cognitive capacities across species, while simultaneously explaining the similarities in the intelligence levels that are observed.

2.2. The Energy Model

The structure of Figure 1a has three information-processing levels ($L_1$, $L_2$, and $L_3$). The first level is the root of the tree, which contains just one multi-state variable, $s_{1,1}$, that can be in two states, as indicated by the branching factor (i.e., the connectivity) $n_1 = 2$. The second level has two multi-state variables ($s_{2,1}$ and $s_{2,2}$), and their number of states is $n_2 = 3$. Finally, the third level has six multi-state variables that can be found in four states ($n_3 = 4$). Each multi-state variable $s_{i,j}$ is denoted by two indices: the first, $i$, denotes the information-processing level, and the second, $j$, its location within the level.
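To make this bookkeeping concrete, the following short Python sketch (an illustration added here, not code from the paper; the names `branching` and `variables_per_level` are chosen for convenience) reproduces the level structure of Figure 1a and counts the multi-state variables per level.

```python
# Illustrative sketch (not from the paper): enumerate the multi-state
# variables of the three-level structure of Figure 1a, whose branching
# factors are n1 = 2, n2 = 3, n3 = 4.

branching = [2, 3, 4]  # n_1, n_2, n_3

def variables_per_level(branching):
    """Number of multi-state variables s_{i,j} at each level: level 1 holds
    the single root variable; level i + 1 holds one variable per child
    spawned at level i."""
    counts = [1]
    for n_i in branching[:-1]:
        counts.append(counts[-1] * n_i)
    return counts

for level, (count, n_i) in enumerate(zip(variables_per_level(branching),
                                         branching), start=1):
    print(f"level {level}: {count} multi-state variable(s), "
          f"each with {n_i} states")
# Output: level 1 has 1 variable (2 states), level 2 has 2 (3 states),
# level 3 has 6 (4 states), matching the description of Figure 1a.
```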
Each state is represented by an integer value that is interpreted as an energetic or metabolic cost: the higher the energy of the state, the higher its value. The resting-state configuration has an energy value equal to 1 and is represented by the fixed-point permutation, that is, the sub-index associated with each of its symbols is in concordance with its relative position (e.g., $b_1 b_2$ or $c_1 c_2 c_3$). Furthermore, the energy states of the multi-state variables are degenerate, since the number of symbol configurations is larger than the cardinality of the set of states.
$$A_{n,k} = \begin{cases} 1, & k = 1 \\ \binom{n}{k} \displaystyle\sum_{j=2}^{k} \frac{(-1)^j \, k!}{j!}, & 1 < k \le n \end{cases} \qquad (1)$$
The mechanism of assignment of symbol configurations (remember that symbols act as the embodiments of brain representations) is based on the idea that moving the system away from its resting state has an associated energy cost (see the rightmost part of Figure 1). The cost is defined in terms of the distance between the local configuration of the symbols and their corresponding resting state represented by the fixed-point permutation. More specifically, the distance function counts the number of discrepancies of symbol positions with respect to the fixed-point permutation. For example, for the configuration $c_1 c_3 c_2$ there are two discrepancies (symbol $c_3$ is in position 2, and symbol $c_2$ in position 3), and the corresponding energetic cost is 2. Similarly, there are also two discrepancies for the configuration $c_2 c_1 c_3$, while there are three for $c_3 c_1 c_2$, and the corresponding energetic cost is 3 (Figure 1b).
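The energetic cost just described reduces to a simple counting rule. The following sketch (illustrative only; the helper name `energy_cost` is not from the paper) counts position discrepancies against the fixed-point permutation and assigns the resting configuration an energy of 1, reproducing the examples above.

```python
# Illustrative sketch: energy of a symbol configuration as the number of
# discrepancies with respect to the fixed-point (resting) permutation.
# The resting configuration itself is assigned energy 1, as in the text.

def energy_cost(config):
    """config is a tuple such as (1, 3, 2), standing for c1 c3 c2."""
    resting = tuple(range(1, len(config) + 1))      # fixed-point permutation
    discrepancies = sum(a != b for a, b in zip(config, resting))
    return 1 if discrepancies == 0 else discrepancies

print(energy_cost((1, 2, 3)))  # resting state c1 c2 c3 -> 1
print(energy_cost((1, 3, 2)))  # c1 c3 c2 -> 2 discrepancies -> 2
print(energy_cost((2, 1, 3)))  # c2 c1 c3 -> 2 discrepancies -> 2
print(energy_cost((3, 1, 2)))  # c3 c1 c2 -> 3 discrepancies -> 3
```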
$$1 + \sum_{k=2}^{n} A_{n,k} = 1 + \sum_{k=2}^{n} \binom{n}{k} A_{k,k} = n! \qquad (2)$$
Equation (1) expresses the degeneracy associated with each energy state, that is, the number of symbol configurations leading to the same energetic cost for a subtree with $n$ nodes (connectivity or branching factor). Specifically, the coefficient $A_{n,k}$ accounts for the number of symbol configurations leading to the energy state $k$ for a subtree of the structure with connectivity equal to $n$. The equality (2) holds for the coefficients $A_{n,k}$.
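As a quick sanity check on Equations (1) and (2), the sketch below (again illustrative, assuming the discrepancy-based energy described above) tallies the energies of all $n!$ symbol configurations by brute force and compares them with the closed-form coefficients $A_{n,k}$.

```python
# Illustrative check of Equations (1) and (2): count, for each energy k,
# the permutations of n symbols with exactly k discrepancies (the identity
# is booked under k = 1, as in the text).
from itertools import permutations
from math import comb, factorial

def degeneracies(n):
    counts = {}
    identity = tuple(range(n))
    for perm in permutations(identity):
        discrepancies = sum(a != b for a, b in zip(perm, identity))
        k = 1 if discrepancies == 0 else discrepancies
        counts[k] = counts.get(k, 0) + 1
    return counts

def A(n, k):
    """Closed form of Equation (1)."""
    if k == 1:
        return 1
    return comb(n, k) * sum((-1) ** j * factorial(k) // factorial(j)
                            for j in range(2, k + 1))

for n in range(2, 7):
    brute = degeneracies(n)
    closed = {k: A(n, k) for k in range(1, n + 1)}
    assert brute == closed                       # Equation (1)
    assert sum(brute.values()) == factorial(n)   # Equation (2)
    print(n, brute)
```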
In statistical mechanics, every microscopic configuration $C$ (i.e., a symbol configuration in this case) is assigned a probability $p(C)$ which depends on its energy $H(C)$ and is given by the Boltzmann–Gibbs distribution. The partition function is, by definition, the normalization factor of the distribution and can be interpreted as a generating function of energies:
$$Z = \sum_{C} e^{-\beta H(C)} = \sum_{n} N(n)\, e^{-\beta n} \qquad (3)$$
The energy function (or Hamiltonian) $H(C)$ is defined in terms of the particular arrangement of symbols over the entire structure; more specifically, it is a function of the states of the multi-state variables belonging to each of the levels comprising the structure. Furthermore, the energy model is additive, and it reflects the knowledge that neural computations are limited by power consumption. Indeed, under the statistical mechanics formulation, the Boltzmann–Gibbs distribution tells us that the physical system prefers to visit low-energy states more than high-energy states.
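To illustrate how Equation (3) is assembled from the degeneracies of the previous subsection, the following sketch (illustrative; the value of the inverse temperature $\beta$ is arbitrary and only serves to show the mechanics) computes $Z$ and the resulting Boltzmann–Gibbs probabilities of the energy levels for a single subtree.

```python
# Illustrative sketch: partition function of Equation (3) for one subtree
# with connectivity n, using the degeneracies A_{n,k} as N(k).  The value
# of beta is arbitrary and only serves to show the mechanics.
from math import comb, exp, factorial

def A(n, k):
    if k == 1:
        return 1
    return comb(n, k) * sum((-1) ** j * factorial(k) // factorial(j)
                            for j in range(2, k + 1))

def level_probabilities(n, beta):
    weights = {k: A(n, k) * exp(-beta * k) for k in range(1, n + 1)}
    Z = sum(weights.values())            # partition function, Equation (3)
    return {k: w / Z for k, w in weights.items()}

for beta in (0.0, 0.5, 2.0):
    probs = level_probabilities(4, beta)
    print(f"beta = {beta}:", {k: round(p, 3) for k, p in probs.items()})
# Each configuration is weighted by exp(-beta * energy): larger beta shifts
# probability mass toward the low-energy (resting-like) states, while the
# degeneracy of the high-energy states pulls mass the other way.
```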
Finally, it is important to emphasize that the energy model is grounded on the concept of movement primitives, that is, the underlying substrate common to any cognitive function according to the hypothesis. In other words, cognitive processes are learnt, stored, and recalled just like movement processes.

3. Results

The patterns of functional connectivity experimentally observed between cortical regions undergo spontaneous fluctuations and are highly responsive to perturbations (e.g., sensory input or a cognitive task) on a time scale of hundreds of milliseconds. However, these rapid reconfigurations do not affect the stability of global topological characteristics. In other words, on longer time scales of seconds to minutes (i.e., in equilibrium from a statistical mechanics point of view), correlations between spontaneous fluctuations in brain activity form functional networks that are particularly robust [19].
The random structures associated with the cognitive architecture of the hypothesis model, with a certain level of abstraction, the patterns of functional connectivity that are observed between regions of the cerebral cortex and its homologues. However, it is important to emphasize that the theoretical model associated with the hypothesis [12,13] belongs to the field of equilibrium statistical mechanics. Specifically, instead of studying the exact evolution in time of the symbolic structures associated with the cognitive architecture, average quantities were used, since the focus was put on the typical behavior of the system (e.g., the average information content of the symbolic structures) or on its behavior under small fluctuations around that typical behavior.
Having said this, it is important to note that the baseline is defined according to quantitative measurements of brain circulation and metabolism. More specifically, the physiological baseline of the brain is defined as the absence of activation, for example, lying quietly in a scanner with eyes closed, although awake [3]. In contrast, activation is defined in the context of functional imaging as an increase in blood flow that is not accompanied by a commensurate increase in oxygen consumption. As a result, less oxygen is extracted from the blood, leading to an increase in the local blood oxygen content in the brain. In other words, activation corresponds to an engaged period of mental concentration, for example, solving arithmetic problems.
Therefore, it is plausible to assume that the baseline state represents an equilibrium state (or a state close to equilibrium) from a statistical mechanics point of view. Taking these considerations into account, in the following the emphasis is put on the analysis of the properties that are present with high regularity in the random structures of the model from an equilibrium point of view. The subtrees modeled as multi-state variables represent the fundamental building block of the random structures presented before; thus, the aforementioned analysis is simply a refined counting problem with two parameters, namely, the number of energy states associated with the subtree and the connectivity between brain representations (i.e., the number of children of the root node of the subtree).

3.1. Most Probable State Analysis

Subtrees are the fundamental building block of the random structures associated with the cognitive architecture of the hypothesis. Taking into consideration that Equation (1) expresses the degeneracy associated with each energy state, that is, the number of symbol configurations leading to the same energetic cost for a subtree with $n$ nodes (i.e., the connectivity between brain representations, or branching factor), the following bivariate enumeration (see Appendix A for the formal definition):
$$A(z,u) = \sum_{n \ge 0} \sum_{k \ge 0} A_{n,k}\, u^k \frac{z^n}{n!} = \frac{e^{z(1-u)}}{1 - uz} - e^{z}(1-u) \qquad (4)$$
can be interpreted as a generating function of energies associated with any subtree of the random structures of the cognitive architecture. One of the principal goals of multivariate enumeration [20] is the quantification of properties present with high regularity in large random structures. More specifically, the complex variable $z$ is attached to $n$, representing the connectivity between brain representations (remember that symbols act as the embodiments of brain representations), whereas the complex variable $u$ is attached to the variable $k$, representing the energetic (or metabolic) cost of the state.
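The closed form on the right-hand side of (4) can be checked symbolically. The sketch below (an illustrative verification written with SymPy, not code from the paper) expands the bivariate generating function and confirms that $n!\,[u^k z^n]\,A(z,u)$ reproduces the degeneracies $A_{n,k}$ of Equation (1).

```python
# Illustrative SymPy check of Equation (4): expanding
# A(z, u) = exp(z*(1-u))/(1 - u*z) - exp(z)*(1 - u)
# and reading off n! * [u^k z^n] should give the degeneracies A_{n,k}.
import sympy as sp
from math import comb, factorial

z, u = sp.symbols('z u')
bgf = sp.exp(z * (1 - u)) / (1 - u * z) - sp.exp(z) * (1 - u)

def A(n, k):
    if k == 1:
        return 1
    return comb(n, k) * sum((-1) ** j * factorial(k) // factorial(j)
                            for j in range(2, k + 1))

N = 5
series = sp.series(bgf, z, 0, N + 1).removeO().expand()
for n in range(2, N + 1):
    poly_in_u = sp.Poly(series.coeff(z, n) * sp.factorial(n), u)
    extracted = {k: int(poly_in_u.coeff_monomial(u ** k))
                 for k in range(1, n + 1)}
    expected = {k: A(n, k) for k in range(1, n + 1)}
    assert extracted == expected, (n, extracted, expected)
    print(n, extracted)
```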
Moreover, the bivariate generating function (4) can be interpreted in terms of a probabilistic model (see the formal definition in Appendix A). Specifically, considering a combinatorial class $\mathcal{A}$, if the probabilistic model is based on the uniform distribution, then the uniform probability distribution over the subclass $\mathcal{A}_n$ assigns to any $\alpha \in \mathcal{A}_n$ a probability equal to $1/A_n$ and is denoted as $P_{\mathcal{A}_n}$ (i.e., probability relative to the uniform distribution over $\mathcal{A}_n$). The parameter $\chi$ determines over each $\mathcal{A}_n$ a discrete random variable defined over the discrete probability space $\mathcal{A}_n$:
$$P_{\mathcal{A}_n}(\chi = k) = \frac{A_{n,k}}{A_n} = \frac{A_{n,k}}{\sum_{k} A_{n,k}} \qquad (5)$$
Taking into consideration the definition of a probability generating function, one has immediately (see Appendix A for the notation and definitions):
$$\sum_{k} P_{\mathcal{A}_n}(\chi = k)\, u^k = \frac{[z^n]\, A(z,u)}{[z^n]\, A(z,1)} \qquad (6)$$
The connectivity between brain representations may be assumed to be uniformly distributed; however, it is important to emphasize that, independently of the probabilistic model used, the bivariate generating function (4) permits us to calculate the factorial moments of the probability distribution of the model. In particular, the first- and second-order moments read, respectively (see the formal definitions in Appendix A):
$$\mathbb{E}_{\mathcal{A}_n}(\chi) = \frac{[z^n]\, \left.\frac{\partial A(z,u)}{\partial u}\right|_{u=1}}{[z^n]\, A(z,1)} \qquad (7)$$
$$\mathbb{E}_{\mathcal{A}_n}(\chi^2) = \frac{[z^n]\, \left.\frac{\partial^2 A(z,u)}{\partial u^2}\right|_{u=1}}{[z^n]\, A(z,1)} + \frac{[z^n]\, \left.\frac{\partial A(z,u)}{\partial u}\right|_{u=1}}{[z^n]\, A(z,1)} \qquad (8)$$
Using expressions (7) and (8), the first- and second-order moments finally read (see Appendix A for the full derivation):
$$\mathbb{E}_{\mathcal{A}_n}(\chi) = \frac{[z^n]\, e^{z} + [z^n]\, \frac{z^2}{(1-z)^2}}{[z^n]\, \frac{1}{1-z}} = n - 1 + \frac{1}{n!} \simeq n - 1 \qquad (9)$$
$$\mathbb{E}_{\mathcal{A}_n}(\chi^2) = \frac{[z^n]\, \frac{2z^2}{(1-z)^3} - [z^n]\, \frac{2z^2}{(1-z)^2} + [z^n]\, \frac{z^2}{1-z}}{[z^n]\, \frac{1}{1-z}} + \frac{[z^n]\, e^{z} + [z^n]\, \frac{z^2}{(1-z)^2}}{[z^n]\, \frac{1}{1-z}} = n^2 - 3n + 3 + \left( n - 1 + \frac{1}{n!} \right) \qquad (10)$$
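These closed forms are easy to confirm numerically. The sketch below (illustrative) computes the first two moments of $\chi$ directly from the uniform distribution over the $n!$ configurations of a subtree and compares them with the expressions (9) and (10).

```python
# Illustrative check of the moment formulas: compute E(chi) and E(chi^2)
# directly from the uniform distribution over all n! configurations and
# compare with the closed forms n - 1 + 1/n! and n^2 - 3n + 3 + (n - 1 + 1/n!).
from itertools import permutations
from math import factorial, isclose

def energy(perm):
    discrepancies = sum(i != p for i, p in enumerate(perm))
    return 1 if discrepancies == 0 else discrepancies

for n in range(2, 8):
    energies = [energy(p) for p in permutations(range(n))]
    m1 = sum(energies) / factorial(n)
    m2 = sum(e * e for e in energies) / factorial(n)
    m1_closed = n - 1 + 1 / factorial(n)
    m2_closed = n * n - 3 * n + 3 + (n - 1 + 1 / factorial(n))
    assert isclose(m1, m1_closed) and isclose(m2, m2_closed)
    print(f"n = {n}: E(chi) = {m1:.4f}, E(chi^2) = {m2:.4f}")
```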
From the translation of the bivariate generating function (4) into the language of probability, it can be deduced that the first moment of the distribution provides information about the most probable state in which a subtree of size $n$ can be found in equilibrium (or close-to-equilibrium) conditions according to the energy model of the hypothesis.
Thus, taking into consideration that the label of each state is in direct correspondence with its associated energetic cost, the most probable state of a subtree in equilibrium conditions is a high-energy state, in other words, a metabolically demanding state according to the model.
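A direct enumeration also shows where the probability mass actually sits. The following sketch (illustrative) prints the distribution $P(\chi = k)$ for a few small values of $n$; in each case the mode lies at the top of the energy range (at $k = n$ or $k = n - 1$, depending on the parity of $n$), consistent with the claim that the most probable state is a metabolically demanding one.

```python
# Illustrative sketch: the distribution P(chi = k) = A_{n,k} / n! over the
# energy states of a subtree, and its mode, for small connectivities n.
from itertools import permutations
from math import factorial

def distribution(n):
    counts = {}
    for perm in permutations(range(n)):
        d = sum(i != p for i, p in enumerate(perm))
        k = 1 if d == 0 else d
        counts[k] = counts.get(k, 0) + 1
    return {k: c / factorial(n) for k, c in sorted(counts.items())}

for n in range(3, 8):
    probs = distribution(n)
    mode = max(probs, key=probs.get)
    print(f"n = {n}: mode at k = {mode} (maximum energy {n}),",
          {k: round(p, 3) for k, p in probs.items()})
```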
The calculation of the most probable state for the Cayley-like structures of the cognitive architecture of the hypothesis is mathematically intractable; however, as a result of their hierarchical organization, it is plausible to assume some form of conditional independence between the probability distributions associated with the subtrees forming the structure (i.e., they can be factorized), so that in equilibrium conditions the most probable state of these structures is also a high-energy state.
Finally, from the expressions of the factorial moments of order one and two, the variance of the distribution reads:
$$\mathbb{V}_{\mathcal{A}_n}(\chi) = \mathbb{E}_{\mathcal{A}_n}(\chi^2) - \mathbb{E}_{\mathcal{A}_n}(\chi)^2 = n^2 - 3n + 3 + \left( n - 1 + \frac{1}{n!} \right) - \left( n - 1 + \frac{1}{n!} \right)^2 = 1 + \frac{3 - 2n}{n!} - \frac{1}{(n!)^2} \simeq 1, \quad n \gg 1 \qquad (11)$$
which means that as the connectivity between brain representations increases (i.e., as $n$ increases), the dispersion of the distribution around its average value rapidly tends to 1. Specifically, the distribution is concentrated around its average value, which means that in equilibrium conditions the probability mass is concentrated around the highest-energy states. Indeed, denoting $\mu_n = \mathbb{E}_{\mathcal{A}_n}(\chi)$ and $\sigma_n = \sqrt{\mathbb{V}_{\mathcal{A}_n}(\chi)}$, the proof of the concentration of the distribution is straightforward by calculating the following limit (see Definition A8 in Appendix A):
$$\lim_{n \to +\infty} \frac{\sigma_n}{\mu_n} = \lim_{n \to +\infty} \frac{\sqrt{1 + \frac{3}{n!} - \frac{2n}{n!} - \frac{1}{(n!)^2}}}{n - 1 + \frac{1}{n!}} = 0 \qquad (12)$$
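Numerically, the ratio $\sigma_n / \mu_n$ decays quickly, as the limit above indicates. The short sketch below (illustrative, using the closed-form mean and variance derived above) evaluates it for increasing $n$.

```python
# Illustrative sketch: the ratio sigma_n / mu_n from the closed-form mean
# and variance, showing the concentration of the distribution as n grows.
from math import factorial, sqrt

for n in range(3, 11):
    mu = n - 1 + 1 / factorial(n)
    var = 1 + (3 - 2 * n) / factorial(n) - 1 / factorial(n) ** 2
    print(f"n = {n:2d}: sigma/mu = {sqrt(var) / mu:.4f}")
# The ratio behaves roughly like 1/(n - 1) and tends to 0, so the probability
# mass concentrates around the mean, i.e., around the highest-energy states.
```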

4. Discussion

The ability to utilize energy and to channel it into biological work is a fundamental property of all living organisms, one that must have been acquired very early in cellular evolution [6]. In particular, neurons acquire free energy from nutrient molecules and transform this free energy into ATP and other energy-rich compounds capable of providing energy for biological work. From a thermodynamics point of view, neurons are isothermal systems that function essentially at constant temperature and pressure. In every metabolic process that takes place in neurons, there is a loss of useful energy (free energy) and an inevitable increase in the amount of unusable energy (heat and entropy). Neuronal computation is energetically expensive, and the brain’s limited energy supply imposes constraints on its information-processing capability; nevertheless, naturally evolved brains are impressively energy efficient. Thus, as is currently commonly agreed, neurons, neural codes, and neural circuits have evolved to reduce metabolic demands [21,22,23].
The energy model of the hypothesis was aimed at capturing the energy efficiency considerations that appear to have shaped naturally evolved brains, while at the same time being grounded in the concept of movement primitives [13]. In particular, it embodies the idea of an information-processing mechanism designed to combine discrete meaningful units of information (i.e., symbols in the cognitive architecture) into rule-conforming higher-order structures in a hierarchically organized information-processing system (Figure 1).
Surprisingly, the present study shows that in equilibrium (or near-equilibrium) conditions a physical system built according to these principles is found in a state of high energy consumption, as has been empirically reported for real brains. This apparently counterintuitive result can be explained by considering that, from a behavioral perspective, the conditions elicited by resting and/or passive states represent states of lower probability with respect to the universe of discourse of states of the vertebrate brain. Perhaps this is why the measured energy consumption when the brain is performing a cognitive task (i.e., when the system is out of equilibrium) is just a fraction of the energy consumption at rest: most of the time the brain is engaged in a wide range of active tasks, and its efficiency has been evolutionarily optimized to do so.
Summarizing, according to the present study, the structured activity patterns that exist during passive task states evidence a fundamental or intrinsic property of functional brain organization that is the result of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands. Furthermore, this fundamental property, which appears to be an evolutionarily conserved aspect of brain functional organization that transcends levels of consciousness [11], would increase the likelihood of the movement primitives being the underlying substrate of such functional organization. It is important to remember that, according to the phylogenetic conservation principle [24], many aspects of brain structure and function are conserved across species, and the degree of conservation is highest at the lowest levels of organization.

4.1. Implications for Neuroscience Research

Brain representations are physically encoded by neuron assemblies (i.e., as states of attractor networks); that is, a piece of information is represented as the simultaneous activity of a group of neurons, and their physical state transitions execute computations [12]. Furthermore, brain representations present a hierarchical organization, and they are distributed throughout the entire cerebral cortex (and its homologues in other vertebrate species). The key point, however, is that as a result of the constraints imposed by structural connectivity on functional interactions, topological parameters are generally conserved between structural and functional networks [19]. Thus, from the point of view of the hypothesis, a higher connectivity between brain representations is assumed to be accompanied not only by a higher structural connectivity but also by higher energetic requirements.
Moreover, higher-order representations include perception and action representations (i.e., lower-order representations); that is, sensory and motor systems are directly modulated by higher-order processing. For example, the prefrontal cortex is an area of utmost importance for human evolution because it mediates complex behaviours such as planning, working memory, memory for serial order and temporal information, aspects of language, and social information processing, thus making it a prominent region for encoding higher-order brain representations, since it is the principal actor behind human intelligence. In contrast, cortical regions such as the primary auditory cortex, the primary visual cortex, or the primary motor cortex are assumed to be regions encoding lower-order representations.
Alzheimer’s disease is characterised by a loss of neurons and synapses in the cerebral cortex and certain subcortical regions, resulting in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus, among others. Thus, taking into consideration that memory loss is the predominant symptom, it is plausible to hypothesize a reduced connectivity in the default network (or even a disruption of the connectivity between regions of this network, depending on the stage of the disease), especially in those regions that are activated by episodic memory retrieval tasks (e.g., the posterior cingulate cortex).
In summary, rest activity patterns may be central not only for understanding brain function, but also for understanding individual differences in functional anatomy, exploring functional homologies across species, and better comprehending diseases of the mind.

5. Conclusions

Theoretical approaches to understanding the dynamics of neural activity during the resting state are believed to be of vital importance for understanding the basic neurophysiological mechanisms that operate in the baseline. In this paper, an interpretation of the high energy consumption of the brain at rest has been provided using the intelligence and embodiment hypothesis as the founding pillar. The structured activity patterns that exist during passive task states are shown to evidence a fundamental property of functional brain organization grounded in the concept of movement primitives, one that is the result of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands.
In summary, the most important implication of the present study is to put forward the importance of the information processing associated with rest-state activity in ways not conventionally considered, thus paving the way for future research at other levels of neuroscience, such as understanding individual differences in functional anatomy, cognition, or the diseases of the mind.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ATP: Adenosine Triphosphate

Appendix A. Definitions and Notations

The goal of this appendix is to provide the definitions of the mathematical concepts and notations used throughout this paper together with the derivations of the most important mathematical expressions.
Definition A1. 
(Combinatorial class). A combinatorial class is a finite or denumerable set on which a size function is defined, satisfying the following conditions: (1) the size of an element is a non-negative integer; (2) the number of elements of any given size is finite. If $\mathcal{A}$ is a class, the size of an element $\alpha \in \mathcal{A}$ is denoted $|\alpha|$.
Definition A2. 
(Counting sequence). The counting sequence of a combinatorial class $\mathcal{A}$ is the sequence of integers $(A_n)_{n \ge 0}$, where $A_n$ is the number of objects in class $\mathcal{A}$ that have size $n$.
Definition A3. 
(Generating function). The generating function of a sequence $A_n$ is the formal power series below, where $w_n = 1$ in the ordinary case and $w_n = n!$ in the exponential case:
$$A(u) = \sum_{n \ge 0} A_n \frac{u^n}{w_n} = \sum_{\alpha \in \mathcal{A}} \frac{u^{|\alpha|}}{w_{|\alpha|}} \qquad (\mathrm{A}1)$$
Definition A4. 
(Coefficient extraction). We generally let $A_n = w_n [u^n] A(u)$ denote the operation of extracting the coefficient of $u^n$ in the formal power series (A1).
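As a small illustration of Definitions A3 and A4 (written with SymPy; not part of the original paper), the sketch below recovers the counting sequence of permutations, $A_n = n!$, from its exponential generating function $1/(1-u)$.

```python
# Illustrative sketch of coefficient extraction (Definitions A3 and A4):
# for the class of permutations, the exponential generating function is
# 1/(1 - u), and A_n = n! * [u^n] A(u) recovers the counting sequence n!.
import sympy as sp

u = sp.symbols('u')
egf = 1 / (1 - u)                     # EGF of permutations
for n in range(6):
    coeff = sp.series(egf, u, 0, n + 1).removeO().coeff(u, n)
    A_n = sp.factorial(n) * coeff     # w_n = n! in the exponential case
    print(n, A_n)                     # prints 1, 1, 2, 6, 24, 120
```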
Definition A5. 
(Bivariate generating function). Given a combinatorial class $\mathcal{A}$, a (scalar) parameter is a function $\chi$ from $\mathcal{A}$ to $\mathbb{Z}_{\ge 0}$ that associates with any object $\alpha \in \mathcal{A}$ an integer value $\chi(\alpha)$. The sequence:
$$A_{n,k} = \operatorname{card}\left\{ \alpha \in \mathcal{A} : |\alpha| = n,\ \chi(\alpha) = k \right\},$$
is called the counting sequence of the pair $\langle \mathcal{A}, \chi \rangle$. The bivariate generating function of $\langle \mathcal{A}, \chi \rangle$ is defined as:
$$A(z,u) = \sum_{n \ge 0} \sum_{k \ge 0} A_{n,k}\, u^k \frac{z^n}{\omega_n}$$
and is ordinary if $\omega_n = 1$ and exponential if $\omega_n = n!$. One says that the variable $z$ marks the size and the variable $u$ marks the parameter $\chi$.
Definition A6. 
(PGFs from BGFs). Let $A(z,u)$ be the bivariate generating function of a parameter $\chi$ defined over a combinatorial class $\mathcal{A}$. The probability generating function of $\chi$ over $\mathcal{A}_n$ is given by:
$$\sum_{k} P_{\mathcal{A}_n}(\chi = k)\, u^k = \frac{[z^n]\, A(z,u)}{[z^n]\, A(z,1)}$$
and is thus a normalized version of a horizontal generating function.
Definition A7. 
(Moments from BGFs). The factorial moment of order $r$ of a parameter $\chi$ is determined from the BGF $A(z,u)$ by $r$-fold differentiation with respect to $u$, followed by evaluation at $u = 1$:
$$\mathbb{E}_{\mathcal{A}_n}\left[ \chi (\chi - 1) \cdots (\chi - r + 1) \right] = \frac{[z^n]\, \left.\partial_u^{r} A(z,u)\right|_{u=1}}{[z^n]\, A(z,1)}$$
Definition A8. 
(Concentration of distribution). Consider a family of random variables $\chi_n$, typically a scalar parameter $\chi$ on the subclass $\mathcal{A}_n$. Assume that the means $\mu_n = \mathbb{E}(\chi_n)$ and the standard deviations $\sigma_n = \sigma(\chi_n)$ satisfy the condition:
$$\lim_{n \to +\infty} \frac{\sigma_n}{\mu_n} = 0$$
Then, as a direct consequence of Chebyshev’s inequality, the distribution of $\chi_n$ is concentrated in the sense that, for any $\epsilon > 0$, there holds:
$$\lim_{n \to +\infty} P\left\{ 1 - \epsilon \le \frac{\chi_n}{\mu_n} \le 1 + \epsilon \right\} = 1$$

Appendix B. Derivation of Equations

The series expansions used to obtain the coefficients of the factorial moments of Equations (7) and (8) are the following:
$$p(z) = \frac{z^j}{(1-z)^j} = z^j \left( 1 + jz + \frac{j(j+1)}{2!} z^2 + \frac{j(j+1)(j+2)}{3!} z^3 + \cdots \right) = z^j + j z^{j+1} + \frac{j(j+1)}{2!} z^{j+2} + \cdots + \frac{j(j+1)(j+2)\cdots(j+k-1)}{k!} z^{j+k} + \cdots$$
Thus, the coefficients associated with the expansion of the function $p(z)$ take the form:
$$p_n = [z^n]\, p(z) = \binom{n-1}{j-1}$$
Similarly, defining the functions $h(z)$ and $g(z)$, respectively, as follows:
$$h(z) = \left. \frac{\partial A(z,u)}{\partial u} \right|_{u=1} = e^{z} + \frac{z^2}{(1-z)^2} = 1 + z + \frac{3 z^2}{2} + \frac{13 z^3}{6} + \frac{73 z^4}{24} + \frac{481 z^5}{120} + \frac{3601 z^6}{720} + \frac{30241 z^7}{5040} + \cdots$$
$$g(z) = \left. \frac{\partial^2 A(z,u)}{\partial u^2} \right|_{u=1} = \frac{2 z^2}{(1-z)^3} - \frac{2 z^2}{(1-z)^2} + \frac{z^2}{1-z} = z^2 + 3 z^3 + 7 z^4 + 13 z^5 + 21 z^6 + 31 z^7 + 43 z^8 + 57 z^9 + \cdots$$
one finally gets that the coefficients $h_n$ and $g_n$ take the form:
$$h_n = [z^n]\, h(z) = n - 1 + \frac{1}{n!}$$
$$g_n = [z^n]\, g(z) = n^2 - 3n + 3$$
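These coefficient formulas can be verified symbolically. The sketch below (illustrative, using SymPy) expands $h(z)$ and $g(z)$ and checks the coefficients against $n - 1 + 1/n!$ and $n^2 - 3n + 3$.

```python
# Illustrative SymPy check of the Appendix B coefficients h_n and g_n.
import sympy as sp

z = sp.symbols('z')
h = sp.exp(z) + z**2 / (1 - z)**2
g = 2 * z**2 / (1 - z)**3 - 2 * z**2 / (1 - z)**2 + z**2 / (1 - z)

N = 9
h_series = sp.series(h, z, 0, N + 1).removeO()
g_series = sp.series(g, z, 0, N + 1).removeO()
for n in range(2, N + 1):
    h_n = h_series.coeff(z, n)
    g_n = g_series.coeff(z, n)
    assert h_n == n - 1 + 1 / sp.factorial(n)   # h_n = n - 1 + 1/n!
    assert g_n == n**2 - 3 * n + 3              # g_n = n^2 - 3n + 3
    print(n, h_n, g_n)
```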

References

  1. Boly, M.; Phillips, C.; Tshibanda, L.; Vanhaudenhuyse, A.; Schabus, M.; Dang-Vu, T.T.; Moonen, G.; Hustinx, R.; Maquet, P.; Laureys, S. Intrinsic Brain Activity in Altered States of Consciousness How Conscious is the Default Mode of Brain Function? Ann. N. Y. Acad. Sci. 2008, 1129, 119–129. [Google Scholar] [CrossRef] [PubMed]
  2. Buckner, R.L.; Vincent, J.L. Unrest at rest: Default activity and spontaneous network correlations. NeuroImage 2007, 37, 1091–1096. [Google Scholar] [CrossRef] [PubMed]
  3. Gusnard, D.A.; Raichle, M.E. Searching for a baseline: Functional imaging and the resting human brain. Nat. Rev. Neurosci. 2001, 2, 685–694. [Google Scholar] [CrossRef] [PubMed]
  4. Morcom, A.M.; Fletcher, P.C. Does the brain have a baseline? Why we should be resisting a rest. NeuroImage 2007, 37, 1073–1082. [Google Scholar] [CrossRef] [PubMed]
  5. Shulman, R.G.; Hyder, F.; Rothman, D.L. Baseline brain energy supports the state of consciousness. Proc. Natl. Acad. Sci. USA 2009, 106, 11096–11101. [Google Scholar] [CrossRef] [PubMed]
  6. Nelson, D.L.; Cox, M.M. Lehninger Principles of Biochemistry, 6th ed.; Macmillan Learning Publishers: New York, NY, USA, 2012. [Google Scholar]
  7. Churchland, P.; Sejnowski, T. The Computational Brain; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  8. Mead, C. Analog VLSI and Neural Systems; Addison Wesley: Reading, MA, USA, 1989. [Google Scholar]
  9. Ingvar, D.H. “Memory of the future”: An essay on the temporal organization of conscious awareness. Hum. Neurobiol. 1985, 4, 127–136. [Google Scholar] [PubMed]
  10. Tulving, E. The Missing Link in Cognition: Origins of Self-Reflective Consciousness; Terrace, H.S., Ed.; Oxford University Press: New York, NY, USA, 2005. [Google Scholar]
  11. Gusnard, D.A.; Raichle, M.E. Intrinsic functional architecture in the anesthetized monkey brain. Nature 2007, 447, 83–86. [Google Scholar]
  12. Chinea, A. Is Cetacean Intelligence Special? New Perspectives on the Debate. Entropy 2017, 19, 543. [Google Scholar] [CrossRef]
  13. Chinea, A.; Korutcheva, E. Intelligence and Embodiment: A Statistical Mechanics Approach. Neural Netw. 2013, 40, 52–72. [Google Scholar] [CrossRef] [PubMed]
  14. Chandler, D. Introduction to Modern Statistical Mechanics; Oxford University Press: Oxford, UK, 1987. [Google Scholar]
  15. Reichl, L.E. A Modern Course in Statistical Physics; John Wiley & Sons: New York, NY, USA, 1998. [Google Scholar]
  16. Hecht-Nielsen, R. Confabulation Theory: The Mechanism of Thought; Springer: Heidelberg, Germany, 2007. [Google Scholar]
  17. Taylor, P.; Hobbs, J.N.; Burroni, J.; Siegelmann, H. The global landscape of cognition: Hierarchical aggregation as an organizational principle of human cortical networks and functions. Sci. Rep. 2015, 5, 18112. [Google Scholar] [CrossRef] [PubMed]
  18. Liu, Y.Y.; Slotine, J.J.; Barabasi, A.L. Controllability of complex networks. Nature 2011, 473, 167–173. [Google Scholar] [CrossRef] [PubMed]
  19. Bullmore, E.; Sporns, O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009, 10, 186–198. [Google Scholar] [CrossRef] [PubMed]
  20. Flajolet, P.; Sedgewick, R. Analytic Combinatorics; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  21. Harris, J.J.; Jolivet, R.; Engl, E.; Attwell, D. Energy-Efficient Information Transfer by Visual Pathway Synapses. Curr. Biol. 2015, 25, 3151–3160. [Google Scholar] [CrossRef] [PubMed]
  22. Laughlin, S.B.; de Ruyter van Steveninck, R.R.; Anderson, J.C. The metabolic cost of neural information. Nat. Neurosci. 1998, 1, 36–41. [Google Scholar] [CrossRef] [PubMed]
  23. Niven, J.E.; Laughlin, S.B. Energy limitation as a selective pressure on the evolution of sensory systems. J. Exp. Biol. 2008, 211, 1792–1804. [Google Scholar] [CrossRef] [PubMed]
  24. Striedter, G.F. Principles of Brain Evolution; Sinauer Associates, Inc.: Sunderland, MA, USA, 2005. [Google Scholar]
Figure 1. (a) The cognitive architecture (top part of the figure): example of the symbolic structure used to capture some of the information-processing characteristics of the brain’s cognitive functions. The most important aspects of the intelligence and embodiment hypothesis [12,13] are synthesized in a space of cognitive architecture complexity that is defined by the average number of information-processing levels and the average connectivity between symbols. The nodes of the structure represent symbols, where the symbols act as the embodiments of brain representations. The average number of information-processing levels is hypothesized to be a function of the density of neurons and the number of physical laminae (or nuclei) that are present in the cerebral structures postulated by the hypothesis as mainly responsible for intelligence in macroscopic moving animals. In this example, the structure comprises three information-processing levels (the depth of the structure), where the connectivity associated with each level is $n_1 = 2$, $n_2 = 3$, and $n_3 = 4$, respectively. The circular arrows represent the combinatorial operations (movement primitives) common to the brain’s cognitive functions, which are hypothesized to have been conserved through the evolutionary process. Each subtree of the structure is modeled in terms of multi-state variables. The connectivity between brain representations (i.e., the branching factor) determines the number of states of the multi-state variables. Similarly, the number of nodes belonging to each of the levels determines the number of subtrees (i.e., multi-state variables) belonging to each of the levels comprising the structure. (b) The energy model (bottom left and right part of the figure): the knowledge that computations performed by nervous systems are limited by power consumption is explicitly embedded within the energy model. Moving from one symbol permutation configuration to another has an associated energy cost that is proportional to the distance from the resting-state configuration represented by the fixed-point permutation. It is important to remember that in statistical physics the Boltzmann–Gibbs distribution tells us that physical systems prefer to visit low-energy states more than high-energy states. The tables show the mapping between symbol configurations and energy states for permutations of sizes two and three, respectively.
Figure 2. (a) Functional aspects associated with the concept of movement primitives. The figure shows a very simple structure of four nodes, where the upper-level symbol denoted as $a_1$ (the node depicted in yellow) is expressed as a combination of the lower-level symbols $b_1$, $b_2$, and $b_3$ (nodes depicted in red, cyan, and green, respectively). The nodes $b_1$, $b_2$, and $b_3$ are modulated by the higher-order symbol $a_1$. After one permutation operation carried out over the symbols $b_2$ and $b_3$, the symbol $a_2$ (the node depicted in violet) becomes activated. In other words, the node $a_1$ acts as a driver-like node [18] in complex-networks terminology. (b) How brain representations are structured according to the intelligence and embodiment hypothesis [12,13]. At the bottom of each symbolic structure, a functionally equivalent representation is shown using square brackets to indicate that the content of higher-order representations (e.g., a semantic representation) includes lower-order representations (e.g., sensorimotor representations). In this case, the driver-like nodes are those situated to the left of each opening square bracket.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
