Interpreting the High Energy Consumption of the Brain at Rest

Abstract: The notion that the brain has a resting-state mode of functioning has received increasing attention in recent years. The idea derives from experimental observations that showed a relatively spatially and temporally uniform high level of neuronal activity when no explicit task was being performed. Surprisingly, the total energy consumption supporting neuronal firing in this conscious awake baseline state is orders of magnitude larger than the energy changes during stimulation. This paper presents a novel and counterintuitive explanation of the high energy consumption of the brain at rest, obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally, serving both as the foundation from which intelligence can emerge and as the source of the efficiency of the brain's computations. The high energy consumption of the brain at rest is shown to be the result of the energetics associated with the most probable state of a statistical physics model aimed at capturing the behavior of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands.


Introduction
In the last few years, there has been a growing interest from the neuroscience community in understanding spontaneous brain activity and its relation to cognition and behavior [2]. The principal idea derives from functional brain imaging observations in humans that revealed a consistent network of brain regions showing high levels of activity when no explicit task was being performed and participants were asked simply to rest. This network of regions active in rest states was denoted the default network, and the activity within this network during rest states as default activity [3,10].
Furthermore, the claim about the importance of this brain "baseline" or "default" state was based on the fact that brain activity in this state (i.e., in the default network) is quantitatively different from the states evoked by goal-oriented activity [16]. This state is characterized by a relatively spatially and temporally uniform high level of neuronal activity; in other words, the default network is more active at rest than it is in a range of explicit tasks. However, the most striking result is the fact that the energy consumption supporting neuronal firing in the baseline state is one to two orders of magnitude larger than the energy changes during stimulation [20]. This fact is of particular interest taking into consideration that the computations performed by brains are limited by power consumption, and at the same time all the metabolic transformations that take place in the brain obey the laws of thermodynamics. The second law of thermodynamics, in particular, limits the thermodynamic efficiency with which information can be processed.
Having said this, it is important to emphasize that, from a thermodynamics point of view, neurons are isothermal systems that function essentially at constant temperature and pressure. In every metabolic process that takes place in neurons, there is a loss of useful energy (free energy) and an inevitable increase in the amount of unusable energy (heat and entropy). Neurons acquire energy from nutrient molecules and transform this free energy into ATP and other energy-rich compounds capable of providing energy for biological work at constant temperature (i.e., neural computations), since heat flow is not a source of energy for neurons (i.e., heat can do work only as it passes to a zone or object at a lower temperature) [17]. Neuronal computation is energetically expensive. Consequently, the brain's limited energy supply imposes constraints on its information-processing capability. Surprisingly, nature-evolved brains are impressively efficient (e.g., the human brain is about 7 to 8 orders of magnitude more power efficient than the best silicon chips) [8]. A direct consequence of their energy efficiency is that brains can perform a huge number of operations per second. For example, the brain of the common housefly performs about 10^11 operations per second when merely resting [15].
The interpretation of the greater activity of the default network during passive task states (i.e., brain activity not clearly associated with any particular stimulus or behavior) has long been a challenge for neuroscientists. For example, the high energy consumption of the brain at rest has been correlated with the state of consciousness [20], with neural resources dedicated to surviving future moments (i.e., consolidating the past, stabilizing brain ensembles, and preparing us for the future) [3,13], or simply taken as evidence of higher mental faculties that are unique to humans [23,24]. Nowadays, it is commonly believed that the baseline brain energy evidences a more fundamental or intrinsic property of functional brain organization that might help in understanding not only normal and pathological brain function but also differences between and across species from a comparative-anatomy point of view [3,24]; however, this fundamental brain principle is still unknown.
In this paper, the focus is placed on the analysis of the high energy consumption of the brain at rest from the perspective of the intelligence and embodiment hypothesis [6,7], in order to reveal further insights relevant to this intrinsic property of brain functional organization. The hypothesis postulates the existence of a common information-processing principle that is exploited in naturally evolved nervous systems and that serves as the founding pillar of intelligence. The efficiency of brain computations is explained in terms of the concept of logical reversibility, that is, the idea of adiabatic computation: nature-evolved brains are intensively memory-based structures, and memory is a necessary but not sufficient condition for the existence of logical reversibility. From an information-processing point of view, an operation is said to be logically reversible if its inputs can always be deduced from its outputs, in other words, if its inverse is unique. A mathematical model grounded in statistical mechanics [5,19] that is suited for making quantitative predictions about the cognitive capacities of species for comparative purposes is also provided.
The rest of the paper is organized as follows: In the next section, the principal ideas behind the intelligence and embodiment hypothesis are reviewed, together with the energy model associated with the statistical mechanics formulation of the hypothesis. The emphasis is put on two properties of the model. Firstly, the random structures associated with the cognitive architecture of the hypothesis model, with a certain level of abstraction, the patterns of functional connectivity observed between regions of the cerebral cortex and its homologues. Secondly, the observed energetic efficiency of nature-evolved brains is explicitly captured in the energy model of the hypothesis. Afterwards, taking these considerations into account, the properties of these random structures are studied using the connectivity between brain representations as a parameter. This analysis, in combination with basic probability inequalities, is devoted to the precise characterization of properties of these random structures that hold with high probability, such as their most probable state. Finally, a summary of the present study with some concluding remarks is presented.

Hypothesis Background
The intelligence and embodiment hypothesis [6,7] subscribes to the idea of cognition as an evolutionary adaptational redeployment of movement control [12]. This term is used to express, from an evolutionary perspective, the idea that the extensive neuronal machinery developed to control animal movement was expanded to control new brain structures instead of muscles. This information-processing principle, common to the nervous systems that evolved in nature, is postulated to be the basis for the emergence of intelligence. Furthermore, the hypothesis associates this information-processing principle with the cerebral cortex and its homologues (i.e., functionally similar cerebral structures in other species), in other words, the neurobiological structures that emerged as a result of the aforementioned evolutionary adaptational redeployment of movement control.
From a computational point of view, the hypothesis describes this information-processing principle in terms of combinatorial operations carried out over brain representations (Figures 1 and 2). Brain representations are assumed to be physically encoded by neuron assemblies (i.e., as states of attractor networks); in other words, a piece of information is represented as the simultaneous activity of a group of neurons [12]. This neural computational mechanism is denoted movement primitives to emphasize the fact that it is inherited from primitive animals as a result of the phylogenetic conservation principle. These movement primitives are postulated as the underlying substrate common to any cognitive function. They put forward the idea of an information-processing mechanism designed to combine sensorimotor representations into rule-conforming higher-order brain representations. Such higher-order, more complex representations are postulated to be the basis for context-independent representations in a hierarchically organized information-processing system.
A cognitive architecture is defined to capture the aforementioned principles (Figure 1), together with some aspects of the structural (e.g., physiologic) and functional (e.g., dynamic) characteristics of these cerebral structures, with a certain level of abstraction. The cognitive architecture consists of a hierarchical symbol-based architecture of varying size and complexity, where the symbols act as embodiments of brain representations. These hierarchical symbol-based structures account for certain neurobiological differences between species and, especially, model the fact that progressive functional abstraction over network depth is a fundamental feature of brains [22]; that is, sensorimotor representations are assumed to be encoded by leaf symbols, whereas deep knowledge representations are encoded by symbols belonging to levels close to the root of the tree. Furthermore, the interactions between symbols in the cognitive architecture are grounded in the concept of movement primitives. Specifically, they are modeled in terms of permutation operations between symbols (circular arrows in Figures 1 and 2).
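As an illustration, a permutation operation over the symbols of a subtree can be sketched in a few lines of Python; the function and variable names here are my own and not part of the hypothesis' formal apparatus:

```python
def apply_primitive(symbols, perm):
    """Apply a movement primitive, modeled as a permutation of 0-based
    positions, to a tuple of symbols."""
    return tuple(symbols[i] for i in perm)

# A subtree with branching factor 3 holding the symbols b1, b2, b3.
subtree = ("b1", "b2", "b3")

# Swapping b2 and b3 is one permutation operation (the circular arrows
# of Figures 1 and 2).
print(apply_primitive(subtree, (0, 2, 1)))  # ('b1', 'b3', 'b2')
```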
The hypothesis is accompanied by a mathematical model grounded in statistical mechanics that is used to analyze the information-processing capacities of the described architecture. The energy model takes into consideration that computations performed by nervous systems are limited by power consumption and the fact that nature-evolved brains are impressively efficient. Furthermore, the main innovative contribution of the model is the analysis of the cognitive architecture in terms of its free energy and entropy. The expectation of the entropy is used as an approximate measure of the concept of intelligence, whereas the expectation of the free energy is used to analyze the computational properties that emerge at a macroscopic scale as a result of the defined interactions between brain representations, providing, at the same time, a quantitative measure of the metabolic costs incurred by cognitive operations in the cerebral structures studied.
In summary, the hypothesis provides some answers to the efficiency of the brain's computations and to the quantitative discontinuities in the cognitive capacities across species, simultaneously explaining the similarities in the intelligence levels that are observed.

Figure 1. (a) Example of a symbolic structure used to capture some of the information-processing characteristics of the brain's cognitive functions. The most important aspects of the intelligence and embodiment hypothesis [6,7] are synthesized in a space of cognitive architecture complexity defined by the average number of information-processing levels and the average connectivity between symbols. The nodes of the structure represent symbols, where the symbols act as the embodiment of brain representations. The average number of information-processing levels is hypothesized to be a function of the density of neurons and the number of physical laminae (or nuclei) present in the cerebral structures postulated by the hypothesis as mainly responsible for intelligence in macroscopic moving animals. In this example, the structure comprises three information-processing levels (the depth of the structure), where the connectivity associated with each level is n_1 = 2, n_2 = 3, and n_3 = 4, respectively. The circular arrows represent the combinatorial operations (movement primitives) common to the brain's cognitive functions, which are hypothesized to have been conserved through the evolutionary process. Each subtree of the structure is modeled in terms of multi-state variables. The connectivity between brain representations (i.e., the branching factor) determines the number of states of the multi-state variables. Similarly, the number of nodes belonging to each level determines the number of subtrees (i.e., multi-state variables) belonging to each of the levels comprising the structure.
(b) The energy model (bottom-left and right parts of the figure): The knowledge that computations performed by nervous systems are limited by power consumption is explicitly embedded within the energy model. Moving from one symbol permutation configuration to another has an associated energy cost that is proportional to the distance from a resting-state configuration represented by the fixed-point permutation. It is important to remember that in statistical physics the Boltzmann-Gibbs distribution tells us that physical systems prefer to visit low-energy states more than high-energy states. The tables show the mapping between symbol configurations and energy states for permutations of size two and three, respectively.

Figure 2. (a) […] is expressed as a combination of the lower-level symbols b_1, b_2, and b_3 (nodes depicted in red, cyan, and green, respectively). The nodes b_1, b_2, and b_3 are modulated by the higher-order symbol a_1. After one permutation operation carried out over the symbols b_2 and b_3, the symbol a_2 (the node depicted in violet) becomes activated. In other words, the node a_1 acts as a driver-like node [1] in complex-networks terminology. (b) How brain representations are structured according to the intelligence and embodiment hypothesis [6,7]. At the bottom of each symbolic structure a functionally equivalent representation is shown using square brackets to indicate that the content of higher-order representations (e.g., a semantic representation) includes lower-order representations (e.g., sensorimotor representations). In this case, the driver-like nodes are those situated at the left of each opening square bracket.

The Energy Model
The structure of Figure 1a has three information-processing levels (L_1, L_2, and L_3). The first level is the root of the tree, which contains just one multi-state variable s_{1,1} that can be in two states, as indicated by the branching factor (i.e., the connectivity) n_1 = 2. The second level has two multi-state variables (s_{2,1} and s_{2,2}), each with n_2 = 3 states. Finally, the third level has six multi-state variables that can be found in four states (n_3 = 4). Each multi-state variable s_{i,j} is denoted by two indexes: the first, i, denotes the information-processing level, and the second, j, its location within the level.
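The level sizes quoted above follow directly from the branching factors: each level contains the product of the connectivities of the levels above it. A minimal sketch of this counting (the function name is my own):

```python
def variables_per_level(connectivities):
    """Number of multi-state variables (subtrees) at each level of the
    structure: the root level has one, and each subsequent level has
    the product of the connectivities of all levels above it."""
    counts, total = [], 1
    for n in connectivities:
        counts.append(total)
        total *= n
    return counts

# Figure 1a: n1 = 2, n2 = 3, n3 = 4 gives 1, 2, and 6 variables per level.
print(variables_per_level([2, 3, 4]))  # [1, 2, 6]
```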
Each state is represented by an integer value interpreted as an energetic or metabolic cost: the higher the energy of the state, the higher its value. The resting-state configuration has an energy value equal to 1 and is represented by the fixed-point permutation, that is, the permutation in which the sub-index of every symbol is in concordance with its relative position (e.g., b_1b_2 or c_1c_2c_3). Furthermore, the energy states of the multi-state variables are degenerate, since the number of symbol configurations is larger than the cardinality of the set of states.
The mechanism of assignment of symbol configurations (remember that symbols act as the embodiments of brain representations) is based on the idea that moving the system away from its resting state has an associated energy cost (see the rightmost part of Figure 1). The cost is defined in terms of the distance between the local configuration of the symbols and the corresponding resting state represented by the fixed-point permutation. More specifically, the distance function counts the number of discrepancies of symbol positions with respect to the fixed-point permutation. For example, for the configuration c_1c_3c_2 there are two discrepancies (symbol c_3 is in position 2, and symbol c_2 in position 3), and the corresponding energetic cost is 2. Similarly, there are also two discrepancies for the configuration c_2c_1c_3, while there are three for c_3c_1c_2, whose energetic cost is 3 (Figure 1b).
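The discrepancy-counting distance can be made concrete in a short sketch (the function name is mine); the resting state is mapped to the value 1 as stated above, and the other values reproduce the worked examples:

```python
def energy_cost(config, resting):
    """Energetic cost of a symbol configuration: the number of positions
    at which it differs from the resting (fixed-point) configuration,
    with the resting state itself assigned the value 1."""
    displaced = sum(1 for a, b in zip(config, resting) if a != b)
    return displaced if displaced > 0 else 1

rest = ("c1", "c2", "c3")
print(energy_cost(("c1", "c3", "c2"), rest))  # 2 discrepancies -> cost 2
print(energy_cost(("c2", "c1", "c3"), rest))  # 2 discrepancies -> cost 2
print(energy_cost(("c3", "c1", "c2"), rest))  # 3 discrepancies -> cost 3
print(energy_cost(rest, rest))                # resting state   -> cost 1
```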
Equation (1) expresses the degeneracy associated with each energy state, that is, the number of symbol configurations leading to the same energetic cost for a subtree with n nodes (connectivity or branching factor). Specifically, the coefficient A_n^k accounts for the number of symbol configurations leading to the energy state k for a subtree of the structure with connectivity equal to n, and the equality (2) holds for the coefficients A_n^k. Bear in mind that in statistical mechanics every microscopic configuration C (i.e., every symbol configuration in this case) is assigned a probability p(C), which depends on its energy H(C) and is given by the Boltzmann-Gibbs distribution. The partition function is by definition the normalization factor of the distribution and can be interpreted as a generating function of energies. The energy function (or Hamiltonian) H(C) is defined in terms of the particular arrangement of symbols over the entire structure, more specifically, as a function of the states of the multi-state variables belonging to each of the levels comprising the structure. Furthermore, the energy model is additive, and it encodes the knowledge that neural computations are limited by power consumption. Indeed, under the statistical mechanics formulation, the Boltzmann-Gibbs distribution tells us that the physical system prefers to visit low-energy states more than high-energy states.
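Since Equations (1)-(3) did not survive extraction, the sketch below reconstructs the counting they describe by brute force; this is a plausible reading of the text (energy = number of displaced symbols, resting state mapped to 1), not the paper's exact formulas. It enumerates the degeneracies A_n^k and forms the Boltzmann-Gibbs probabilities from them:

```python
from collections import Counter
from itertools import permutations
from math import exp, factorial

def degeneracies(n):
    """A_n^k: number of configurations of a size-n subtree with energy k,
    where the energy is the number of displaced symbols and the resting
    (fixed-point) configuration is assigned the value 1."""
    rest = tuple(range(n))
    counts = Counter()
    for p in permutations(rest):
        d = sum(1 for a, b in zip(p, rest) if a != b)
        counts[d if d > 0 else 1] += 1
    return dict(counts)

A3 = degeneracies(3)
print(A3)                                # {1: 1, 2: 3, 3: 2}
print(sum(A3.values()) == factorial(3))  # True: the A_n^k sum to n!

def boltzmann(degen, beta=1.0):
    """Boltzmann-Gibbs probabilities p(k) proportional to A_n^k * exp(-beta*k);
    Z, the normalization factor, is the partition function."""
    weights = {k: a * exp(-beta * k) for k, a in degen.items()}
    Z = sum(weights.values())
    return {k: w / Z for k, w in weights.items()}

# Once the Boltzmann weights are applied, low-energy states are favored.
print(boltzmann(A3))
```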
Finally, it is important to emphasize that the energy model is grounded on the concept of movement primitives, that is, the underlying substrate common to any cognitive function according to the hypothesis. In other words, cognitive processes are learnt, stored, and recalled just like movement processes.

Results
The patterns of functional connectivity experimentally observed between cortical regions undergo spontaneous fluctuations and are highly responsive to perturbations (i.e., sensory input or cognitive task) on a time scale of hundreds of milliseconds. However, these rapid reconfigurations do not affect the stability of global topological characteristics. In other words, on longer time scales of seconds to minutes (i.e., in equilibrium from a statistical mechanics point of view), correlations between spontaneous fluctuations in brain activity form functional networks that are particularly robust [4].
The random structures associated with the cognitive architecture of the hypothesis model, with a certain level of abstraction, the patterns of functional connectivity observed between regions of the cerebral cortex and its homologues. However, it is important to emphasize that the theoretical model associated with the hypothesis [6,7] belongs to the field of equilibrium statistical mechanics. Specifically, instead of studying the exact evolution in time of the symbolic structures associated with the cognitive architecture, average quantities were used, since the focus was put on the typical behavior of the system (e.g., the average information content of the symbolic structures) or on small fluctuations around this typical behavior.
Having said this, it is important to note that the baseline is defined according to quantitative measurements of brain circulation and metabolism. More specifically, the physiological baseline of the brain is defined as the absence of activation, for example, lying quietly in a scanner with eyes closed, although awake [10]. In contrast, activation is defined in the context of functional imaging as an increase in blood flow that is not accompanied by a commensurate increase in oxygen consumption. As a result, less oxygen is extracted from the blood, leading to an increase in the local blood oxygen content in the brain. In other words, activation corresponds to an engaged period of mental concentration, for example, solving arithmetical problems.
Therefore, it is plausible to assume that the baseline state represents an equilibrium state (or a state close to equilibrium) from a statistical mechanics point of view. Taking these considerations into account, in the following the emphasis is put on the analysis of the properties that are present with high regularity in the random structures of the model from an equilibrium point of view. The subtrees modeled as multi-state variables represent the fundamental building block of the random structures presented before; thus, the aforementioned analysis is simply a refined counting problem with two parameters, namely, the number of energy states associated with the subtree and an additional characteristic represented in this case by the connectivity between brain representations (i.e., the number of children of the root node of the subtree).

Most Probable State Analysis
Subtrees are the fundamental building block of the random structures associated with the cognitive architecture of the hypothesis. Taking into consideration that Equation (1) expresses the degeneracy associated with each energy state, that is, the number of symbol configurations leading to the same energetic cost for a subtree with n nodes (i.e., the connectivity between brain representations, or branching factor), the following bivariate enumeration (see Appendix A for the formal definition) can be interpreted as a generating function of the energies associated with any subtree of the random structures of the cognitive architecture. One of the principal goals of multivariate enumeration [9] is the quantification of properties present with high regularity in large random structures. More specifically, the complex variable z is attached to n, representing the connectivity between brain representations (remember that symbols act as the embodiments of brain representations), whereas the complex variable u is attached to the variable k, representing the energetic (or metabolic) cost of the state. Moreover, the bivariate generating function (4) can be interpreted in terms of a probabilistic model (see the formal definition in Appendix A). Specifically, considering a combinatorial class A, if the probabilistic model is based on the uniform distribution, then the uniform probability distribution over the counting sequence A_n assigns to any α ∈ A_n a probability equal to 1/A_n and is denoted P_{A_n} (i.e., probability relative to the uniform distribution over A_n).
The parameter χ determines over each A_n a discrete random variable defined over the discrete probability space A_n. Taking into consideration the definition of the probability generating function, one has immediately (see also Appendix A for the notation and definitions): The connectivity between brain representations may be assumed to be uniformly distributed; however, it is important to emphasize that, independently of the probabilistic model used, the bivariate generating function (4) permits us to calculate the factorial moments of the probability distribution of the model. In particular, the first- and second-order moments read, respectively (see the formal definition in Appendix A): Using expressions (7) and (8), the first- and second-order moments finally read (see Appendix A for the entire derivation): From the translation into the language of probability of the bivariate generating function (4), it can be deduced that the first moment of the distribution provides information about the most probable state in which a subtree of size n can be found under equilibrium (or close-to-equilibrium) conditions according to the energy model of the hypothesis.
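Since the moment formulas (5)-(9) were lost in extraction, the following brute-force check (my own sketch, not the paper's derivation) computes the exact mean and variance of the subtree energy under the uniform distribution over configurations; the mean grows like n - 1 while the variance approaches 1, in line with the claims that follow:

```python
from itertools import permutations

def exact_moments(n):
    """Exact mean and variance of the energy of a size-n subtree under
    the uniform distribution over its n! configurations (brute force)."""
    rest = tuple(range(n))
    energies = []
    for p in permutations(rest):
        d = sum(1 for a, b in zip(p, rest) if a != b)
        energies.append(d if d > 0 else 1)
    m = len(energies)
    mean = sum(energies) / m
    var = sum((e - mean) ** 2 for e in energies) / m
    return mean, var

for n in (3, 5, 7):
    mean, var = exact_moments(n)
    print(n, round(mean, 4), round(var, 4))
# The mean is close to n - 1 (a high-energy state on a scale whose
# maximum is n), and the variance approaches 1 as n grows.
```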
Thus, taking into consideration that the degeneracy of the states is in direct correspondence with their associated energetic cost, the most probable state of a subtree in equilibrium conditions is a state of high energy, in other words, a metabolically demanding state according to the model.
The calculation of the most probable state for the Cayley-like structures of the cognitive architecture of the hypothesis is mathematically intractable; however, as a result of their hierarchical organization, it is plausible to assume some sort of conditional independence between the probability distributions associated with the subtrees comprising the structure (i.e., they can be factorized), so as to state that in equilibrium conditions the most probable state of these structures is also a high-energy state.
Finally, from the expressions for the factorial moments of order one and two, the variance of the distribution reads: which means that as the connectivity between brain representations increases (i.e., as n increases) the dispersion of the distribution around its average value rapidly tends to 1. Specifically, the distribution is concentrated around its average value, which means that in equilibrium conditions the probability mass is concentrated around the highest-energy states. Indeed, denoting µ_n = E_{A_n}(χ) and σ_n = V_{A_n}(χ), the proof of the concentration of the distribution is straightforward, calculating the following limit (see Definition A7 in Appendix A):
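The concentration claim can also be checked numerically with a quick Monte Carlo sketch (function names and sample sizes are my own): the ratio σ_n/µ_n shrinks roughly like 1/(n - 1), so the probability mass piles up around the high-energy mean as connectivity grows.

```python
import random

def dispersion_ratio(n, trials=20000, seed=0):
    """Monte Carlo estimate of sigma_n / mu_n for the energy of a
    size-n subtree under uniformly random configurations."""
    rng = random.Random(seed)
    energies = []
    for _ in range(trials):
        p = list(range(n))
        rng.shuffle(p)
        d = sum(1 for i, v in enumerate(p) if i != v)
        energies.append(d if d > 0 else 1)
    mean = sum(energies) / trials
    var = sum((e - mean) ** 2 for e in energies) / trials
    return (var ** 0.5) / mean

for n in (5, 20, 100):
    print(n, round(dispersion_ratio(n), 4))
# The ratio falls as the connectivity n grows: the distribution
# concentrates around its (high-energy) average value.
```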

Discussion
The ability to utilize energy and to channel it into biological work is a fundamental property of all living organisms, one that must have been acquired very early in cellular evolution [17]. In particular, neurons acquire free energy from nutrient molecules and transform it into ATP and other energy-rich compounds capable of providing energy for biological work. From a thermodynamics point of view, neurons are isothermal systems that function essentially at constant temperature and pressure. In every metabolic process that takes place in neurons, there is a loss of useful energy (free energy) and an inevitable increase in the amount of unusable energy (heat and entropy). Neuronal computation is energetically expensive, and the brain's limited energy supply imposes constraints on its information-processing capability; however, nature-evolved brains are impressively energy efficient. Thus, neurons, neural codes, and neural circuits have evolved to reduce metabolic demands, as is currently commonly agreed [11,14,18].
The energy model of the hypothesis was aimed at capturing the energy-efficiency considerations that appear to have shaped nature-evolved brains while, at the same time, being grounded on the concept of movement primitives [7]. In particular, it embodies the idea of an information-processing mechanism designed to combine discrete meaningful units of information (i.e., symbols in the cognitive architecture) into rule-conforming higher-order structures in a hierarchically organized information-processing system (Figure 1).
Surprisingly, the present study has shown that under equilibrium (or near-equilibrium) conditions a physical system built according to these principles is found in a state of high energy consumption, as has been empirically reported for real brains. This apparently counterintuitive result can be explained if one takes into consideration that, from a behavioral perspective, the conditions elicited by resting and/or passive states represent states of lower probability with respect to the universe of discourse of states of the vertebrate brain. Perhaps this is the reason why the measured energy consumption when the brain is performing a cognitive task (i.e., when the system is out of equilibrium) is just a fraction of the energy consumption at rest: most of the time the brain is engaged in a wide range of active tasks, and its efficiency has been evolutionarily optimized to do so.
Summarizing, according to the present study, the structured activity patterns that exist during passive task states evidence a fundamental or intrinsic property of functional brain organization that is the result of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands. Furthermore, this fundamental property, which appears to be an evolutionarily conserved aspect of brain functional organization that transcends levels of consciousness [24], would increase the likelihood that movement primitives are the underlying substrate of such functional organization. It is important to remember that, according to the phylogenetic conservation principle [21], many aspects of brain structure and function are conserved across species, and the degree of conservation is highest at the lowest levels of organization.

Implications for Neuroscience Research
Brain representations are physically encoded by neuron assemblies (i.e., as states of attractor networks); that is, a piece of information is represented as the simultaneous activity of a group of neurons, and their physical state transitions execute computations [6]. Furthermore, brain representations present a hierarchical organization and are distributed throughout the entire cerebral cortex (and its homologues in other vertebrate species). The key point, however, is that, as a result of the constraints imposed by the structural connectivity on the functional interactions, the topological parameters are generally conserved between structural and functional networks [4]. Thus, from the point of view of the hypothesis, a higher connectivity between brain representations is assumed to be accompanied not only by a higher structural connectivity but also by higher energetic requirements.
Moreover, higher-order representations include perception and action representations (i.e., lower-order representations); that is, sensory and motor systems are directly modulated by higher-order processing. For example, the prefrontal cortex is an area of utmost importance for human evolution because it mediates complex behaviours such as planning, working memory, memory for serial order and temporal information, aspects of language, and social information processing, making it a prominent region for encoding higher-order brain representations since it is a principal actor behind human intelligence. In contrast, cortical regions such as the primary auditory cortex, the primary visual cortex, or the primary motor cortex are assumed to encode lower-order representations.
Alzheimer's disease is characterised by the loss of neurons and synapses in the cerebral cortex and certain subcortical regions, resulting in gross atrophy of the affected regions, including degeneration in the temporal and parietal lobes and in parts of the frontal cortex and cingulate gyrus, among others. Thus, taking into consideration that memory loss is the predominant symptom, it is plausible to hypothesize a reduced connectivity in the default network (or even a disruption of the connectivity between regions of this network, depending on the stage of the disease), especially in those regions that are activated by episodic memory retrieval tasks (e.g., the posterior cingulate cortex).
In summary, rest activity patterns may be central not only for understanding brain function but also for understanding individual differences in functional anatomy, exploring functional homologies across species, and better comprehending diseases of the mind.

Conclusions
Theoretical approaches to understanding the dynamics of neural activity during the resting state are believed to be of vital importance for understanding the basic neurophysiological mechanisms that operate at baseline. In this paper, an interpretation of the high energy consumption of the brain at rest has been provided using the intelligence and embodiment hypothesis as the founding pillar. The structured activity patterns that exist during passive task states are shown to evidence a fundamental property of functional brain organization grounded on the concept of movement primitives, while also being the result of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands.
In summary, the most important implication of the present study is to put forward the importance of the information processing associated with rest-state activity in ways not conventionally considered, thus paving the way for future research at other levels of neuroscience, such as understanding individual differences in functional anatomy, cognition, or the diseases of the mind.

Appendix A. Definitions and Notations
The goal of this appendix is to provide the definitions of the mathematical concepts and notations used throughout this paper, together with the derivations of the most important mathematical expressions.

Definition A1. (Combinatorial class). A combinatorial class is a finite or denumerable set on which a size function is defined, satisfying the following conditions: (1) the size of an element is a non-negative integer; (2) the number of elements of any given size is finite. If $\mathcal{A}$ is a class, the size of an element $\alpha \in \mathcal{A}$ is denoted by $|\alpha|$.
Definition A2. (Counting sequence). The counting sequence of a combinatorial class $\mathcal{A}$ is the sequence of integers $(A_n)_{n \ge 0}$, where $A_n$ is the number of objects in class $\mathcal{A}$ that have size $n$.
Definition A3. (Generating function). The generating function of a sequence $A_n$ is the formal power series
$$A(z) = \sum_{n \ge 0} A_n \frac{z^n}{w_n}, \tag{A1}$$
where $w_n = 1$ for the ordinary case and $w_n = n!$ for the exponential case.

Definition A5. (Bivariate generating function). Given a combinatorial class $\mathcal{A}$, a (scalar) parameter is a function $\chi$ from $\mathcal{A}$ to $\mathbb{Z}_{\ge 0}$ that associates to any object $\alpha \in \mathcal{A}$ an integer value $\chi(\alpha)$. The sequence
$$A_{n,k} = \operatorname{card}\left\{ \alpha \in \mathcal{A} : |\alpha| = n, \; \chi(\alpha) = k \right\} \tag{A2}$$
is called the counting sequence of the pair $(\mathcal{A}, \chi)$. The bivariate generating function of $(\mathcal{A}, \chi)$ is defined as
$$A(z, u) = \sum_{n, k \ge 0} A_{n,k} \frac{z^n}{\omega_n} u^k, \tag{A3}$$
and is ordinary if $\omega_n = 1$ and exponential if $\omega_n = n!$. One says that the variable $z$ marks size and the variable $u$ marks the parameter $\chi$.
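As a concrete illustration of Definitions A3 and A5 (an example of ours, not taken from the paper), consider the combinatorial class of binary words, with size the length of the word and the parameter $\chi$ counting the number of ones; then $A_{n,k} = \binom{n}{k}$ and the ordinary BGF is $A(z,u) = 1/(1 - z(1+u))$. A minimal Python sketch (the helper name `bgf_coeffs` is our own) verifying the coefficients:

```python
from math import comb

def bgf_coeffs(n_max):
    """Expand A(z, u) = 1/(1 - z*(1 + u)) for binary words, where z marks
    length and u marks the number of ones.  Since [z^n] A(z, u) = (1 + u)^n,
    the u^k coefficient is C(n, k); returns A[n][k]."""
    return [[comb(n, k) for k in range(n + 1)] for n in range(n_max + 1)]

coeffs = bgf_coeffs(4)
# Summing each row over k recovers the counting sequence A_n = 2^n
print([sum(row) for row in coeffs])  # [1, 2, 4, 8, 16]
```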
Definition A6. (PGFs from BGFs). Let $A(z, u)$ be the bivariate generating function of a parameter $\chi$ defined over a combinatorial class $\mathcal{A}$. The probability generating function of $\chi$ over $\mathcal{A}_n$ is given by
$$p_n(u) = \mathbb{E}_{\mathcal{A}_n}\!\left[u^{\chi}\right] = \frac{[z^n]\, A(z, u)}{[z^n]\, A(z, 1)}, \tag{A4}$$
and is thus a normalized version of a horizontal generating function.
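Continuing the hypothetical binary-word example (ours, not the paper's), Definition A6 gives $p_n(u) = (1+u)^n / 2^n$, since $[z^n]A(z,u) = (1+u)^n$ and $[z^n]A(z,1) = 2^n$. A short sketch:

```python
from math import comb

def pgf(n, u):
    """Probability generating function of the number of ones over uniform
    binary words of length n:
    p_n(u) = [z^n]A(z,u) / [z^n]A(z,1) = (1 + u)^n / 2^n."""
    horizontal = sum(comb(n, k) * u**k for k in range(n + 1))  # (1 + u)^n
    return horizontal / 2**n

print(pgf(3, 1.0))  # p_n(1) = 1: the normalization check
print(pgf(4, 0.0))  # p_n(0) = P(chi = 0) = 1/16 = 0.0625
```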
Definition A7. (Moments from BGFs). The factorial moment of order $r$ of a parameter $\chi$ is determined from the BGF $A(z, u)$ by $r$-fold differentiation followed by evaluation at $u = 1$:
$$\mathbb{E}_{\mathcal{A}_n}\!\left[\chi(\chi - 1)\cdots(\chi - r + 1)\right] = \frac{[z^n]\, \partial_u^r A(z, u)\big|_{u=1}}{[z^n]\, A(z, 1)}. \tag{A5}$$

Definition A8. (Concentration of distribution). Consider a family of random variables $\chi_n$; typically, a scalar parameter $\chi$ on the subclass $\mathcal{A}_n$. Assume that the means $\mu_n = \mathbb{E}(\chi_n)$ and the standard deviations $\sigma_n = \sigma(\chi_n)$ satisfy the condition
$$\lim_{n \to +\infty} \frac{\sigma_n}{\mu_n} = 0. \tag{A6}$$
Then, as a direct consequence of Chebyshev's inequality, the distribution of $\chi_n$ is concentrated in the sense that, for any $\epsilon > 0$,
$$\lim_{n \to +\infty} \mathbb{P}\!\left(1 - \epsilon \le \frac{\chi_n}{\mu_n} \le 1 + \epsilon\right) = 1. \tag{A7}$$
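For the same hypothetical binary-word example, the first two factorial moments give $\mu_n = n/2$ and $\sigma_n^2 = n/4$, so $\sigma_n/\mu_n = 1/\sqrt{n} \to 0$ and the number of ones is concentrated in the sense of Definition A8. A minimal numerical check (the helper `moments` is our own):

```python
from math import comb, sqrt

def moments(n):
    """Mean and standard deviation of the number of ones in a uniform
    binary word of length n, via the factorial moments of Definition A7
    (r = 1, 2) applied to the distribution p_k = C(n, k) / 2^n."""
    probs = [comb(n, k) / 2**n for k in range(n + 1)]
    m1 = sum(k * p for k, p in enumerate(probs))            # E[chi]
    m2 = sum(k * (k - 1) * p for k, p in enumerate(probs))  # E[chi(chi - 1)]
    var = m2 + m1 - m1**2                                   # variance
    return m1, sqrt(var)

# sigma_n / mu_n = 1/sqrt(n) -> 0: the distribution is concentrated
for n in (10, 100, 1000):
    mu, sigma = moments(n)
    print(n, sigma / mu)
```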