Integrated Information in the Spiking–Bursting Stochastic Model

Integrated information has recently been suggested as a possible measure to identify a necessary condition for a system to display conscious features. We have previously shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (a discrete-time, discrete-state stochastic model) reflecting the main features of the "spiking–bursting" dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical "whole minus sum" version of integrated information in comparison to the "decoder based" version. The "whole minus sum" information may change sign, and an interpretation of this transition in terms of "net synergy" is available in the literature. This motivated our particular interest in the sign of the "whole minus sum" information in our analytical considerations. The behaviors of the "whole minus sum" and "decoder based" information measures are found to bear a lot of similarity: they show mutual asymptotic convergence as time-uncorrelated activity increases, and the sign transition of the "whole minus sum" information is associated with a rapid growth of the "decoder based" information. The study aims to create a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.


Introduction
Integrated information (II) [1][2][3][4] is a measure of internal information exchange in complex systems. It has recently attracted considerable interest because it was initially proposed to quantify the ability of a system to display conscious features.

Definition of II Measures in Use
The empirical "whole minus sum" version of II is formulated according to [13] as follows. Consider a stationary stochastic process ξ(t) (a binary vector process) whose instantaneous state is described by N binary digits (bits), each identified with a node of the network (a neuron). The full set of N nodes (the "system") can be split into two non-overlapping, non-empty subsets (the "subsystems") A and B; such a splitting is referred to as a bipartition AB. Denote by x = ξ(t) and y = ξ(t + τ) two states of the process separated by a specified time interval τ ≠ 0. The states of the subsystems are denoted x_A, x_B, y_A, y_B.
Mutual information between x and y is defined as

I_xy = H_x + H_y − H_xy, (1)

where

H_x = −Σ_x p(x) log2 p(x) (2)

is entropy (the base-2 logarithm gives the result in bits); summation is hereinafter assumed to be taken over the whole range of the index variable (here x); H_y = H_x due to the assumed stationarity. Next, a bipartition AB is considered, and "effective information" Φ_eff as a function of the particular bipartition is defined as

Φ_eff(AB) = I_xy − I_{x_A y_A} − I_{x_B y_B}. (3)

Finally, "whole minus sum" II, denoted Φ, is defined as effective information calculated for a specific bipartition AB_MIB (the "minimum information bipartition") which minimizes the specifically normalized effective information:

Φ = Φ_eff(AB_MIB), AB_MIB = arg min_AB [ Φ_eff(AB) / min{H_{x_A}, H_{x_B}} ]. (4)

Note that this definition prohibits positive II whenever Φ_eff turns out to be zero or negative for at least one bipartition AB.
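For concreteness, these definitions can be evaluated by brute force on any explicitly given two-time joint probability table. The following sketch (plain Python; states are bit tuples, and the normalization of Φ_eff by min{H_{x_A}, H_{x_B}} is the convention used in the empirical formulation) enumerates all bipartitions and returns Φ_eff at the minimum information bipartition:

```python
import itertools
import math

def entropy(p):
    # Shannon entropy in bits of a probability table: dict state -> probability
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginals(pxy):
    # One-time tables for x and y from the two-time joint table
    px, py = {}, {}
    for (x, y), v in pxy.items():
        px[x] = px.get(x, 0.0) + v
        py[y] = py.get(y, 0.0) + v
    return px, py

def mutual_information(pxy):
    # I_xy = H_x + H_y - H_xy, Equation (1)
    px, py = marginals(pxy)
    return entropy(px) + entropy(py) - entropy(pxy)

def restrict(pxy, idx):
    # Joint table of the subsystem states (bits at positions idx) at both times
    out = {}
    for (x, y), v in pxy.items():
        k = (tuple(x[i] for i in idx), tuple(y[i] for i in idx))
        out[k] = out.get(k, 0.0) + v
    return out

def whole_minus_sum(pxy, n):
    # Phi: effective information (3) at the minimum information bipartition (4)
    full = mutual_information(pxy)
    best_norm, best_phi = None, None
    for r in range(1, n // 2 + 1):
        for A in itertools.combinations(range(n), r):
            B = tuple(i for i in range(n) if i not in A)
            jA, jB = restrict(pxy, A), restrict(pxy, B)
            phi = full - mutual_information(jA) - mutual_information(jB)
            k = min(entropy(marginals(jA)[0]), entropy(marginals(jB)[0]))
            if k <= 0:          # degenerate subsystem: skip
                continue
            if best_norm is None or phi / k < best_norm:
                best_norm, best_phi = phi / k, phi
    return best_phi

# Two deterministic 2-bit examples: y = x ("copy") and y = swapped x
copy_ = {((a, b), (a, b)): 0.25 for a in (0, 1) for b in (0, 1)}
swap_ = {((a, b), (b, a)): 0.25 for a in (0, 1) for b in (0, 1)}
print(whole_minus_sum(copy_, 2))   # parts carry all the information: Phi = 0
print(whole_minus_sum(swap_, 2))   # cross-coupling: Phi = 2 bits
```

For the "copy" process each part fully predicts itself, so the whole adds nothing (Φ = 0); for the "swap" process no part predicts itself at all, and the whole carries 2 bits of cross information.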
We compare the result of the "whole minus sum" effective information (3) to the "decoder based" information measure Φ*, which is modified from its original formulation in [16] by setting the logarithm base to 2 for consistency:

Φ* = I_xy − I*_xy, (5a)

where I*_xy is the maximal information attainable under mismatched decoding with the partition-factorized model q(y|x) = p(y_A|x_A) p(y_B|x_B),

I*_xy = max_β Ĩ(β), (5b)

and

Ĩ(β) = Σ_{x,y} p(x, y) log2 [ q(y|x)^β / Σ_{x'} p(x') q(y|x')^β ]. (5c)

Spiking-Bursting Stochastic Model
Physiologically, spikes are short (about 1 millisecond in duration) pulses of voltage (action potential) across the neuronal membrane. Bursts are rapid sequences of spikes. The main feature of the neuron-astrocyte network model in [19] is the presence of network-wide coordinated bursts, when all neurons are rapidly spiking in the same time window. Such bursts are coordinated by the astrocytic network and occur on the background of weakly correlated spiking activity of individual neurons. The spiking-bursting model was suggested in [19] as the simplest mathematical description of this behavior. In this model, time is discretized into small bins, and neurons are represented by binary digits taking on values 0 or 1, denoting the absence or the presence of at least one spike within the specific time bin. Respectively, a network-wide burst is represented by a time interval during which all neurons are locked at value 1 (which corresponds to a train of rapid spiking in the underlying biological system). The idea behind the model is illustrated by the graphical representation of its typical time evolution, as shown in Figure 1. The graphs of the model dynamics can be seen as envelopes of respective time recordings of membrane voltage in actual neurons: each short rectangular pulse of the model is assumed to correspond to at least one narrow spike of voltage, and a prolonged pulse (several discrete time bins in duration) represents a spike train (burst). Mathematically, this "spiking-bursting" model is a stochastic model, which produces a binary vector valued, discrete-time stochastic process. 
In keeping with [19], the model is defined as a combination M = {V, S} of a time-correlated dichotomous component V which turns on and off system-wide bursting (that mimics global bursting of a neuronal network, when each neuron produces a train of pulses at a high rate [19]), and a time-uncorrelated component S describing spontaneous (spiking) activity (corresponding to a background random activity in a neural network characterized by relatively sparse random appearance of neuronal pulses-spikes [19]) occurring in the absence of a burst. The model mimics the spiking-bursting type of activity which occurs in a neuro-astrocytic network, where the neural subsystem normally exhibits time-uncorrelated patterns of spiking activity, and all neurons are under the common influence of the astrocytic subsystem, which is modeled by the dichotomous component V and sporadically induces simultaneous bursting in all neurons. A similar network architecture with a "master node" spreading its influence on subordinated nodes was considered, for example, in [1] (Figure 4b therein).
The model is defined as follows. At each instance of (discrete) time the state of the dichotomous component can be either "bursting" with probability p_b, or "spontaneous" (or "spiking") with probability p_s = 1 − p_b. While in the bursting mode, the instantaneous state of the resulting process x = ξ(t) is given by all ones: x = 11..1 (further abbreviated as x = 1). In the spiking mode, the state x is a (time-uncorrelated) random variate described by a discrete probability distribution s_x (where an occurrence of "1" in any bit is referred to as a "spike"), so that the resulting one-time state probabilities read

p(x = 1) = p_b + p_s s_1, (6a)
p(x) = p_s s_x for x ≠ 1, (6b)

where s_1 is the probability of the spontaneous occurrence of x = 1 (hereafter referred to as a system-wide simultaneous spike) in the absence of a burst. (In a real network, "simultaneous" implies occurring within the same time discretization bin [19].) To describe two-time joint probabilities for x = ξ(t) and y = ξ(t + τ), consider a joint state xy which is a concatenation of the bits in x and y. The spontaneous activity is assumed to be uncorrelated in time, which leads to the factorization

s_xy = s_x s_y. (7)
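A minimal simulation of this definition is sketched below. The two-state Markov chain driving the dichotomous component (with transition probabilities chosen to reproduce a given stationary burst probability p_b and lag-one correlation ρ) is our assumption for illustration, since the model itself only prescribes the one- and two-time probabilities:

```python
import random

def simulate(n_bits, p_b, rho, p_spike, steps, seed=1):
    # Dichotomous component V as a two-state Markov chain with stationary
    # burst probability p_b and lag-one Pearson correlation rho:
    #   P(burst -> burst) = p_b + rho*(1 - p_b),  P(spike -> burst) = p_b*(1 - rho)
    rng = random.Random(seed)
    burst = rng.random() < p_b            # start from the stationary distribution
    states = []
    for _ in range(steps):
        p_to_burst = p_b + rho * (1 - p_b) if burst else p_b * (1 - rho)
        burst = rng.random() < p_to_burst
        if burst:
            x = (1,) * n_bits             # system-wide burst: all ones
        else:
            x = tuple(int(rng.random() < p_spike) for _ in range(n_bits))
        states.append(x)
    return states

# Check the one-time probability p(x = 1) = p_b + p_s * s_1:
n, p_b, rho, P = 3, 0.3, 0.5, 0.5
xs = simulate(n, p_b, rho, P, steps=200_000)
s1 = P ** n                               # independent spiking across bits
freq_all_ones = sum(x == (1,) * n for x in xs) / len(xs)
print(freq_all_ones)                      # close to 0.3 + 0.7 * 0.125 = 0.3875
```

The empirical frequency of the all-ones state approaches p_b + p_s s_1 with s_1 = P^N, in line with the one-time probabilities of the model.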
The time correlations of the dichotomous component are described by a 2 × 2 matrix

(p_qr), q, r ∈ {s, b}: ( p_ss p_sb ; p_bs p_bb ), (8)

whose components are the joint probabilities to observe the respective spiking (index "s") and/or bursting (index "b") states in x and y. (In a neural network these correlations are conditioned by burst duration [19]; e.g., if this (in general, random) duration mostly exceeds τ, then the correlation is positive.) The probabilities obey p_sb = p_bs (due to stationarity), p_b = p_bb + p_sb, p_s = p_ss + p_sb, thereby allowing one to express all one- and two-time probabilities describing the dichotomous component in terms of two independent quantities, which, for example, can be a pair {p_s, p_ss}; then

p_sb = p_bs = p_s − p_ss, (9a)
p_bb = p_b − p_sb = 1 − 2p_s + p_ss, (9b)

or {p_b, ρ} as in [19], where ρ is the Pearson correlation coefficient defined by

ρ = (p_bb − p_b²) / (p_b p_s). (10)

In Section 4 we justify the use of another effective parameter ε, defined in (13), instead of ρ to determine the time correlations in the dichotomous component. The two-time joint probabilities for the resulting process are then expressed as

p(x, y) = p_ss s_x s_y for x ≠ 1, y ≠ 1, (11a)
p(x = 1, y) = π s_y, p(x, y = 1) = π s_x for x, y ≠ 1, (11b)
p(x = 1, y = 1) = p_11, (11c)
π = p_ss s_1 + p_sb, p_11 = p_ss s_1² + 2 p_sb s_1 + p_bb. (11d)

Note that the above notation can be applied to any subsystem instead of the whole system (with the same dichotomous component, as it is system-wide anyway).
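The bookkeeping of these relations can be sanity-checked numerically. A small sketch (parameter values are arbitrary) completes the dichotomous table via stationarity and verifies that summing the two-time table over y recovers the one-time probability p(x = 1) = p_b + p_s s_1:

```python
def two_time(p_s, p_ss, s1):
    # Complete the dichotomous table via stationarity and form the
    # two-time quantities pi and p_11 of the resultant process
    p_b = 1 - p_s
    p_sb = p_s - p_ss                 # off-diagonal element
    p_bb = p_b - p_sb                 # burst-burst element
    pi = p_ss * s1 + p_sb             # p(x = 1, y) = pi * s_y for y != 1
    p11 = p_ss * s1**2 + 2 * p_sb * s1 + p_bb
    return p_sb, p_bb, pi, p11

p_s, p_ss, s1 = 0.6, 0.4, 0.5
p_sb, p_bb, pi, p11 = two_time(p_s, p_ss, s1)
# Marginal consistency: p(x = 1) = p_11 + pi * (1 - s1) = p_b + p_s * s1
lhs = p11 + pi * (1 - s1)
rhs = (1 - p_s) + p_s * s1
print(lhs, rhs)                       # both equal 0.7 for these parameters
```

The identity p_11 + π(1 − s_1) = p_b + p_s s_1 follows by summing the joint probabilities over all y, and holds for any admissible parameter values.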
The mentioned probabilities can be interpreted in terms of the underlying biological system as follows (see details in [19]): p b is the probability of observing the astrocytic subsystem in the excited (high calcium concentration) state, which induces global bursting activity in all neurons, within a specific time discretization bin; p bb is the probability of observing the mentioned state in two time bins separated by the time lag τ, and ρ is the respective time-delayed Pearson correlation coefficient of the astrocytic activity; s x is the probability of observing a specific spatial pattern of spiking x within one time bin in spontaneous neuronal activity (in the absence of an astrocyte-induced burst), and in particular s 1 is the probability that all neurons fire a spike within one time bin in spontaneous activity. In this sense s 1 measures the overall strength of spontaneous activity of the neuronal subsystem. When spiking activity is independent across neurons, the set of parameters {s 1 , p b , ρ} fully determines the "whole minus sum" II in the spiking-bursting model. In [19] these parameters were fitted to match (in the least-squares sense) the two-time probability distribution (11) to the respective "empirical" (numerically obtained) probabilities for the biologically realistic model of the neuron-astrocyte network. This fitting produced the dependence of the spiking-bursting parameters {s 1 , p b , ρ} upon the biological parameters; see Figure 7 in [19].

Model Parameter Scaling
The spiking-bursting stochastic model, as described in Section 3, is redundant in the following sense. In terms of the model definition, there are two distinct states of the model which equally lead to observing the same one-time state of the resultant process with 1s in all bits: firstly, a burst; secondly, a system-wide simultaneous spike in the absence of a burst. These two are indistinguishable by one-time observations. Two-time observations reveal a difference between system-wide spikes on the one hand and bursts on the other, because the latter are assumed to be correlated in time, unlike the former. That said, the "labeling" of bursts versus system-wide spikes exists in the model (by the state of the dichotomous component), but not in the realizations. Proceeding from the realizations, it must be possible to relabel a certain fraction of system-wide spikes into bursts (more precisely, into a time-uncorrelated portion thereof). Such relabeling would change both components {V, S} of the model (the dichotomous and spiking processes), in particular diluting the time correlations of bursts, without changing the actual realizations of the resultant process. This implies the existence of a transformation of the model parameters which keeps the realizations (i.e., the stochastic process as such) invariant. The derivation of this transformation is presented in Appendix A and leads to the following scaling:
s′_x = s_x (1 − α s_1)/(1 − s_1) for x ≠ 1, (12a)
s′_1 = α s_1, (12b)
p′_s = p_s (1 − s_1)/(1 − α s_1), (12c)
p′_ss = p_ss (1 − s_1)²/(1 − α s_1)², (12d)

where α is a positive scaling parameter, and all other probabilities are updated according to Equation (9). The mentioned invariance in particular implies that any characteristic of the process must be invariant to the scaling (12a-d). This suggests a natural choice of a scaling-invariant effective parameter ε defined by

p_ss = p_s²(1 + ε) (13)

to determine the time correlations in the dichotomous component. In conjunction with a second independent parameter of the dichotomous process, for which a straightforward choice is p_s, and with the full one-time probability table s_x for spontaneous activity, these constitute a natural full set of model parameters {s_x, p_s, ε}. The two-time probability table (8) can be expressed in terms of p_s and ε by substituting Equation (13) into Equation (9):

p_ss = p_s²(1 + ε), p_sb = p_bs = p_s p_b − p_s² ε, p_bb = p_b² + p_s² ε. (14)

The requirement of non-negativity of the probabilities imposes the simultaneous constraints

p_s ≤ 1/(1 + ε) ≡ p_s^max (15)

and

ε ≤ (1 − p_s)/p_s ≡ ε_max, (16)

or, equivalently, p_s(1 + ε) ≤ 1. Comparing the off-diagonal term p_sb in (14) to the definition (10) of the Pearson correlation coefficient yields

ρ = ε p_s/p_b; (17)

thus, the sign of ε has the same meaning as that of ρ. Hereinafter we limit ourselves to non-negative correlations ε ≥ 0.
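The invariance can be checked numerically. The sketch below uses one consistent realization of the relabeling described above (the explicit transformation written for the triple {p_s, p_ss, s_1} with scaling parameter α, as derived in Appendix A) and confirms that the observable statistics of the resultant process and the effective parameter ε are unchanged:

```python
def scaled(p_s, p_ss, s1, alpha):
    # Relabel a fraction of system-wide spikes as time-uncorrelated bursts:
    # s1 -> alpha*s1, with p_s and p_ss rescaled so that the resultant
    # process is unchanged (a sketch of the scaling derived in Appendix A)
    r = (1 - s1) / (1 - alpha * s1)
    return p_s * r, p_ss * r * r, alpha * s1

def observables(p_s, p_ss, s1):
    # Statistics of the resultant process, which must be scaling-invariant
    p_b = 1 - p_s
    p_sb = p_s - p_ss
    p_bb = p_b - p_sb
    pi = p_ss * s1 + p_sb
    p11 = p_ss * s1**2 + 2 * p_sb * s1 + p_bb
    eps = p_ss / p_s**2 - 1           # effective parameter of Equation (13)
    return (p_b + p_s * s1,           # one-time p(x = 1)
            pi * (1 - s1),            # total weight of p(x = 1, y != 1)
            p11,                      # p(x = 1, y = 1)
            eps)

base = observables(0.6, 0.4, 0.5)
for alpha in (0.2, 0.5, 0.9):
    new = observables(*scaled(0.6, 0.4, 0.5, alpha))
    assert all(abs(a - b) < 1e-9 for a, b in zip(base, new))
print("scaling leaves the process statistics and eps invariant")
```

Note that ε = p_ss/p_s² − 1 is reproduced exactly for every α, illustrating why it is the natural scaling-invariant correlation parameter.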

Analysis of the Empirical "Whole Minus Sum" Measure for the Spiking-Bursting Process
In this Section we analyze the behavior of the "whole minus sum" empirical II [13], defined by Equations (3) and (4), for the spiking-bursting model as a function of the model parameters, particularly focusing on its transition from negative to positive values.

Expressing the "Whole Minus Sum" Information
Mutual information I_xy for two time instances x and y of the spiking-bursting process is expressed by inserting all one- and two-time probabilities of the process according to (6), (11) into the definitions (1), (2). The full derivation is given in Appendix B and leads to an expression which was used in [19]:

I_xy = 2{(1 − s_1)p_s} + 2{1 − (1 − s_1)p_s} − {(1 − s_1)² p_ss} − 2{(1 − s_1)π} − {p_11}, (18)

where we denote for compactness {q} = −q log2 q.
We exclude from further consideration the following degenerate cases, which automatically give I_xy = 0 by definition (1):

p_s = 0, s_1 = 1, ε = 0, p_s = 1, (20)

where the former two correspond to a deterministic "always 1" state, for which all entropies in (1) are zero, and the latter two produce no predictability, which implies H_xy = H_x + H_y. The particular case s_1 = 0 in (18) reduces to

I_xy = 2{p_s} + 2{p_b} − {p_ss} − 2{p_sb} − {p_bb}, (21a)

which coincides with the mutual information for the dichotomous component taken alone, and can be seen as a function of just two independent parameters of the dichotomous component, for which we choose p_s and ε as suggested in Section 4. Using the expressions (14) for the two-time probabilities, we rewrite (21a) in the form

I_0(p_s, ε) = 2{p_s} + 2{1 − p_s} − {p_s²(1 + ε)} − 2{p_s(1 − p_s(1 + ε))} − {(1 − p_s)² + p_s² ε}. (21b)

Expression (21b) explicitly defines a function I_0(p_s, ε), which turns out to be a universal function allowing one to express mutual information (18) and effective information (3) in terms of the model parameters, as we show below. Typical plots of I_0(p_s, ε) versus p_s at several fixed values of ε are shown with blue solid lines in Figure 2. The formula (18) can be recovered from (21a,b) by virtue of the scaling (12a-d): setting the scaled spike probability s′_1 = 0 in (12b) and substituting the correspondingly scaled value p′_s = (1 − s_1)p_s as per (12c) in place of the first argument of the function I_0(p_s, ε) defined in (21b), while the parameter ε remains invariant to the scaling. This produces a simplified expression

I_xy = I_0((1 − s_1)p_s, ε), (22)

which is exactly equivalent to (18) for any s_1. We emphasize that hereinafter expressions containing I_0(·, ·), such as (22), (23), and (30b), imply that p_s in (21b) must be substituted with the actual first argument of I_0(·, ·), e.g., by (1 − s_1)p_s in (22). The same applies when the approximate expression (35) for I_0(·, ·) is used.
Given a bipartition AB (see Section 2), this result is applicable as well to any subsystem A (B), with s_1 replaced by s_A (s_B), which denote the probability of a subsystem-wide simultaneous spike in the absence of a burst, and with the same parameters of the dichotomous component (here p_s, ε). Then effective information (3) is expressed as

Φ_eff = I_0((1 − s_1)p_s, ε) − I_0((1 − s_A)p_s, ε) − I_0((1 − s_B)p_s, ε). (23)

Hereafter in this section we assume the independence of spontaneous activity across the network nodes (neurons), which implies

s_1 = s_A s_B, (24)

so that

Φ_eff = f(s_A), (25a)

where

f(s) = I_0((1 − s_1)p_s, ε) − I_0((1 − s)p_s, ε) − I_0((1 − s_1/s)p_s, ε). (25b)

Essentially, according to (25a,b), the function f(s) shows the dependence of effective information Φ_eff upon the choice of the bipartition, which is characterized by the value of s_A = s (if A is any non-empty subsystem, then s_A is defined as the probability of the spontaneous occurrence of 1s in all bits of A at the same instance of the discrete time), while the function parameter s_1 determines the intensity of spontaneous spiking activity. Note that the function I_0(·, ·) in (21b) is defined only when its first argument is in the range (0, 1); thus, the definition domain of f(s) in (25b) is

s ∈ (s_1, 1). (26)
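Both I_0 and f are straightforward to evaluate numerically. The sketch below builds I_0 from the two-time table of the dichotomous component and checks two expected properties: I_0 vanishes when ε = 0 (no time correlations), and f obeys the bipartition-renaming symmetry f(s_1/s) = f(s) used below:

```python
import math

def I0(p, eps):
    # Mutual information (bits) of the dichotomous component with
    # spiking probability p and effective correlation parameter eps
    p_b = 1 - p
    table = {('s', 's'): p * p * (1 + eps),
             ('s', 'b'): p * (1 - p * (1 + eps)),
             ('b', 's'): p * (1 - p * (1 + eps)),
             ('b', 'b'): p_b * p_b + p * p * eps}
    one = {'s': p, 'b': p_b}
    return sum(v * math.log2(v / (one[q] * one[r]))
               for (q, r), v in table.items() if v > 0)

def f(s, s1, p_s, eps):
    # Effective information for a bipartition with s_A = s, s_B = s1 / s
    # (independent spontaneous activity)
    return (I0((1 - s1) * p_s, eps)
            - I0((1 - s) * p_s, eps)
            - I0((1 - s1 / s) * p_s, eps))

p_s, eps, s1 = 0.6, 0.2, 0.09
print(I0(p_s, 0.0))                                  # independence: 0
print(f(0.2, s1, p_s, eps), f(0.45, s1, p_s, eps))   # symmetric pair: equal
```

The pair s = 0.2 and s = s_1/0.2 = 0.45 corresponds to the same bipartition with the roles of A and B exchanged, hence the equal values.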

Determining the Sign of the "Whole Minus Sum" Information
According to (4), the necessary and sufficient condition for the "whole minus sum" empirical II to be positive is the requirement that Φ_eff be positive for every bipartition AB. Due to (25a,b), this requirement can be written in the form

f(s_A) > 0 for all s_A ∈ {s_A}, (27)

where {s_A} is the set of s_A values over all possible bipartitions AB.
Expanding the set of s values in (27) to the whole definition domain (26) of f(s) leads to a sufficient (generally, stronger) condition for positive II:

f(s) > 0 for all s ∈ (s_1, 1). (28)
Note that f(s) by definition (25b) satisfies f(s = s_1) = f(s = 1) = 0, f′(s = s_1) > 0, and (due to the invariance to mutual renaming of the subsystems A and B) f(s_1/s) = f(s). (All mentioned properties and the subsequent reasoning can be observed in Figure 3, which shows a few sample plots of f(s).) The latter symmetry implies that the number of extrema of f(s) on s ∈ (s_1, 1) must be odd, one of them always being at s = √s_1. If the latter is the only extremum, then it is a positive maximum, and (28) is thus fulfilled automatically. In the case of three extrema, f(√s_1) is a minimum, which can change sign. In both these cases the condition (28) is equivalent to the requirement

f(√s_1) > 0, (29)

which can be rewritten as

g(s_1) > 0, (30a)

where

g(s_1) = I_0((1 − s_1)p_s, ε) − 2 I_0((1 − √s_1)p_s, ε). (30b)

The reasoning above essentially reduces the problem of determining the sign of II to determining the sign of the extremum f(√s_1).
The equivalence of (29) to (28) could be broken if f(s) had five or more extrema. As suggested by a numerical calculation on a grid of p_s ∈ [0.01, 0.99] and ρ ∈ [0.01, 1], both with step 0.01, this exception never occurs, although we did not prove this rigorously. Based on the reasoning above, in the following we assume the equivalence of (29) (and (30)) to (28).
A typical scenario of transformations of f (s) with the change of s 1 is shown in Figure 3.
Here the extremum f(√s_1) (shown with a dot) transforms with the decrease of s_1 from a positive maximum into a minimum, which in turn decreases from positive through zero into negative values. Note that, by construction, the function g(s_1) defined in (30b) expresses effective information Φ_eff from (3) for the hypothetical bipartition characterized by s_A = s_B = √s_1, which may or may not exist in the actual particular system. If such a "symmetric" bipartition exists, then the value s_A = √s_1 belongs to the set {s_A} in (27), which implies that (29) (same as (30)) is equivalent not only to (28), but also to the necessary and sufficient condition (27). Otherwise, (28) (equivalently, (29) or (30)) remains only a sufficient condition for (27). Except for the degenerate cases (20), g(s_1) is negative at s_1 = 0 and has the limit g(s_1 → 1 − 0) → +0 (−0 and +0 denote the left and right one-sided limits), because 1 − s_1 → 2(1 − √s_1) and I_0(q, ε) = O(q²) as q → 0, so that I_0((1 − s_1)p_s, ε) → 4 I_0((1 − √s_1)p_s, ε); hence, g(s_1) changes sign at least once on s_1 ∈ (0, 1). Based on numerical evidence, we assume that g(s_1) changes sign exactly once on (0, 1), without providing a rigorous proof of the latter statement (it was confirmed up to machine precision for each combination of p_s ∈ [0.01, 0.99] and ρ ∈ [0.01, 1], both with step 0.01; note also that for the asymptotic case (38) this statement is rigorous). In line with the above, the solution to (30a) has the form

s_1 > s_1^min(p_s, ε), (33)

where s_1^min(p_s, ε) is the unique root of g(s_1) on (0, 1). Several plots of s_1^min(p_s, ε) versus p_s at fixed ε, and versus ε at fixed p_s, obtained by numerically solving for the zero of g(s_1), are shown in Figure 4 with blue solid lines.
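The root s_1^min can be located by bisection on g(s_1), since g is negative near s_1 = 0 and positive near s_1 = 1 (a sketch; parameter values are arbitrary):

```python
import math

def I0(p, eps):
    # Mutual information of the dichotomous component
    p_b = 1 - p
    pairs = [(p * p * (1 + eps), p * p),
             (p * (1 - p * (1 + eps)), p * p_b),
             (p * (1 - p * (1 + eps)), p * p_b),
             (p_b * p_b + p * p * eps, p_b * p_b)]
    return sum(v * math.log2(v / w) for v, w in pairs if v > 0)

def g(s1, p_s, eps):
    # Effective information at the hypothetical symmetric bipartition
    return I0((1 - s1) * p_s, eps) - 2 * I0((1 - math.sqrt(s1)) * p_s, eps)

def s1_min(p_s, eps, lo=1e-6, hi=0.9, iters=80):
    # Bisection for the (numerically unique) sign change of g on (0, 1)
    assert g(lo, p_s, eps) < 0 < g(hi, p_s, eps)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, p_s, eps) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

root = s1_min(p_s=0.6, eps=0.05)
print(root)    # about 0.03; always below (sqrt(2) - 1)**2 ~ 0.172
```

For weak time correlations the location of the root is nearly independent of ε, in line with the asymptotic analysis of the next section.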
This result identifies a region in the parameter space of the model where the "whole minus sum" information is positive. From the viewpoint of the underlying biological system, the quantity s_1^min determines the minimal sufficient intensity of spontaneous neuronal spiking activity for positive II. According to the result in Figure 4, within the assumption of independent spiking across the network (24), values s_1 ≳ 0.17 lead to positive II regardless of the other parameter values, and this threshold decreases further when p_s is increased, which implies decreasing the frequency of occurrence of astrocyte-activated global bursting p_b = 1 − p_s.

[Figure 4: s_1^min versus p_s at fixed ε (panel (a)) and versus ε within the range (16) at p_s = 0.5, 0.6, 0.7, from top to bottom (panel (b)). The vertical position of the red dashed lines is the result of (38); the horizontal span denotes the estimated applicability range (36b).]

Asymptotics for Weak Correlations in Time
Further insight into the dependence of mutual information I_xy (and, consequently, of Φ_eff and II) upon the parameters can be obtained by expanding the definition of I_0(p_s, ε) in (21b) in powers of ε (the limit of weak correlations in time), which yields

I_0(p_s, ε) ≈ ε² p_s² / (2 ln 2 (1 − p_s)²). (35)

Estimating the residual term (see details in Appendix C) indicates that the approximation by the leading term (35) is applicable when

|ε| ≪ 1 (36a)

and

|ε| p_s/(1 − p_s) ≪ 1. (36b)

Solving (36b) for p_s rewrites it in the form of an upper bound for p_s:

p_s < 1/(1 + |ε|) (36c)

(the use of the "≪" sign is not appropriate in (36c), because this inequality does not imply a small ratio between its left-hand and right-hand parts). Note that the inequalities (36b), (36c) are not weaker than the formal upper bounds ε_max in (16) and p_s^max in (15), which arise from the definition (13) of ε due to the requirement of non-negative probabilities. Approximation (35) is plotted in Figure 2 with red dashed lines along with the corresponding upper bounds of the applicability range (36c) denoted by red dots (note that a large ε violates (36a) anyway, so in this case (36c) has no effect). Mutual information (35) scales with ε within range (36) as ε² and vanishes as ε → 0. The same holds for effective information (23). Since the normalizing denominator in (4) contains one-time entropies which do not depend on ε at all, this scaling of Φ_eff does not change the minimum information bipartition, finally implying that II also scales as ε². That said, as the factor ε² does not affect the sign of Φ_eff, the lower bound s_1^min in (33) exists and is determined only by p_s in this limit.
Substituting the approximation (35) for I_0(·, ·) into the definition of g(s_1) in (30b) after simplifications reduces the equation g(s_1) = 0 to the following (see the comment below Equation (22)):

(√2 − 1) p_s s_1 − √s_1 + (√2 − 1)(1 − p_s) = 0, (37)

whose solution in terms of s_1 on 0 < s_1 < 1 equals s_1^min, according to the reasoning behind Equation (33). Solving (37) as a quadratic equation in terms of √s_1 produces a unique root on (0, 1), which yields

s_1^min = [ (1 − √(1 − 4(√2 − 1)² p_s(1 − p_s))) / (2(√2 − 1) p_s) ]². (38)

The result of (38) is plotted in Figure 4 with red dashed lines: in panel (a) as a function of p_s, and in panel (b) as horizontal lines whose vertical position is the result of (38) and whose horizontal span denotes the estimated applicability range (36b) (note that condition (36a) also applies, and becomes stronger than (36b) when p_s < 1/2).
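Assuming the quadratic in √s_1 obtained from the leading-order expansion, the closed-form threshold can be sketched as follows; its limiting value (√2 − 1)² ≈ 0.172 for p_s → 0 and its monotone decrease with growing p_s match the behavior described in Section 5:

```python
import math

SQ = math.sqrt(2) - 1

def s1_min_asym(p_s):
    # Closed-form root of the quadratic in sqrt(s1) from the leading-order
    # (weak time correlation) expansion; valid within the range (36)
    disc = 1 - 4 * SQ * SQ * p_s * (1 - p_s)
    a = (1 - math.sqrt(disc)) / (2 * SQ * p_s)   # root of the quadratic in (0, 1)
    return a * a

# Tends to (sqrt(2) - 1)**2 ~ 0.172 as p_s -> 0, decreases as p_s grows:
for p in (0.001, 0.3, 0.6, 0.9):
    print(p, s1_min_asym(p))
```

The upper limit (√2 − 1)² ≈ 0.172 reproduces the universal bound: spontaneous activity above this level yields positive "whole minus sum" information in this regime regardless of the burst statistics.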

Comparison of Integrated Information Measures
In this Section we compare the outcome of two versions of empirical integrated information measures available in the literature: the "whole minus sum" effective information Φ_eff (3) from [13], which is used elsewhere in this study, and the "decoder based" information Φ* introduced in [16] and expressed by Equations (5a-c). We calculate both measures by their respective definitions using the one- and two-time probabilities from Equations (6a,b) and (11a-d) for the spiking-bursting model with N = 6 bits, assuming no spatial correlations among bits in spiking activity, with the same spike probability P in each bit. In this case

s_x = P^m(x) (1 − P)^(N−m(x)), (39)

where m(x) is the number of ones in the binary word x.
We consider only a symmetric bipartition with subsystems A and B consisting of N/2 = 3 bits each. Due to the assumed equal spike probabilities in all bits and the absence of spatial correlations of spiking, this implies complete equivalence between the subsystems. In particular, in the notations of Section 5 we get

s_A = s_B = P^(N/2) = √s_1. (40)

This choice of the bipartition is firstly due to the fact that the sign of effective information for this bipartition determines the sign of the resultant "whole minus sum" II (although the actual value of II is determined by the minimum information bipartition, which may be different). This has been established in Section 5 (see the reasoning behind Equations (27)-(30) and further on); moreover, the function g(s_1) introduced in Equation (30b) expresses effective information for this particular bipartition,

Φ_eff = g(s_1), (41)

thus the analysis of the effective information sign in Section 5 applies to this symmetric bipartition. Secondly, the choice of the symmetric bipartition is consistent with available comparative studies of II measures [18], where it was substantiated by the conceptual requirement that highly asymmetric partitions should be excluded [2], and by the lack of a generally accepted specification of the minimum information bipartition; for further discussion, see [18].
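As a cross-check of the symmetric-bipartition formula, the sketch below builds the full two-time table for N = 6 bits by brute force (independent spiking with per-bit probability P) and compares Φ_eff for the bipartition into the first and last three bits against the I_0-based expression; parameter values are arbitrary:

```python
import itertools
import math

def I0(p, eps):
    # Mutual information of the dichotomous component
    p_b = 1 - p
    pairs = [(p * p * (1 + eps), p * p),
             (p * (1 - p * (1 + eps)), p * p_b),
             (p * (1 - p * (1 + eps)), p * p_b),
             (p_b * p_b + p * p * eps, p_b * p_b)]
    return sum(v * math.log2(v / w) for v, w in pairs if v > 0)

def joint_table(N, P, p_s, eps):
    # Two-time joint table of the spiking-bursting process with
    # independent spiking: s_x = P**m(x) * (1 - P)**(N - m(x))
    p_ss = p_s * p_s * (1 + eps)
    p_sb = p_s - p_ss
    p_bb = (1 - p_s) - p_sb
    s = {x: P ** sum(x) * (1 - P) ** (N - sum(x))
         for x in itertools.product((0, 1), repeat=N)}
    one = (1,) * N
    s1 = s[one]
    pi = p_ss * s1 + p_sb
    p11 = p_ss * s1 ** 2 + 2 * p_sb * s1 + p_bb
    pxy = {}
    for x in s:
        for y in s:
            if x == one and y == one:
                pxy[x, y] = p11
            elif x == one:
                pxy[x, y] = pi * s[y]
            elif y == one:
                pxy[x, y] = pi * s[x]
            else:
                pxy[x, y] = p_ss * s[x] * s[y]
    return pxy, s1

def entropy(t):
    return -sum(v * math.log2(v) for v in t.values() if v > 0)

def mi(pxy):
    px, py = {}, {}
    for (x, y), v in pxy.items():
        px[x] = px.get(x, 0.0) + v
        py[y] = py.get(y, 0.0) + v
    return entropy(px) + entropy(py) - entropy(pxy)

def restrict(pxy, idx):
    out = {}
    for (x, y), v in pxy.items():
        k = (tuple(x[i] for i in idx), tuple(y[i] for i in idx))
        out[k] = out.get(k, 0.0) + v
    return out

N, P, p_s, eps = 6, 0.5, 0.6, 0.2
pxy, s1 = joint_table(N, P, p_s, eps)
phi_brute = mi(pxy) - 2 * mi(restrict(pxy, (0, 1, 2)))
phi_formula = I0((1 - s1) * p_s, eps) - 2 * I0((1 - math.sqrt(s1)) * p_s, eps)
print(phi_brute, phi_formula)   # the two values coincide
```

The agreement confirms that the subsystem marginals of the full process are themselves spiking-bursting processes with s_A = √s_1, as stated in Section 3.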
We have studied the dependence of the mentioned effective information measures Φ_eff and Φ* upon spiking activity, which is controlled by s_1, at different fixed values of the parameters p_s and ε characterizing the bursting component. A typical dependence of Φ_eff and Φ* upon s_1, taken at p_s = 0.6 with several values of ε, is shown in Figure 5, panel (a).
The behavior of the "whole minus sum" effective information Φ_eff (41) (blue lines in Figure 5) is found to agree with the analytical findings of Section 5:
• Φ_eff transitions from negative to positive values at a threshold value s_1 = s_1^min, which is well approximated by the formula (38) when ε is small, as required by (36a,b); the result of Equation (38) is indicated in each panel of Figure 5 by an additional vertical grid line labeled s_1^min on the abscissa axis (cf. Figure 4);
• Φ_eff reaches a maximum on the interval s_1^min < s_1 < 1 and tends to zero (from above) as s_1 → 1;
• Φ_eff scales with ε as ε² when (36a,b) hold.
To verify the scaling observation, we plot the scaled values of both information measures, Φ_eff/ε² and Φ*/ε², in panels (b)-(d) of Figure 5 for several fixed values of p_s and ε. Expectedly, the scaling fails at p_s = 0.7, ε = 0.4 in panel (d), as (36b) is not fulfilled in this case. Furthermore, the "decoder based" information Φ* (plotted with red lines in Figure 5) behaves mostly in the same way, apart from being always non-negative (which was one of the key motivations for introducing this measure in [16]). At the same time, the sign transition point s_1^min of the "whole minus sum" information is associated with a rapid growth of the "decoder based" information. When s_1 is increased towards 1, the two measures converge. Remarkably, the scaling as ε² is found to be shared by both effective information measures.

Discussion
In general, the spiking-bursting model is completely specified by the combination of a full single-time probability table s_x (consisting of 2^N probabilities of all possible outcomes, where N is the number of bits) for the time-uncorrelated spontaneous activity, along with two independent parameters (e.g., p_s and ε) for the dichotomous component. This combination is, however, redundant in that it admits a one-parameter scaling (12) which leaves the resultant stochastic process invariant.
Condition (30) was derived assuming that spiking activity in the individual bits (i.e., nodes, or neurons) constituting the system is independent among the bits, which implies that the probability table s_x is fully determined by the N spike probabilities of the individual nodes. The condition is formulated in terms of p_s, ε, and a single parameter s_1 (the system-wide spike probability) for the spontaneous activity, agnostic of the "internal structure" of the system, i.e., the spike probabilities of the individual nodes. This condition guarantees that the "whole minus sum" effective information is positive for any bipartition, regardless of the mentioned internal structure. Moreover, in the limit (36) of weak correlations in time, the inequality (30a) can be explicitly solved in terms of s_1, producing the solution (33), (38).
In this way, the inequality (33) together with the asymptotic estimate (38) supplemented by its applicability range (36) specifies the region in the parameter space of the system, where the "whole minus sum" II is positive regardless of the internal system structure (sufficient condition). The internal structure (though still without spike correlations across the system) is taken into account by the necessary and sufficient condition (27) for positive II.
The mentioned conditions were derived under the assumption (24) of absent correlations between spontaneous activity in individual bits. If correlation exists and is positive, then s_1 > s_A s_B, or s_B < s_1/s_A. Then, comparing the expressions for Φ_eff (23) (general case) and (25) (space-uncorrelated case), and taking into account that I_0(·, ε) is an increasing function of its first argument, we find Φ_eff < f(s_A), cf. (25a). This implies that any necessary condition for positive II remains as such. Likewise, in the case of negative correlations we get Φ_eff > f(s_A), implying that a sufficient condition remains as such.

Conclusions
The present study substantiates, refines, and quantifies qualitative observations regarding II in the spiking-bursting model which were initially made in [19]. The existence of lower bounds on spiking activity (characterized by s_1) required for positive "whole minus sum" II, which was noticed in [19], is now expressed in the form of an explicit inequality (33) with the estimate (38) for the bound s_1^min. The observation of [19] that s_1^min is typically determined mostly by the burst probability and depends weakly upon the time correlations of bursts also becomes supported by the quantitative results (33), (38). In particular, there is a range of spiking activity intensity s_1 ≳ 0.17 where the "whole minus sum" information is positive regardless of the other system parameters, provided the spiking activity is spatially uncorrelated or negatively correlated across the system. When the burst probability is decreased (which implies less frequent activation of the astrocyte subsystem), the threshold value for spiking activity s_1^min also decreases. We found that II scales as ε², where ε is proportional (as per Equation (17)) to the Pearson time-delayed correlation coefficient of the bursting component (which essentially characterizes the duration of bursts), for small ε (namely, within (36)), when the other parameters (i.e., p_s and the spiking probability table s_x) are fixed. For the "whole minus sum" information, this is an analytical result. Note that the reasoning behind this result does not rely upon the assumption of spatial non-correlation of spiking activity (between bits), and thus applies to arbitrary spiking-bursting systems. According to a numerical calculation, this scaling approximately holds for the "decoder based" information as well.
Remarkably, II cannot exceed the time-delayed mutual information for the system as a whole, which in the case of the spiking-bursting model in its present formulation is no greater than 1 bit.
The model provides a basis for possible modifications in order to apply integrated information concepts to systems exhibiting similar, but more complicated behavior (in particular, to neuronal [26][27][28][29] and neuron-astrocyte [24,30] networks). Such modifications might incorporate non-trivial spatial patterns in bursting, and causal interactions within and between the spiking and bursting subsystems.
The model can also be of interest as a new discrete-state test bench for different formalizations of integrated information, while available comparative studies of II measures mainly focus on Gaussian autoregressive models [17,18].

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Derivation of Parameters Scaling of the Spiking-Bursting Model
In order to formalize the reasoning in Section 4, we introduce an auxiliary 3-state process W with the set of one-time states {s', d, b}, where s' and b are always interpreted as spiking and bursting states in terms of Section 3, and d is another state, which is assumed to produce all bits equal to 1 like in a burst, but in a time-uncorrelated manner (which is formalized by Equation (A4) below) like in a system-wide spike. When W is properly defined (by specifying all necessary probabilities, see below) and supplemented with a time-uncorrelated process S' as a source of spontaneous activity for the state s', these together constitute a completely defined stochastic model {W, S'}.
This 3-state based model may be mapped onto equivalent (in terms of the resultant realizations) 2-state based models as in Section 3 in an ambiguous way, because the state d may be equally interpreted either as a system-wide spike or as a time-uncorrelated burst, thus producing two different dichotomous processes (which we denote as V and V') for the equivalent spiking-bursting models. The relationship between the states of W, V and V' is illustrated by diagram (A1).
As soon as d-states of W are interpreted in V as (spiking) s-states, the spontaneous activity process S accompanying V has to be supplemented with system-wide spikes whenever W = d, in addition to the spontaneous activity process S' for V'. In order to maintain the absence of time correlations in the spontaneous activity (which is essential for the analysis in Section 5), we assume a time-uncorrelated choice between W = s' and W = d when V = s (which manifests below in Equation (A4)). Then the difference between the spontaneous components S and S' comes down to a difference in the corresponding one-time probability tables s_x and s'_x.
In the following, we proceed from the dichotomous process V defined as in Section 3, then define a consistent 3-state process W, and further obtain another dichotomous process V' for an equivalent model. Finally, we establish the relation between the corresponding probability tables of spontaneous activity s_x and s'_x.
The first dichotomous process V has states denoted by {s, b} and is related to W according to the rule V = s when W = s' or W = d, and V = b whenever W = b (see diagram (A1)). Assume fixed conditional probabilities p(W = s' | V = s) = α and p(W = d | V = s) = β with α + β = 1, which imply the one-time probabilities for W. The mentioned requirement of a time-uncorrelated choice between W = s' and W = d when V = s is expressed by the factorized two-time conditional probabilities

p(W = s's' | V = ss) = α², (A4a)
p(W = dd | V = ss) = β². (A4b)
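As a concrete illustration of this construction, the sketch below (function names are ours; the transition probabilities of V are parameterized by p_s and p_ss under the stationarity assumption of Section 3) samples a realization of V, refines each spiking step into W = s' or W = d by the i.i.d. α/β choice of (A4), and then reads the same realization back as V' by treating d as a burst:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_W(T, p_s, p_ss, alpha):
    """Sample the dichotomous process V (states 's'/'b') as a stationary
    two-state Markov chain with one-time probability p_s and two-time
    probability p_ss, then refine each V = 's' step into W = s' or W = d
    by an i.i.d. choice with p(s') = alpha, as required by (A4)."""
    p_s_to_s = p_ss / p_s                    # p(V_{t+1}=s | V_t=s)
    p_b_to_s = (p_s - p_ss) / (1.0 - p_s)    # p(V_{t+1}=s | V_t=b)
    V = ['s' if rng.random() < p_s else 'b']
    for _ in range(T - 1):
        stay = p_s_to_s if V[-1] == 's' else p_b_to_s
        V.append('s' if rng.random() < stay else 'b')
    W = [("s'" if rng.random() < alpha else 'd') if v == 's' else 'b'
         for v in V]
    return V, W

def map_to_V_prime(W):
    """Second reading of the same realization: d is treated as a burst."""
    return ['b' if w in ("d", 'b') else 's' for w in W]
```

By construction, V and V' differ only at the time steps where W = d, which is exactly the interpretational ambiguity discussed above.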
Equations (A8), (A9) and (A11) fully describe the transformation of the spiking-bursting model which keeps the resultant stochastic process invariant by the construction of the transform. Taking into account that the dichotomous process is fully described by just two independent quantities, e.g., p s and p ss , all other probabilities being expressed in terms of these due to normalization and stationarity, the full invariant transformation is uniquely identified by a combination of (A11a,b), (A8a) and (A9a), which together constitute the scaling (12).

Appendix B. Expressing Mutual Information for the Spiking-Bursting Process
One-time entropy H_x for the spiking-bursting process is expressed by (2) with the probabilities p(x) taken from (6), where the additional terms besides the sum over x account for the specific expression (6b) for p(x = 1). Using the relation {ab} ≡ a{b} + {a}b, which is derived directly from (19), and collecting similar terms, we arrive at (A14), where H_s is the entropy of the spiking component taken alone.

The two-time entropy is expressed similarly, by substituting the probabilities p(xy) from (11) into the definition of entropy and taking into account the special cases with x = 1 and/or y = 1, which gives (A16). Here we used the reasoning that ∑_xy {s_x s_y} is the two-time entropy of the spiking component taken alone, which is (due to the postulated absence of time correlations in it) twice the one-time entropy H_s (this can of course equally be found by direct calculation). Similarly, we get (A17a-c).

Substituting (A17a-c) into (A16), using (A13) where applicable, and collecting similar terms with the relation p_ss + π − p_ss s_1 ≡ p_s (A18) taken into account, we arrive at (A19). Finally, the expression (18) for mutual information is obtained by inserting (A14) and (A19) into the definition (1), with the stationarity H_y = H_x taken into account.
where the notation ε_max from (16) is used. We note that when ε_max < 1, the condition (A32c) is the strongest among (A32a-c); when ε_max > 1, the condition (A32a) is the strongest. Therefore, in both cases (A32b) can be dropped, thus producing (36).