1. Introduction
Integrated information theory (IIT, [1]) attempts to explain consciousness by characterizing its essential phenomenal properties and then postulating that these phenomenal properties must be reflected operationally in properties of its physical substrate. IIT starts by noting that experience exists, and that every experience is intrinsic, specific, unitary, definite, and structured. IIT then translates these phenomenological axioms into physical postulates, where physical is understood operationally as taking and making a difference—i.e., as cause–effect power. Accordingly, IIT requires that a substrate exists (has cause–effect power) and that its cause–effect power is intrinsic, specific, irreducible, definite, and structured.
In IIT, the intrinsic, specific, unitary, and definite cause–effect power of a system is quantified by integrated information [1,2]. The definition of integrated information has been refined over time in an attempt to strengthen the mapping from essential phenomenal properties (axioms), to corresponding physical properties (postulates), and then to a mathematical framework for assessing the degree to which systems satisfy them [1,3,4,5,6]. For an overview of IIT’s development and current formalism, see the IIT Wiki [7]. Prior work introduced the requirement that units have a repertoire of alternative states so that they can take and make a difference [4,5]. The primary goal of the current work is to make explicit in the measure of system integrated information ($\varphi_s$) the requirement that a system must provide itself with a repertoire of alternative states.
For a system to have intrinsic cause–effect power, it must satisfy two complementary requirements. First, it must provide itself with a repertoire of alternative cause–effect states. Second, it must specify one of the potential cause–effect states by increasing the probability of one particular cause–effect state relative to the alternatives. In Section 2, we operationalize these requirements through quantities grounded in IIT’s intrinsic difference measure (ID) [8]. We introduce a measure of intrinsic differentiation, assessed as the difference between a system’s conditional distribution and a maximally specific distribution, which quantifies the degree to which alternative states are intrinsically available. Intrinsic specification is assessed as the difference between the conditional distribution and a maximally differentiated distribution, quantifying the degree to which the system specifies one particular state [1]. These quantities are incorporated into the mathematical framework, leading to an updated account of integrated information, $\varphi_s$. Altogether, this framework clarifies the operational requirements for existence within IIT: a system with $\varphi_s > 0$ must both provide itself with a repertoire of alternatives and specify one of them, and it must do so as a unified whole. These dual requirements sharpen the definitions of intrinsic information and integrated information, with important implications for identifying and characterizing substrates of consciousness.
In Section 3, we illustrate the behavior of the refined measures in simple examples, beginning with the case of an isolated binary unit. We then extend the analysis to small systems of micro units and finally to macro units that represent coarse-grained substrates such as neural assemblies. The examples show how intrinsic differentiation grows with system size, how it interacts with intrinsic specification, and how their tradeoff provides a principled criterion for measuring intrinsic existence.
Finally, in Section 4, we discuss implications of this updated measure, especially as it relates to the fundamental role of intrinsic differentiation, which can be thought of as intrinsic ‘indeterminism’. We consider sources of indeterminism at the macro level—including fundamental stochasticity at the micro level, mappings from micro grains to macro grains, and fluctuations in background conditions impinging on the system—that may contribute to a system’s intrinsic differentiation, and thus to its intrinsic cause–effect power.
2. Theory
A physical system can possess intrinsic cause–effect power only to the extent that it can provide itself with a repertoire of potential states. To illustrate this point, consider the case of a single-unit system where the unit’s state transitions implement a deterministic COPY logic (i.e., if the current state is $s$, both the past and future states are also $s$ with probability 1). From the extrinsic perspective of an experimenter, we can set the system in one state and observe its behavior, then set the system into the other state and observe its behavior again, repeat this many times, and finally conclude that it implements COPY logic. However, for the system itself, left to its own devices, there is only $s$, with no possible alternatives. With no alternatives, the system cannot ‘make a difference’ to itself, because from its intrinsic perspective, there is only the one option—there is no difference to be made.
This section formalizes the above intuition by (i) revising the notion of intrinsic information from [1] so that it now considers the extent to which a system provides itself with a repertoire of potential states, and (ii) carrying this revision through to the definition of system integrated information. The following uses notation and conventions from [1], and further details about the formalism, including aspects not revised in the current work, can be found there.
2.1. Mathematical Preliminaries
Throughout, we assume a discrete-time finite-state dynamical system governed by a transition-probability matrix (TPM) and adopt the causal-intervention semantics of the $\mathrm{do}(\cdot)$ operator [9]. Let $U = \{U_1, \dots, U_n\}$ be a set of $n$ binary units whose current state is $u$ with $u \in \Omega_U$. The TPM for $U$ is

$$T_U(\bar{u}, u) = p(\bar{u} \mid \mathrm{do}(u)), \quad (1)$$

where $\bar{u}, u \in \Omega_U$ are any two system states.
Generally, we are interested in subsystems $S \subseteq U$ in a current state $s$. Set operations on states, such as $u \setminus s$, are imprecise expressions, as $s$ and $u$ are not technically sets, and this should instead be written as the state of the units $U \setminus S$. However, hereafter we abuse notation and use the former expression, as it simplifies the exposition.
The complementary set $W = U \setminus S$ in state $w = u \setminus s$ is considered the background conditions for the purpose of evaluating the intrinsic cause–effect power of $S$. Before assessing the intrinsic cause–effect power of $S$, the background conditions are accounted for by causal marginalization. At the micro grain, this process involves causally marginalizing the background units, conditional on the current state of $U$ [1]. The causal marginalization process in IIT differs slightly from the usual marginalization in probability theory. The marginalization is performed for each unit separately, and they are then recombined using a product, to ensure that the resulting system TPM still has the conditional independence property. Letting $p_{c/e}(\bar{w} \mid u)$ be the conditional distribution of $W$ given the current state $u$ (for causes, we are interested in the conditional distribution of the past state of $W$ and for effects the current state of $W$; see [1]), the corresponding TPM is

$$T_{c/e}(\bar{w}, u) = \prod_{W_i \in W} p_{c/e}(\bar{w}_i \mid u). \quad (2)$$
When the elements of $S$ are macro units (grouped over multiple units and/or multiple updates), Equation (2) is generalized by (1) discounting micro connections extrinsic to $S$; (2) extending the modified micro TPM to sequences of updates; (3) causally marginalizing the background; and (4) compressing the resulting sequence probabilities into macro-state probabilities (see Section 2.2 of [6]).
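To make the recombination step concrete, the following sketch (a simplified illustration of our own, not the full IIT pipeline; the example system is hypothetical) contrasts direct joint marginalization with unit-by-unit marginalization followed by a product, which enforces conditional independence among the units’ outputs:

```python
# A 2-unit system with correlated outputs: from any current state,
# the next state is (0,0) or (1,1) with probability 1/2 each.
def tpm(current, nxt):
    return 0.5 if nxt in ((0, 0), (1, 1)) else 0.0

STATES = [(a, b) for a in (0, 1) for b in (0, 1)]

def unit_marginal(current, unit):
    """p(next state of `unit` is ON | current state)."""
    return sum(tpm(current, nxt) for nxt in STATES if nxt[unit] == 1)

current = (0, 0)
# Direct joint probability of the correlated next state (1, 1):
p_joint = tpm(current, (1, 1))
# Per-unit marginalization, recombined with a product:
p_product = unit_marginal(current, 0) * unit_marginal(current, 1)

print(p_joint)    # 0.5
print(p_product)  # 0.25
```

The product distribution assigns probability 0.25 to the correlated outcome rather than 0.5: performing the marginalization per unit and recombining with a product guarantees that the resulting TPM factorizes over units.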
The cause and effect TPMs ($T_c$, $T_e$) form the basis of the intrinsic cause–effect power of $S$. Quantifying the cause–effect power of the system is done using probability distributions extracted from the TPMs, and measuring the difference made to those distributions by some intervention. For example, the difference between intact and partitioned probability distributions is used to quantify integrated information. The ID is a unique measure of the difference between two probability distributions that satisfies a set of properties motivated by the postulates of IIT [8]. Given two probability distributions, $p$ and $q$, the intrinsic difference is

$$\mathrm{ID}(p, q) = \max_{s} \left[ p(s) \log_2\!\left(\frac{p(s)}{q(s)}\right) \right].$$

It is worth noting that the ID is not a metric (it is asymmetric between $p$ and $q$, similar to the Kullback–Leibler divergence). Moreover, the ID can be decomposed into two terms: $p(s)$ is the selectivity, which describes how likely the state $s$ is to occur under $p$, while $\log_2\left(p(s)/q(s)\right)$ is the informativeness, which describes how much power $p$ has to bring about $s$ relative to $q$.
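As a quick illustration (a minimal sketch of our own, following the ID as reconstructed above, with base-2 logarithms giving units of ibits):

```python
import math

def intrinsic_difference(p, q):
    """ID(p, q): maximum over states of selectivity * informativeness."""
    best = 0.0  # identical distributions yield 0
    for ps, qs in zip(p, q):
        if ps == 0:
            continue  # zero selectivity contributes nothing
        selectivity = ps
        informativeness = math.log2(ps / qs)
        best = max(best, selectivity * informativeness)
    return best

# A fully selective distribution versus a uniform one over two states:
print(intrinsic_difference([1.0, 0.0], [0.5, 0.5]))  # 1.0 ibit
```

Note the asymmetry: the roles of $p$ and $q$ are not interchangeable, so `intrinsic_difference(p, q)` and `intrinsic_difference(q, p)` generally differ.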
2.2. Intrinsic Differentiation
The novel measure we introduce here is intrinsic differentiation, which quantifies the degree to which a system provides itself with a repertoire of potential states. The measure should be similar to entropy, in that it is zero when there is a perfectly deterministic system, and it should increase with decreasing determinism. Moreover, the units of intrinsic differentiation should be the same as those of intrinsic information and integrated information, i.e., ibits [1]. With these considerations in mind, we define the intrinsic effect differentiation for a current state $s$ and an effect state $\bar{s}$ as the ID between a maximally specific (deterministic) probability distribution for $\bar{s}$, and the conditional probability distribution of the effect state given the current state,

$$\mathrm{diff}_e(s, \bar{s}) = \mathrm{ID}\left(m_{\bar{s}},\, p_e(\cdot \mid s)\right) = -\log_2 p_e(\bar{s} \mid s),$$

where

$$m_{\bar{s}}(x) = \begin{cases} 1 & \text{if } x = \bar{s}, \\ 0 & \text{otherwise.} \end{cases}$$

The intrinsic cause differentiation for a current state $s$ and a cause state $\bar{s}$ is defined analogously, as the ID between a deterministic probability distribution for $\bar{s}$ and the conditional distribution of the cause state given the current state,

$$\mathrm{diff}_c(s, \bar{s}) = \mathrm{ID}\left(m_{\bar{s}},\, p_c(\cdot \mid s)\right) = -\log_2 p_c(\bar{s} \mid s),$$

where $p_c(\bar{s} \mid s)$ is the conditional probability of the cause state $\bar{s}$ given the current state $s$, and is computed from the effect TPM using Bayes’ rule (see the selectivity term in Section 2.3 below).
It is important to distinguish intrinsic differentiation from indeterminism per se. Intrinsic differentiation is an operational requirement on the availability of alternative cause–effect states: a system must provide itself with a nontrivial repertoire of alternatives in order to make a difference to itself. Indeterminism (i.e., probabilistic transitions) is one mechanism that can generate such alternatives, but it is not the only one. Differentiation can also arise through the system description and grain; for example, at macro scales, coarse-graining and the degeneracy of micro-state configurations that realize the same macro state can endow a macro unit with a rich repertoire of alternatives even when the underlying micro dynamics are nearly deterministic. We return to this point in Section 4.
It is also worth clarifying the relationship between intrinsic differentiation and entropy. Because intrinsic differentiation for a given cause or effect state takes the form of a negative log probability, $-\log_2 p$, it is closely related to surprisal. However, intrinsic differentiation is not an entropy: entropy is the expected surprisal—an average over a distribution—whereas intrinsic differentiation is a state-conditioned quantity, evaluated for a particular cause or effect state given the system’s current state. Thus, intrinsic differentiation is not a global measure of uncertainty but an intrinsic, state-specific measure of how alternatives are available to the system.
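The distinction can be seen numerically (a small sketch of our own; the conditional distribution is an arbitrary example):

```python
import math

# Hypothetical conditional effect distribution given the current state.
p_effect = {"00": 0.7, "01": 0.2, "10": 0.1}

# Intrinsic differentiation for a particular effect state: its surprisal.
diff_for_state = {s: -math.log2(p) for s, p in p_effect.items()}

# Entropy: the *expected* surprisal, averaged over the whole distribution.
entropy = sum(p * -math.log2(p) for p in p_effect.values())

print(diff_for_state["00"])  # ~0.515 ibits for the most likely effect state
print(entropy)               # ~1.157 ibits, a single global number
```

Each effect state has its own surprisal, while entropy compresses the whole distribution into one observer-level summary; the measures defined here always work with the former, state-specific quantity.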
2.3. Intrinsic Information
Earlier formulations quantified intrinsic information only by how much the current state specified a cause–effect state, ignoring whether the system provided itself with a repertoire of alternative states [1]. Perfectly deterministic systems could therefore achieve high intrinsic information despite having no intrinsically defined alternatives. The revised definition expands intrinsic information into two complementary components: intrinsic differentiation ($\mathrm{diff}$), which quantifies how a system intrinsically defines its own alternatives, and intrinsic specification ($\mathrm{spec}$), which quantifies how it intrinsically specifies one of those alternatives.
Noting that the intrinsic information defined in [1] only captures the latter of the two key aspects of intrinsic cause–effect power, here, that quantity is renamed as the intrinsic specification. For a system $S$ in state $s$, the intrinsic specification of $s$ about an effect state $\bar{s}$ is defined as the ID between the conditional distribution of the effect state given the current state ($p_e(\bar{s} \mid s)$) and the unconditional distribution of the effect state $p_e(\bar{s})$,

$$\mathrm{spec}_e(s, \bar{s}) = p_e(\bar{s} \mid s)\, \log_2\!\left(\frac{p_e(\bar{s} \mid s)}{p_e(\bar{s})}\right),$$

where

$$p_e(\bar{s}) = \frac{1}{|\Omega_S|} \sum_{s' \in \Omega_S} p_e(\bar{s} \mid s').$$

The intrinsic specification of $s$ about a cause state $\bar{s}$ is defined analogously, as a product of a selectivity term and an informativeness term,

$$\mathrm{spec}_c(s, \bar{s}) = p_c(\bar{s} \mid s)\, \log_2\!\left(\frac{p_e(s \mid \bar{s})}{p_e(s)}\right),$$

where

$$p_e(s) = \frac{1}{|\Omega_S|} \sum_{s' \in \Omega_S} p_e(s \mid s')$$

and

$$p_c(\bar{s} \mid s) = \frac{p_e(s \mid \bar{s})}{\sum_{s'' \in \Omega_S} p_e(s \mid s'')}.$$

Here, the $\log_2\left(p_e(s \mid \bar{s})/p_e(s)\right)$ term comes directly from the effect TPM; it is based on the conditional probability distribution of the current state of $S$ given a cause state $\bar{s}$, and $p_e(s)$ is the unconditional probability of the current state $s$. The additional term $p_c(\bar{s} \mid s)$ plays the role of selectivity, quantifying the likelihood that the current state was preceded by the cause state.
Intrinsic specification quantifies how a system specifies its cause and effect. By the information postulate, the cause and effect should be specific. To specify a state, we appeal to the principle of maximal existence, which states that when it comes to a requirement for existence, what exists is what exists the most [1]. In other words, the system specifies the cause and effect states that maximize its specific cause–effect power,

$$\bar{s}_{c}^{*} = \operatorname*{arg\,max}_{\bar{s}}\ \mathrm{spec}_c(s, \bar{s}), \qquad \bar{s}_{e}^{*} = \operatorname*{arg\,max}_{\bar{s}}\ \mathrm{spec}_e(s, \bar{s}).$$
Intrinsic differentiation and intrinsic specification capture the two requirements for cause–effect power that is intrinsic and specific: how a system intrinsically provides its own alternatives ($\mathrm{diff}$), and how a system specifies one of those alternatives ($\mathrm{spec}$). In IIT, the complementary principle to the principle of maximal existence is the principle of minimal existence, which states that when it comes to a requirement for existence, nothing exists more than the least it exists [1]. Accordingly, we define the intrinsic cause and effect information as the minimum between intrinsic differentiation and intrinsic specification (Figure 1),

$$ii_{c/e}(s) = \min\!\left(\mathrm{diff}_{c/e}(s, \bar{s}_{c/e}^{*}),\ \mathrm{spec}_{c/e}(s, \bar{s}_{c/e}^{*})\right).$$
Likewise, both intrinsic cause and effect information are requirements for existence, and thus we define the overall intrinsic information of the system as the minimum between cause and effect,

$$ii(s) = \min\!\left(ii_c(s),\ ii_e(s)\right).$$
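Putting the pieces together for a single binary unit (a sketch in our notation above; the 0.9-COPY transition probabilities are an arbitrary choice for illustration):

```python
import math

# Effect side of a noisy COPY unit in state s = 1: p_e(next = current) = 0.9.
p = 0.9
p_e_given_s = {0: 1 - p, 1: p}   # p_e(effect state | current state = 1)
p_e_uncond = {0: 0.5, 1: 0.5}    # uniform average over current states

# Specification for each candidate effect state (selectivity * informativeness).
spec = {sb: p_e_given_s[sb] * math.log2(p_e_given_s[sb] / p_e_uncond[sb])
        for sb in (0, 1)}
s_star = max(spec, key=spec.get)  # specified effect state

diff_e = -math.log2(p_e_given_s[s_star])  # differentiation, ~0.152 ibits
spec_e = spec[s_star]                     # specification, ~0.763 ibits
ii_e = min(diff_e, spec_e)                # intrinsic effect information

print(s_star, round(diff_e, 3), round(spec_e, 3), round(ii_e, 3))
```

For this nearly deterministic unit, differentiation is the limiting factor: the unit specifies its effect strongly but provides itself with only a meager repertoire of alternatives.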
2.4. Integrated Information
A system that provides its own repertoire of potential states—and specifies one of those states—has intrinsic, specific, cause–effect power. Furthermore, if the system specifies its state as a unified whole, then it has intrinsic, specific, and irreducible cause–effect power. Integrated information quantifies how a system specifies its state as a whole, relative to how it specifies its state when cut into independent parts. The following is the same as the IIT 4.0 definition of $\varphi_s$ from [1], up until the final minimization step, where intrinsic differentiation is incorporated into the measure.
For a single-unit system (a ‘monad’), the system is, by definition, a whole that is irreducible to parts. In this case, all the intrinsic information specified by the system is considered to be integrated information, and

$$\varphi_s(s) = ii(s).$$
For systems with more than one unit, there are multiple ways to partition a system. For IIT, a directed partition is defined as a partition of the system such that each part has either its inputs cut, its outputs cut, or both. A directed partition has the form

$$\theta = \left\{ (S^{(1)}, \delta^{(1)}), \dots, (S^{(k)}, \delta^{(k)}) \right\},$$

where $\{S^{(1)}, \dots, S^{(k)}\}$ is a partition of $S$, each $\delta^{(i)}$ indicates whether the part’s inputs, outputs, or both are cut, and $\Theta(S)$ is the set of all directed partitions. For a given partition $\theta$ and a part $S^{(i)}$, we can define the set of inputs to $S^{(i)}$ that are cut by $\theta$, and the complementary set of inputs left intact.
For a given partition $\theta$, we can compute a partitioned cause and effect TPM that describes the transition probabilities of the system after it has been partitioned, and cut connections are replaced with a uniform repertoire over alternative states,

$$T_S^{\theta}(\bar{s}, s) = \prod_{S_i \in S} p^{\theta}(\bar{s}_i \mid s),$$

where $p^{\theta}(\bar{s}_i \mid s)$ is the partitioned distribution of unit $S_i$, obtained by causally marginalizing the unit’s cut inputs with a uniform distribution over their states.
The integrated effect information of a system over a partition $\theta$ is the ID between the intact and partitioned repertoires, but evaluated specifically for the effect state identified by intrinsic specification ($\bar{s}_e^{*}$):

$$\varphi_e(s, \theta) = p_e(\bar{s}_e^{*} \mid s)\, \log_2\!\left(\frac{p_e(\bar{s}_e^{*} \mid s)}{p_e^{\theta}(\bar{s}_e^{*} \mid s)}\right).$$
The integrated cause information over a partition $\theta$ is defined analogously, but again, using $p_c(\bar{s}_c^{*} \mid s)$ for the selectivity term:

$$\varphi_c(s, \theta) = p_c(\bar{s}_c^{*} \mid s)\, \log_2\!\left(\frac{p_e(s \mid \bar{s}_c^{*})}{p_e^{\theta}(s \mid \bar{s}_c^{*})}\right).$$
However, there are many ways to partition a system. Again, following the principle of minimal existence, we define the integrated information of the system as the integrated information over the system’s minimum partition. The minimum partition is that which minimizes the integrated information after normalizing by the number of possible edges in the causal graph that span the parts (the maximum possible integrated information for a partition),

$$\theta^{*} = \operatorname*{arg\,min}_{\theta \in \Theta(S)} \frac{\varphi(s, \theta)}{N_{\theta}},$$

where $N_{\theta}$ is the number of possible edges spanning the parts of $\theta$, and then

$$\varphi_{c/e}(s) = \varphi_{c/e}(s, \theta^{*}).$$
Moreover, both integrated cause information and integrated effect information are conditions for existence, as well as intrinsic differentiation and intrinsic specification. Again, by the principle of minimal existence, the integrated information of the system is the minimum among these quantities,

$$\varphi_s(s) = \min\!\left(ii_c(s),\ ii_e(s),\ \varphi_c(s),\ \varphi_e(s)\right).$$
Altogether, $\varphi_s$ quantifies the intrinsic, specific, and irreducible cause–effect power of the system $S$ in state $s$. It accounts for three requirements of existence: that a system provides itself with a repertoire of alternative states (intrinsic differentiation), that it specifies one of its potential alternatives (intrinsic information), and that it does so as a whole, irreducible to independent parts (integrated information).
3. Worked Examples
In Section 2, we formalized the idea that intrinsic cause–effect power requires a system to provide itself with a repertoire of potential states. Intrinsic differentiation was introduced to quantify the extent to which a system provides itself with a repertoire of alternative states. The intrinsic differentiation measure is folded into the existing mathematical framework of IIT to define an updated version of system integrated information ($\varphi_s$) that accounts for a system’s ability to provide itself with a repertoire of possible states.
In what follows, we present examples exploring how the updated account of system integrated information interacts with previously explored properties of the mathematical framework. Specifically, we consider (i) the $\varphi_s$ value for a single-unit system (a ‘monad’); (ii) how the update impacts the search for subsystems that maximize $\varphi_s$ (complexes); and (iii) how the update influences whether cause–effect power peaks at macro grains. Calculations were performed with the PyPhi toolbox for IIT [10].
3.1. Example 1: Monads
We use the term ‘monad’ in the minimal sense of a single, indivisible unit. This echoes Leibniz’s monadology [11], where monads are simple, partless entities whose properties are defined intrinsically (i.e., not by external relations), though our usage is purely operational within the IIT framework. Monads, by definition, are integrated wholes that cannot be cut into parts. As such, all intrinsic information specified by a monad is considered to be integrated information. The intrinsic specification of a monad is maximized when the monad is deterministic (the past and future states of the monad are fully determined by the current state of the monad); however, in that case the intrinsic differentiation is zero. By contrast, the intrinsic differentiation of the monad is maximized when, given the current state of the monad, the potential past and future states are equally likely, but in this case the intrinsic specification is zero. Since intrinsic information (and thus also the integrated information of a monad) is the minimum between intrinsic differentiation and intrinsic specification, it follows that the $\varphi_s$ of the monad will be maximized for some intermediate level of determinism.
Consider the monad shown in Figure 2A with the corresponding transition probability matrix shown in Figure 2B. The function of the monad is an imperfect COPY gate: the monad remains in its current state with probability $p$, and switches states with probability $1 - p$, for some $p \in (0.5, 1]$ (the symmetric case $p < 0.5$ corresponds to an imperfect NOT gate). For this simple system, the cause and effect results are identical. For the system in the ON ($s = 1$) state (though the results are identical for the OFF ($s = 0$) state), the intrinsic differentiation of the system is

$$\mathrm{diff}_{c/e}(s, \bar{s}_{c/e}^{*}) = -\log_2 p,$$

and the intrinsic specification is

$$\mathrm{spec}_{c/e}(s, \bar{s}_{c/e}^{*}) = p \log_2(2p).$$

The specified cause and effect states for the system are $\bar{s}_{c/e}^{*} = 1$, since $p > 1 - p$. The resulting intrinsic information, and thus integrated information, is

$$\varphi_s(s) = \min\!\left(p \log_2(2p),\ -\log_2 p\right).$$

The first term, $p \log_2(2p)$, is 0 at $p = 0.5$ and increasing on $(0.5, 1]$, while the second term, $-\log_2 p$, is 1 at $p = 0.5$ and decreasing on $(0.5, 1]$. Thus, the maximum value of $\varphi_s$ occurs at the intersection of the two curves. Numerically solving for their intersection shows that the maximum value of $\varphi_s$ occurs at $p \approx 0.744$ (Figure 2C).
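The intersection can be found in a few lines of Python (a sketch of our own using only the standard library):

```python
import math

def diff(p):   # intrinsic differentiation of the imperfect COPY monad
    return -math.log2(p)

def spec(p):   # intrinsic specification of the imperfect COPY monad
    return p * math.log2(2 * p)

# Bisection on (0.5, 1]: spec - diff is negative near 0.5, positive near 1.
lo, hi = 0.5 + 1e-9, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if spec(mid) - diff(mid) < 0:
        lo = mid
    else:
        hi = mid

p_star = (lo + hi) / 2
print(round(p_star, 3))                            # ~0.744
print(round(min(diff(p_star), spec(p_star)), 3))   # peak value, ~0.427 ibits
```

At the crossing point, differentiation and specification are equal, so the minimum of the two (and hence the monad’s integrated information) is maximized there.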
3.2. Example 2: Complexes
A system with $\varphi_s$ greater than that of all overlapping systems is called a complex, and according to IIT, such a system is a physical substrate of consciousness. It is therefore important to examine how the dual requirements for intrinsic differentiation and intrinsic specification affect the growth of $\varphi_s$ as system size increases. The previous example illustrated how the introduction of intrinsic differentiation results in a tension between indeterminism and determinism. The goal of this example is to understand how, if at all, this tension influences the potential for systems with a large number of units to be a maximum of $\varphi_s$.
Consider a system of $n$ units $S$ in a state $s$ such that its future state will be $s$ with probability $p$ and the remaining probability mass is spread uniformly across states, i.e., for all $\bar{s} \neq s$ we have

$$p_e(\bar{s} \mid s) = \frac{1 - p}{2^n - 1}.$$

It follows that the unconstrained effect repertoire is uniform:

$$p_e(\bar{s}) = 2^{-n}.$$

For such a system, the specified effect state is $\bar{s}_e^{*} = s$, and the corresponding intrinsic differentiation and specification are

$$\mathrm{diff}_e(s, \bar{s}_e^{*}) = -\log_2 p$$

and

$$\mathrm{spec}_e(s, \bar{s}_e^{*}) = p \log_2\!\left(p\, 2^n\right) = p\,(n + \log_2 p).$$
The value of $p$ that maximizes the intrinsic effect information ($ii_e$, the intersection of the differentiation and specification expressions above) decreases as the system size increases (Figure 3A), resulting in increased intrinsic differentiation. Additionally, the log ratio between the specified probability $p$ and the unconditional probability $2^{-n}$ increases faster than $p$ decreases (Figure 3B), resulting in increased intrinsic specification. Finally, the ratio between $p$ and the probability of any alternative state is increasing with system size (Figure 3C), despite the decrease in $p$ in absolute terms. This indicates that, for this class of systems, intrinsic information is maximized in the large-$n$ limit when the system specifies one effect state with probability $p$ exceeding that of all alternatives.
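The scaling argument can be checked numerically (a sketch of our own, using the closed-form differentiation and specification derived above):

```python
import math

def p_star(n, iters=80):
    """Value of p where differentiation (-log2 p) equals specification
    (p * (n + log2 p)), found by bisection on (2**-n, 1)."""
    lo, hi = 2.0 ** -n + 1e-12, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        gap = mid * (n + math.log2(mid)) + math.log2(mid)  # spec - diff
        if gap < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (1, 2, 5, 10):
    p = p_star(n)
    print(n, round(p, 3), round(-math.log2(p), 3))
# p* decreases with system size, while the shared value of differentiation
# and specification at the intersection (the intrinsic effect information)
# increases.
```

For example, at $n = 1$ the crossing sits near $p \approx 0.744$, while at $n = 10$ it sits at $p = 0.25$, where both quantities equal 2 ibits.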
To further explore the above theoretical argument, we consider an example system of six units (Figure 3D) that was originally presented in [1] (Figure 6D in the referenced paper). The example includes a determinism (temperature) parameter $K$, which results in the full 6-unit system being identified as the complex. In the current work, we analyzed this system using the updated account of $\varphi_s$ and examined the behavior of the system as $K$ is varied. The size of the largest complex varies with $K$, with the full 6-unit system being a complex over one range of $K$ and the system breaking down into two-unit complexes outside that range. It is worth noting that this is not due to the introduction of intrinsic differentiation, as the intrinsic differentiation only affects $\varphi_s$ when it is the smallest of the quantities entering the minimum (Figure 3F).
3.3. Example 3: Intrinsic Units
The final example concerns IIT’s requirement that a complex is the set of units that maximizes
across all subsets, and across all grains. The framework for assessing cause–effect power across grains is described in [
6]. A key aspect of the framework is that potential macro units must satisfy the postulates by being ‘maximally irreducible within’, which essentially means that when treated as a subsystem they have greater
than any potential subsystems that can be defined from within (but not necessarily any supersets or partially overlapping sets). In this example, we revisit the question of whether cause–effect power can peak at macro grains with the updated account of
, and whether intrinsic differentiation plays a role in determining if a unit satisfies the maximally irreducible within condition, or if a system of macro units has greater
than the corresponding micro units.
For this example, we recreate a minimal example from [6] (Figure 4 in the cited paper). The example starts with a two-unit system of micro units $A$ and $B$ that each approximate the function of an imperfect AND gate (Figure 4A). When both $A$ and $B$ are in the OFF state, they will be ON in the future state with probability $p$. When $A$ is ON but $B$ is OFF, the probability that $A$’s future state is ON remains $p$, but the probability that $B$’s future state is ON is marginally increased (and vice versa when $A$ is OFF but $B$ is ON). When both $A$ and $B$ are ON, their future states will be ON with a still higher probability.
In [6], the example is presented using a single fixed value of $p$. In this case, the candidate macro unit (Figure 4B) satisfies the requirement of maximally irreducible within. When considered as a macro system, it has greater cause–effect power than the corresponding micro system $AB$. In the current work, we reanalyze the system with the inclusion of intrinsic differentiation and compute $\varphi_s$ across a range of values of $p$. The potential macro unit satisfies the maximally irreducible within criteria for all values of $p$ considered, and, moreover, the macro monad has greater $\varphi_s$ than the corresponding micro system for a subset of those values (Figure 4C).
Thus, with the introduction of intrinsic differentiation, it is still possible to define intrinsic macro units, and those units may have greater cause–effect power than the corresponding micro systems. However, the outcome depends on the level of determinism in the system, and it may be the case that the intrinsic differentiation at the macro level prevents the macro system from outperforming the micro system.
4. Discussion
In this work, we refined the operational account of intrinsic cause–effect power within IIT by introducing the complementary concepts of intrinsic differentiation and intrinsic specification. We argued that a system must both provide itself with a repertoire of alternatives and specify one among them in order to have intrinsic existence. Building on these requirements, we defined intrinsic information as the minimum of differentiation and specification, and incorporated this measure into the calculation of $\varphi_s$. Through worked examples ranging from single-unit systems (monads) to larger complexes and macro-level units, we illustrated how the revised framework clarifies the role of indeterminism in IIT, and its impact on identifying complexes. Together, these results present a more complete account of the conditions under which systems can be said to possess intrinsic cause–effect power.
The central motivation for this refinement is conceptual: to ensure that the mathematical formulation of intrinsic cause–effect power aligns with IIT’s postulates. Among these, the postulate of intrinsicality is primary: existence (intrinsic cause–effect power) must be defined from the perspective of the system itself, not relative to an outside observer. By explicitly requiring both differentiation (the system provides itself with a repertoire of alternative states) and specification (the system specifies one among those alternatives), the revised framework secures a closer link between the formal measures and the axioms they are intended to capture. In this way, intrinsic information captures precisely the two requirements that a system must satisfy to exist intrinsically.
The present work builds on a broader program to formalize measures of causation. Effective information (EI) [12] quantifies a system’s causal structure as an aggregate measure averaged over all states; the actual causation framework [13] addresses a complementary question: what caused a specific occurrence? Both share the interventionist semantics employed here but differ in aim: EI characterizes overall causal structure, actual causation traces specific transitions, and the present measures assess whether a system has intrinsic cause–effect power, the requirement for intrinsic existence.
A key consequence of introducing intrinsic differentiation is that it renders explicit a tradeoff between determinism and indeterminism. Intrinsic differentiation essentially captures indeterminism in the system, though without the standard interpretation: rather than something extra (‘noise’) added to the system, the indeterminism is intrinsic to the system and a requirement for its existence. Purely deterministic systems provide no genuine alternatives, and thus their intrinsic differentiation is zero, while purely random systems specify no state, leaving intrinsic specification at zero. Only systems that balance these two extremes can have nontrivial intrinsic information. This principle was clearly illustrated in the monad example, where integrated information peaked at an intermediate value of $p$, and it carried through to larger systems: as system size increased, differentiation naturally grew but had to be balanced by sufficient specification. Similarly, in the analysis of macro systems, the extent to which coarse-grained units outperformed their micro constituents depended on whether an appropriate balance between determinism and indeterminism was maintained.
The requirement of intrinsic differentiation holds regardless of the spatiotemporal grain of the system. Quantum mechanics provides a well-established paradigm for micro-grain differentiation, most clearly in the context of collapse models that posit a wave function supporting potential alternative states [14]. The interpretation of these models, and the precise ontological status of quantum indeterminism, remains subject to debate, with some accounts aligned with the current work (e.g., promoting indeterminism as an intrinsic feature of physical reality [15]).
At the macro grain, differentiation does not simply disappear once it is present at the micro grain, but is instead reshaped by the process of mapping from micro to macro. Some of this differentiation is a direct extension of the differentiation of the underlying micro grain, inherited when micro states are grouped into a macro state. Yet, because cause–effect power is assessed from the intrinsic perspective of the macro grain, additional sources of differentiation arise. One is the percolation of background conditions across macro updates, where fluctuations external to the macro unit mediate cause–effect power. Another stems from uncertainty about the precise initial configuration and dynamic evolution of micro units constituting a macro unit, since many micro patterns may correspond to the same macro outcome. Together, these factors mean that macro-level systems often display more intrinsic differentiation than their micro counterparts.
The requirement of some degree of differentiation (i.e., indeterminism) also resonates with longstanding ideas about criticality and metastability in brain dynamics [16]. Neural systems operate near the edge of chaos, where activity is neither frozen into rigid determinism nor dissolved into unstructured randomness [17]. The present framework emphasizes that differentiation is not a nuisance to be minimized, but a necessary ingredient for intrinsic existence. A balance between differentiation and specification (i.e., indeterminism and determinism) provides the conditions under which neural substrates can maximize their intrinsic cause–effect power, revealing a further alignment of the principles of IIT with accounts of criticality in complex biological systems [18]. The requirement for a repertoire of alternatives also resonates with the notion of empowerment [19], which quantifies the channel capacity between an agent’s actions and its future sensor states. However, empowerment is an extrinsic, observer-defined quantity, whereas intrinsic differentiation is assessed from the system’s own perspective; the two thus capture related but distinct aspects of a system’s causal structure (for a contrast between intrinsic and extrinsic notions of information, see [20]). More broadly, comparison with other unifying frameworks, such as the free energy principle and active inference, remains an important direction for future work.
The balance between differentiation and specification also relates to agency and control: intrinsic differentiation corresponds to the availability of alternatives, while intrinsic specification corresponds to the ability to select a particular outcome. In IIT’s intrinsic-powers view, this is the relevant sense in which ‘true alternatives’ and ‘true decisions’, and thus free will in an intrinsic, operational sense, can be understood in terms of the causal power of an entity [21].