Article

Intrinsic Cause–Effect Power: The Tradeoff Between Differentiation and Specification

by William G. P. Mayner 1,*,†, William Marshall 2,*,† and Giulio Tononi 1,*

1 Department of Psychiatry, University of Wisconsin–Madison, Madison, WI 53719, USA
2 Department of Mathematics & Statistics, Brock University, St. Catharines, ON L2S 3A1, Canada
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2026, 28(4), 410; https://doi.org/10.3390/e28040410
Submission received: 28 December 2025 / Revised: 24 March 2026 / Accepted: 2 April 2026 / Published: 4 April 2026
(This article belongs to the Collection Advances in Integrated Information Theory)

Abstract

Integrated information theory (IIT) starts from the existence of consciousness and characterizes its essential properties: every experience is intrinsic, specific, unitary, definite, and structured. IIT then formulates existence and its essential properties operationally in terms of cause–effect power of a substrate of units. Here, we address IIT’s operational requirements for existence by considering that, to have cause–effect power, to have it intrinsically, and to have it specifically, substrate units in their actual state must both (i) ensure the intrinsic availability of a repertoire of cause–effect states, and (ii) increase the probability of a specific cause–effect state. We showed previously that requirement (ii) can be assessed by the intrinsic difference of a state’s probability from maximal differentiation. Here, we show that requirement (i) can be assessed by the intrinsic difference from maximal specification. These points and their consequences for integrated information are illustrated using simple systems of micro units. When applied to macro units and systems of macro units such as neural systems, a tradeoff between differentiation and specification is a necessary condition for intrinsic existence—and therefore, according to IIT, for consciousness.

1. Introduction

Integrated information theory (IIT, [1]) attempts to explain consciousness by characterizing its essential phenomenal properties and then postulating that these phenomenal properties must be reflected operationally in properties of its physical substrate. IIT starts by noting that experience exists, and that every experience is intrinsic, specific, unitary, definite, and structured. IIT then translates these phenomenological axioms into physical postulates, where physical is understood operationally as taking and making a difference—i.e., as cause–effect power. Accordingly, IIT requires that a substrate exists (has cause–effect power) and that its cause–effect power is intrinsic, specific, irreducible, definite, and structured.
In IIT, the intrinsic, specific, unitary, and definite cause–effect power of a system is quantified by integrated information [1,2]. The definition of integrated information has been refined over time in an attempt to strengthen the mapping from essential phenomenal properties (axioms), to corresponding physical properties (postulates), and then to a mathematical framework for assessing the degree to which systems satisfy them [1,3,4,5,6]. For an overview of IIT's development and current formalism, see the IIT Wiki [7]. Prior work introduced the requirement that units have a repertoire of alternative states so that they can take and make a difference [4,5]. The primary goal of the current work is to make explicit in the measure of system integrated information ($\varphi_s$) the requirement that a system must provide itself with a repertoire of alternative states.
For a system to have intrinsic cause–effect power, it must satisfy two complementary requirements. First, it must provide itself with a repertoire of alternative cause–effect states. Second, it must specify one of the potential cause–effect states by increasing the probability of one particular cause–effect state relative to the alternatives. In Section 2, we operationalize these requirements through quantities grounded in IIT's intrinsic difference measure (ID) [8]. We introduce a measure of intrinsic differentiation, assessed as the difference between a system's conditional distribution and a maximally specific distribution, which quantifies the degree to which alternative states are intrinsically available. Intrinsic specification is assessed as the difference between the conditional distribution and maximal differentiation, quantifying the degree to which the system singles out a particular state [1]. These quantities are incorporated into the mathematical framework, leading to an updated account of integrated information, $\varphi_s$. Altogether, this framework clarifies the operational requirements for existence within IIT: a system with $\varphi_s > 0$ must both provide itself with a repertoire of alternatives and specify one of them, and it must do so as a unified whole. These dual requirements sharpen the definitions of intrinsic information and integrated information, with important implications for identifying and characterizing substrates of consciousness.
In Section 3, we illustrate the behavior of the refined measures in simple examples, beginning with the case of an isolated binary unit. We then extend the analysis to small systems of micro units and finally to macro units that represent coarse-grained substrates such as neural assemblies. The examples show how intrinsic differentiation grows with system size, how it interacts with intrinsic specification, and how their tradeoff provides a principled criterion for measuring intrinsic existence.
Finally, in Section 4, we discuss implications of this updated measure, especially as it relates to the fundamental role of intrinsic differentiation, which can be thought of as intrinsic ‘indeterminism’. We consider sources of indeterminism at the macro level—including fundamental stochasticity at the micro level, mappings from micro grains to macro grains, and fluctuations in background conditions impinging on the system—that may contribute to a system’s intrinsic differentiation, and thus to its intrinsic cause–effect power.

2. Theory

A physical system can possess intrinsic cause–effect power only to the extent that it can provide itself with a repertoire of potential states. To illustrate this point, consider the case of a single-unit system whose state transitions implement a deterministic COPY logic (i.e., if the current state is $s = 1$, both the past and future states are also $s = 1$ with probability 1). From the extrinsic perspective of an experimenter, we can set the system in state $s = 1$ and observe its behavior, then set the system into state $s = 0$ and observe its behavior again, repeat this many times, and finally conclude that it implements COPY logic. However, for the system itself, left to its own devices, there is only $s = 1$, with no possible alternatives. With no alternatives, the system cannot 'make a difference' to itself, because from its intrinsic perspective there is only the one option: there is no difference to be made.
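To make this concrete, here is a minimal numerical sketch (variable names are ours, and the surprisal anticipates the differentiation measure defined in Section 2.2): conditioned on its current state, a deterministic COPY unit makes exactly one state available to itself, so the surprisal of its only cause–effect state is zero.

```python
import math

# Effect repertoire of a single-unit deterministic COPY in state s = 1:
# the future state is 1 with probability 1.
p_effect = {0: 0.0, 1: 1.0}

# With only one available alternative, the surprisal of the sole
# cause-effect state is log2(1/1) = 0: no difference can be made.
repertoire_size = sum(1 for prob in p_effect.values() if prob > 0)
differentiation = math.log2(1.0 / p_effect[1])

print(repertoire_size)   # 1: a single available state
print(differentiation)   # 0.0: no intrinsic differentiation
```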
This section formalizes the above intuition by (i) revising the notion of intrinsic information from [1] so that it now considers the extent to which a system provides itself with a repertoire of potential states, and (ii) carrying this revision through to the definition of system integrated information. The following uses notation and conventions from [1], and further details about the formalism, including aspects not revised in the current work, can be found there.

2.1. Mathematical Preliminaries

Throughout, we assume a discrete-time finite-state dynamical system governed by a transition probability matrix (TPM) and adopt the causal-intervention semantics of the $\mathrm{do}(\cdot)$ operator [9]. Let $U = \{U_1, \ldots, U_n\}$ be a set of $n$ binary units whose current state is $u \in \Omega_U$ with $\Omega_U = \{0, 1\}^n$. The TPM for $U$ is
$$T_U \equiv p(\bar{u} \mid u) = \Pr\big(U = \bar{u} \mid \mathrm{do}(U = u)\big),$$
where $u, \bar{u} \in \Omega_U$ are any two system states.
Generally, we are interested in subsystems $S \subseteq U$ in a current state $s \subseteq u$ with $s \in \Omega_S$. Set operations on states, such as $s \subseteq u$, are imprecise expressions, as $s$ and $u$ are not technically sets, and this should instead be written as $\{S = s\} \subseteq \{U = u\}$. However, hereafter we abuse notation and use the former expression, as it simplifies the exposition.
The complementary set $W = U \setminus S$ in state $w = u \setminus s$ is considered the background conditions for the purpose of evaluating the intrinsic cause–effect power of $S$. Before assessing the intrinsic cause–effect power of $S$, the background conditions are accounted for by causal marginalization. At the micro grain, this process involves causally marginalizing the background units, conditional on the current state of $U$ [1]. The causal marginalization process in IIT differs slightly from the usual marginalization in probability theory: the marginalization is performed for each unit separately, and the units are then recombined using a product, to ensure that the resulting system TPM retains the conditional independence property. Letting $q_{c/e}(\bar{w} \mid u)$ be the conditional distribution of the background state given the current state $u$ (for causes, we are interested in the conditional distribution of the past state of $W$, and for effects the current state of $W$; see [1]), the corresponding TPM is
$$T_{c/e} \equiv p_{c/e}(\bar{s} \mid s) = \prod_{i=1}^{|S|} \sum_{\bar{w}} p(\bar{s}_i \mid s, \bar{w})\, q_{c/e}(\bar{w} \mid u), \qquad s, \bar{s} \in \Omega_S,\; u \in \Omega_U.$$
When the elements of S are macro units (grouped over multiple units and/or multiple updates), Equation (2) is generalized by (1) discounting micro connections extrinsic to S; (2) extending the modified micro TPM to sequences of updates; (3) causally marginalizing the background; and (4) compressing the resulting sequence probabilities into macro-state probabilities (see Section 2.2 of [6]).
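The product recombination that enforces conditional independence can be sketched for the simple case of an empty background ($W = \varnothing$), where Equation (2) reduces to a product of per-unit marginals of the joint transition distribution. The toy TPM below is a hypothetical example of ours, not taken from the paper:

```python
import itertools

states = list(itertools.product([0, 1], repeat=2))

# Hypothetical joint effect TPM p(s_bar | s) for a 2-unit binary system;
# rows index the current state, columns the next state, in the order
# (0,0), (0,1), (1,0), (1,1).
tpm = [
    [0.40, 0.30, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.20, 0.30, 0.40],
    [0.05, 0.15, 0.35, 0.45],
]

def unit_marginal(row, unit, value):
    """p(s_bar_unit = value | s): marginalize the joint row over the other unit."""
    return sum(p for p, sbar in zip(row, states) if sbar[unit] == value)

def product_tpm(tpm):
    """Recombine per-unit marginals with a product (the no-background case of
    Equation (2)), enforcing conditional independence between the units."""
    return [
        [unit_marginal(row, 0, sbar[0]) * unit_marginal(row, 1, sbar[1])
         for sbar in states]
        for row in tpm
    ]

T_e = product_tpm(tpm)
# Each row of the recombined TPM is still a probability distribution.
print(all(abs(sum(row) - 1.0) < 1e-12 for row in T_e))  # True
```

Note that the product step discards any correlations between the units' next states within a row; this is precisely the conditional independence property required of the system TPM.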
The cause and effect TPMs ($T_{c/e}$) form the basis of the intrinsic cause–effect power of $S$. The cause–effect power of the system is quantified using probability distributions extracted from the TPMs, by measuring the difference made to those distributions by some intervention. For example, the difference between intact and partitioned probability distributions is used to quantify integrated information. The intrinsic difference (ID) is a unique measure of the difference between two probability distributions that satisfies a set of properties motivated by the postulates of IIT [8]. Given two probability distributions, $p(s)$ and $q(s)$, the intrinsic difference is
$$\mathrm{ID}(p, q) = \max_{s} \; p(s) \log \frac{p(s)}{q(s)}.$$
It is worth noting that the ID is not a metric (it is asymmetric between $p$ and $q$, similar to the Kullback–Leibler divergence). Moreover, the ID can be decomposed into two terms: $p(s)$ is the selectivity, which describes how likely the state $s$ is to occur under $p$, while $\log\big(p(s)/q(s)\big)$ is the informativeness, which describes how much power $p$ has to bring about $s$ relative to $q$.
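A minimal implementation of the ID (using base-2 logarithms, so that values are in ibits) illustrates both the asymmetry and the selectivity/informativeness decomposition; the example distributions are ours:

```python
import math

def intrinsic_difference(p, q):
    """ID(p, q) = max_s p(s) * log2(p(s) / q(s)), in ibits.

    p and q are sequences of probabilities over the same state space;
    the term p(s) is the selectivity, log2(p(s)/q(s)) the informativeness.
    """
    return max(
        ps * math.log2(ps / qs) if ps > 0 else 0.0
        for ps, qs in zip(p, q)
    )

# Asymmetric, like the Kullback-Leibler divergence:
p = [0.9, 0.1]
q = [0.5, 0.5]
print(intrinsic_difference(p, q))  # 0.9 * log2(1.8) ~ 0.763
print(intrinsic_difference(q, p))  # 0.5 * log2(5.0) ~ 1.161
```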

2.2. Intrinsic Differentiation

The novel measure we introduce here is intrinsic differentiation, which quantifies the degree to which a system provides itself with a repertoire of potential states. The measure should behave like entropy, in that it is zero for a perfectly deterministic system and increases with decreasing determinism. Moreover, the units of intrinsic differentiation should be the same as those of intrinsic information and integrated information, i.e., ibits [1]. With these considerations in mind, we define the intrinsic effect differentiation for a current state $s$ and an effect state $s'$ as the ID between a maximally specific (deterministic) probability distribution for $s'$, and the conditional probability distribution of the effect state given the current state,
$$i^{e}_{\text{diff}}(s, s') = \mathrm{ID}\big(p_{s'}(\bar{s}),\, p_e(\bar{s} \mid s)\big) = -\log\big(p_e(s' \mid s)\big),$$
where
$$p_{s'}(\bar{s}) = \begin{cases} 1 & \text{if } \bar{s} = s' \\ 0 & \text{otherwise.} \end{cases}$$
The intrinsic cause differentiation for a current state $s$ and a cause state $s'$ is defined analogously, as the ID between a deterministic probability distribution for $s'$ and the conditional distribution of the cause state given the current state,
$$i^{c}_{\text{diff}}(s, s') = \mathrm{ID}\big(p_{s'}(\bar{s}),\, p_c(\bar{s} \mid s)\big) = -\log\big(p_c(s' \mid s)\big),$$
where $p_c(\bar{s} \mid s)$ is the conditional probability of the cause state $\bar{s}$ given the current state $s$, computed from $p_c(s \mid \bar{s})$ using Bayes' rule (see Equation (11) below).
It is important to distinguish intrinsic differentiation from indeterminism per se. Intrinsic differentiation is an operational requirement on the availability of alternative cause–effect states: a system must provide itself with a nontrivial repertoire of alternatives in order to make a difference to itself. Indeterminism (i.e., probabilistic transitions) is one mechanism that can generate such alternatives, but it is not the only one. Differentiation can also arise through the system description and grain; for example, at macro scales, coarse-graining and the degeneracy of micro-state configurations that realize the same macro state can endow a macro unit with a rich repertoire of alternatives even when the underlying micro dynamics are nearly deterministic. We return to this point in Section 4.
It is also worth clarifying the relationship between intrinsic differentiation and entropy. Because intrinsic differentiation for a given cause or effect state takes the form log p ( · s ) , it is closely related to surprisal. However, intrinsic differentiation is not an entropy: entropy is the expected surprisal—an average over a distribution—whereas intrinsic differentiation is a state-conditioned quantity, evaluated for a particular cause or effect state given the system’s current state. Thus, intrinsic differentiation is not a global measure of uncertainty but an intrinsic, state-specific measure of how alternatives are available to the system.
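Because the ID against a degenerate (maximally specific) distribution collapses to a surprisal, intrinsic differentiation reduces to a single logarithm. A sketch with hypothetical repertoires (helper name is ours):

```python
import math

def intrinsic_differentiation(tpm_row, state_index):
    """i_diff(s, s') = ID(delta_{s'}, p(. | s)) = -log2 p(s' | s), in ibits.

    tpm_row is the conditional distribution p(s_bar | s) over cause or
    effect states; state_index picks out the cause/effect state s'.
    """
    return math.log2(1.0 / tpm_row[state_index])

# Deterministic transition: no intrinsic differentiation.
print(intrinsic_differentiation([1.0, 0.0], 0))  # 0.0
# Maximally indeterministic transition: one full ibit.
print(intrinsic_differentiation([0.5, 0.5], 0))  # 1.0
```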

2.3. Intrinsic Information

Earlier formulations quantified intrinsic information only by how much the current state specified a cause–effect state, ignoring whether the system provided itself with a repertoire of alternative states [1]. Perfectly deterministic systems could therefore achieve high intrinsic information despite having no intrinsically defined alternatives. The revised definition expands intrinsic information into two complementary components: intrinsic differentiation ($i_{\text{diff}}$), which quantifies how a system intrinsically defines its own alternatives, and intrinsic specification ($i_{\text{spec}}$), which quantifies how it intrinsically specifies one of those alternatives.

Since the intrinsic information defined in [1] captures only the latter of the two key aspects of intrinsic cause–effect power, that quantity is here renamed the intrinsic specification. For a system $S$ in state $s \in \Omega_S$, the intrinsic specification of $s$ about an effect state $\bar{s}$ is defined as the ID between the conditional distribution of the effect state given the current state, $p_e(\bar{s} \mid s)$, and the unconditional distribution of the effect state, $p_e(\bar{s})$:
$$i^{e}_{\text{spec}}(s, \bar{s}) = p_e(\bar{s} \mid s) \log \frac{p_e(\bar{s} \mid s)}{p_e(\bar{s})},$$
where
$$p_e(\bar{s}) = \frac{1}{|\Omega_S|} \sum_{\hat{s} \in \Omega_S} p_e(\bar{s} \mid \hat{s}).$$
The intrinsic specification of $s$ about a cause state $\bar{s}$ is defined analogously, as a product of a selectivity term and an informativeness term,
$$i^{c}_{\text{spec}}(s, \bar{s}) = p_c(\bar{s} \mid s) \log \frac{p_c(s \mid \bar{s})}{p_c(s)},$$
where
$$p_c(s) = \frac{1}{|\Omega_S|} \sum_{\hat{s} \in \Omega_S} p_c(s \mid \hat{s})$$
and
$$p_c(\bar{s} \mid s) = \frac{p_c(s \mid \bar{s})}{\sum_{\hat{s} \in \Omega_S} p_c(s \mid \hat{s})}.$$
Here, the $p_c(s \mid \bar{s})$ term comes directly from $T_c$; it is the conditional probability of the current state of $S$ given a cause state $\bar{s}$, and $p_c(s)$ is the unconditional probability of the current state $s$. The additional term $p_c(\bar{s} \mid s)$ plays the role of selectivity, quantifying the likelihood that the current state was preceded by the cause state.
Intrinsic specification quantifies how a system specifies its cause and effect. By the information postulate, the cause and effect should be specific. To specify a state, we appeal to the principle of maximal existence, which states that, when it comes to a requirement for existence, what exists is what exists the most [1]. In other words, the system specifies the cause and effect states that maximize its specific cause–effect power,
$$s'_{c/e} = \operatorname*{argmax}_{\bar{s} \in \Omega_S} \; i^{c/e}_{\text{spec}}(s, \bar{s}), \qquad s \in \Omega_S.$$
Intrinsic differentiation and intrinsic specification capture the two requirements for cause–effect power that is intrinsic and specific: how a system intrinsically provides its own alternatives ($i_{\text{diff}}$), and how a system specifies one of those alternatives ($i_{\text{spec}}$). In IIT, the complementary principle to the principle of maximal existence is the principle of minimal existence, which states that, when it comes to a requirement for existence, nothing exists more than the least it exists [1]. Accordingly, we define the intrinsic cause and effect information as the minimum between intrinsic differentiation and intrinsic specification, evaluated at the specified state (Figure 1),
$$ii_{c/e}(s) = \min\big\{ i^{c/e}_{\text{diff}}(s, s'_{c/e}),\; i^{c/e}_{\text{spec}}(s, s'_{c/e}) \big\}.$$
Likewise, both intrinsic cause and effect information are requirements for existence, and thus we define the overall intrinsic information of the system as the minimum between cause and effect,
$$ii(s) = \min\big\{ ii_c(s),\; ii_e(s) \big\}.$$

2.4. Integrated Information

A system that provides its own repertoire of potential states, and specifies one of those states, has intrinsic, specific cause–effect power. Furthermore, if the system specifies its state as a unified whole, then it has intrinsic, specific, and irreducible cause–effect power. Integrated information quantifies how a system specifies its state as a whole, relative to how it specifies its state when cut into independent parts. The following is the same as the IIT 4.0 definition of $\varphi_s$ from [1] until Equation (23), where intrinsic differentiation is incorporated into the measure.

For a single-unit system (a 'monad'), the system is, by definition, a whole that is irreducible to parts. In this case, all the intrinsic information specified by the system is considered to be integrated information, and
$$\varphi_s(s) = ii(s).$$
For systems with more than one unit, there are multiple ways to partition the system. In IIT, a directed partition is defined as a partition of the system such that each part has either its inputs cut, its outputs cut, or both. A directed partition has the form
$$\theta = \big\{ S^{(1)}_{\delta_1}, \ldots, S^{(k)}_{\delta_k} \big\} \in \Theta(S),$$
where $\{S^{(1)}, \ldots, S^{(k)}\}$ is a partition of $S$, each $\delta_i \in \{\leftarrow, \rightarrow, \leftrightarrow\}$ respectively indicates whether the part's inputs, outputs, or both are cut, and $\Theta(S)$ is the set of all directed partitions. For a given partition $\theta$ and a part $S^{(i)} \in \theta$, we can define the set of inputs $X^{(i)} \subseteq S$ to $S^{(i)}$ that are cut by $\theta$,
$$X^{(i)} = \begin{cases} S \setminus S^{(i)} & \text{if } \delta_i \in \{\leftarrow, \leftrightarrow\}, \\ \bigcup_{j \neq i,\; \delta_j \in \{\rightarrow, \leftrightarrow\}} S^{(j)} & \text{if } \delta_i = \rightarrow, \end{cases}$$
and the complementary set of inputs left intact, $Y^{(i)} = S \setminus X^{(i)}$.

For a given partition $\theta$, we can compute partitioned cause and effect TPMs that describe the transition probabilities of the system after it has been partitioned, with cut connections replaced by a uniform repertoire over alternative states,
$$T^{\theta}_{c/e} \equiv p^{\theta}_{c/e}(\bar{s} \mid s) = \prod_{j=1}^{|S|} p^{\theta}_{c/e}(\bar{s}_j \mid s), \qquad s, \bar{s} \in \Omega_S,$$
where $p^{\theta}_{c/e}(\bar{s}_j \mid s)$ is the partitioned distribution of unit $S_j \in S^{(i)}$,
$$p^{\theta}_{c/e}(\bar{s}_j \mid s) = \frac{1}{|\Omega_{X^{(i)}}|} \sum_{x^{(i)} \in \Omega_{X^{(i)}}} p_{c/e}\big(\bar{s}_j \mid x^{(i)}, y^{(i)}\big).$$
The integrated effect information of a system $S = s$ over a partition $\theta \in \Theta(S)$ is the ID between the intact and partitioned repertoires, evaluated specifically for the effect state identified by intrinsic specification, $s'_e$:
$$\varphi_e(s, \theta) = p_e(s'_e \mid s) \log \frac{p_e(s'_e \mid s)}{p^{\theta}_e(s'_e \mid s)}.$$
The integrated cause information over a partition $\theta \in \Theta(S)$ is defined analogously, but again using $p_c(\bar{s} \mid s)$ for the selectivity term:
$$\varphi_c(s, \theta) = p_c(s'_c \mid s) \log \frac{p_c(s \mid s'_c)}{p^{\theta}_c(s \mid s'_c)}.$$
However, there are many ways to partition a system. Again following the principle of minimal existence, we define the integrated information of the system as the integrated information over the system's minimum partition. The minimum partition is the one that minimizes the integrated information after normalizing by the number of possible edges in the causal graph that span the parts (the maximum possible integrated information for a partition),
$$\theta' = \operatorname*{argmin}_{\theta \in \Theta(S)} \; \frac{\varphi(s, \theta)}{\sum_{i=1}^{k} |S^{(i)}|\,|X^{(i)}|},$$
and then
$$\varphi(s) = \varphi(s, \theta').$$
Moreover, integrated cause information and integrated effect information are both conditions for existence, as are intrinsic differentiation and intrinsic specification. Again, by the principle of minimal existence, the integrated information of the system is the minimum among these quantities,
$$\varphi_s(s) = \min\big\{ \varphi_c(s),\; \varphi_e(s),\; ii(s) \big\}.$$
Altogether, $\varphi_s(s)$ quantifies the intrinsic, specific, and irreducible cause–effect power of the system $S$ in state $s$. It accounts for three requirements of existence: that a system provides itself with a repertoire of alternative states (intrinsic differentiation), that it specifies one of its potential alternatives (intrinsic information), and that it does so as a whole, irreducible to independent parts (integrated information).
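For illustration, the following sketch evaluates integrated effect information over a single, maximally severe cut, in which every unit's inputs are replaced by uniformly distributed states. The toy TPM (two units that each copy the other's previous state with probability $q$) is hypothetical, and the full $\varphi_s$ computation would additionally search over all directed partitions, as implemented in PyPhi [10]:

```python
import itertools
import math

def phi_effect_full_cut(tpm, s_idx, sbar_idx, n_units):
    """phi_e for a cut that severs all inputs to every unit, so each unit's
    partitioned repertoire averages its marginal over uniform inputs.
    (For the toy TPM below, where no unit depends on its own state, this
    coincides with the complete directed partition.)"""
    states = list(itertools.product([0, 1], repeat=n_units))
    n = len(states)
    sbar = states[sbar_idx]
    # Partitioned probability: product over units of input-averaged marginals.
    p_part = 1.0
    for u in range(n_units):
        p_u = sum(
            tpm[i][j]
            for i in range(n)
            for j, st in enumerate(states)
            if st[u] == sbar[u]
        ) / n
        p_part *= p_u
    p_intact = tpm[s_idx][sbar_idx]
    return p_intact * math.log2(p_intact / p_part)

# Hypothetical TPM: each unit copies the other's previous state with
# probability q. States ordered (0,0), (0,1), (1,0), (1,1).
q = 0.9
tpm = [
    [q * q, q * (1 - q), (1 - q) * q, (1 - q) ** 2],
    [(1 - q) * q, (1 - q) ** 2, q * q, q * (1 - q)],
    [q * (1 - q), q * q, (1 - q) ** 2, (1 - q) * q],
    [(1 - q) ** 2, (1 - q) * q, q * (1 - q), q * q],
]
# For s = (1,1), the specified effect state is (1,1) (probability q*q = 0.81).
phi = phi_effect_full_cut(tpm, s_idx=3, sbar_idx=3, n_units=2)
print(round(phi, 3))
```

Cutting the connections reduces the probability of the specified effect state from $0.81$ to $0.25$, and $\varphi_e$ quantifies that loss in ibits.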

3. Worked Examples

In Section 2, we formalized the idea that intrinsic cause–effect power requires a system to provide itself with a repertoire of potential states, and introduced intrinsic differentiation to quantify the extent to which it does so. This measure is folded into the existing mathematical framework of IIT to define an updated version of system integrated information ($\varphi_s$).

In what follows, we present examples exploring how the updated account of system integrated information interacts with previously explored properties of the mathematical framework. Specifically, we consider (i) the $\varphi_s$ value for a single-unit system (a 'monad'); (ii) how the update impacts the search for subsystems that maximize $\varphi_s$ (complexes); and (iii) how the update influences whether cause–effect power peaks at macro grains. Calculations were performed with the PyPhi toolbox for IIT [10].

3.1. Example 1: Monads

We use the term 'monad' in the minimal sense of a single, indivisible unit. This echoes Leibniz's monadology [11], where monads are simple, partless entities whose properties are defined intrinsically (i.e., not by external relations), though our usage is purely operational within the IIT framework. Monads, by definition, are integrated wholes that cannot be cut into parts. As such, all intrinsic information specified by a monad is considered to be integrated information. The intrinsic specification of a monad is maximized when the monad is deterministic (the past and future states of the monad are fully determined by its current state), but in that case the intrinsic differentiation is zero. By contrast, the intrinsic differentiation of the monad is maximized when, given the current state, the potential past and future states are equally likely, but in this case the intrinsic specification is zero. Since intrinsic information (and thus the integrated information of a monad) is the minimum between intrinsic differentiation and intrinsic specification, it follows that the $\varphi_s$ of the monad is maximized at some intermediate level of determinism.
Consider the monad shown in Figure 2A, with the corresponding transition probability matrix shown in Figure 2B. The function of the monad is an imperfect COPY gate: the monad remains in its current state with probability $p$ and switches states with probability $1 - p$, for some $p \in (0.5, 1]$ (the symmetric case $p \in [0, 0.5)$ corresponds to an imperfect NOT gate). For this simple system, the cause and effect results are identical. For the system in the ON ($s = 1$) state (the results are identical for the OFF ($s = 0$) state), the intrinsic differentiation of the system is
$$i^{c/e}_{\text{diff}}(s, \bar{s}) = \begin{cases} -\log(p) & \text{if } \bar{s} = 1 \\ -\log(1 - p) & \text{if } \bar{s} = 0, \end{cases}$$
and the intrinsic specification is
$$i^{c/e}_{\text{spec}}(s, \bar{s}) = \begin{cases} p \log(2p) & \text{if } \bar{s} = 1 \\ (1 - p) \log\big(2(1 - p)\big) & \text{if } \bar{s} = 0. \end{cases}$$
The specified cause and effect states for the system are
$$s'_{c/e} = \operatorname*{argmax}_{\bar{s}} \; i^{c/e}_{\text{spec}}(s, \bar{s}) = 1,$$
since $p > 0.5$. The resulting intrinsic information, and thus integrated information, is
$$\varphi_s(s) = ii(s) = \min\big\{ p \log(2p),\; -\log(p) \big\}.$$
The first term, $p \log(2p)$, is 0 at $p = 0.5$ and increasing on $p \in (0.5, 1]$, while the second term, $-\log(p)$, is 1 at $p = 0.5$ and decreasing on $p \in (0.5, 1]$. Thus, the maximum value of $\varphi_s$ occurs at the intersection of the two curves. Numerically solving for their intersection shows that the maximum value of $\varphi_s(s) = 0.427$ occurs at $p = 0.744$ (Figure 2C).
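The stated optimum can be checked numerically; since both curves are monotone on $(0.5, 1]$, a simple bisection on their crossing point suffices (a sketch of ours, in base-2 logs):

```python
import math

def f(p):
    """Specification minus differentiation for the imperfect COPY monad:
    p*log2(2p) - (-log2(p)). The root is where the two curves cross."""
    return p * math.log2(2 * p) + math.log2(p)

# f is strictly increasing on (0.5, 1], with f -> -1 near 0.5 and f(1) = 1,
# so bisection converges to the unique crossing point.
lo, hi = 0.5 + 1e-9, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

p_star = (lo + hi) / 2
phi_max = -math.log2(p_star)  # at the crossing, both terms are equal
print(round(p_star, 3), round(phi_max, 3))  # 0.744 0.427
```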

3.2. Example 2: Complexes

A system with greater $\varphi_s$ than all overlapping systems is called a complex, and according to IIT, such a system is a physical substrate of consciousness. It is therefore important to examine how the dual requirements of intrinsic differentiation and intrinsic specification affect the growth of $\varphi_s$ as system size increases. The previous example illustrated how the introduction of intrinsic differentiation results in a tension between indeterminism and determinism. The goal of this example is to understand how, if at all, this tension influences the potential for systems with a large number of units to be maxima of $\varphi_s$.
Consider a system $S$ of $n$ units in a state $s$ such that its future state will be $s'$ with probability $p$, with the remaining probability mass spread uniformly across states, i.e., for all $\bar{s} \in \Omega_S$ we have
$$p_e(\bar{s} \mid s) = \begin{cases} p & \text{if } \bar{s} = s' \\ (1 - p)/(2^n - 1) & \text{otherwise.} \end{cases}$$
It follows that the unconstrained effect repertoire is uniform:
$$p_e(\bar{s}) = \frac{1}{2^n}, \qquad \bar{s} \in \Omega_S.$$
For such a system, the specified effect state is $s'_e = s'$, and the corresponding intrinsic differentiation and specification are
$$i^{e}_{\text{diff}}(s, s') = -\log(p)$$
and
$$i^{e}_{\text{spec}}(s, s') = p \log\big(p \, 2^n\big).$$
The value of $p$ that maximizes the intrinsic effect information ($p^*$, the intersection of Equations (31) and (30)) decreases as the system size increases (Figure 3A), resulting in increased intrinsic differentiation. Additionally, the log ratio between $p^*$ and $1/2^n$ increases faster than $p^*$ decreases (Figure 3B), resulting in increased intrinsic specification. Finally, the ratio between $p^*$ and $(1 - p^*)/(2^n - 1)$ increases with system size (Figure 3C), despite the decrease of $p^*$ in absolute terms. This indicates that, for this class of systems, intrinsic information in the large-$n$ limit is maximized when the system specifies one effect state with probability $p^*$ exceeding that of all alternatives.
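The dependence of $p^*$ on system size can be reproduced with the same bisection approach used for the monad (our sketch; the printed values are illustrative):

```python
import math

def p_star(n):
    """Crossing point of i_diff = -log2(p) and i_spec = p*log2(p * 2**n)
    for the n-unit system above, found by bisection on (2**-n, 1],
    where their difference is increasing through zero."""
    def g(p):
        return p * math.log2(p * 2 ** n) + math.log2(p)
    lo, hi = 2.0 ** -n + 1e-12, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (1, 2, 4, 6):
    p = p_star(n)
    # p* decreases with n, while the intrinsic information -log2(p*) grows.
    print(n, round(p, 3), round(-math.log2(p), 3))
```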
To further explore the above theoretical argument, we consider an example system of $n = 6$ units (Figure 3D) that was originally presented in [1] (Figure 6D in the referenced paper). The example includes a determinism (temperature) parameter, $K = 4$, which results in the full 6-unit system being identified as the complex. In the current work, we analyzed this system using the updated account of $\varphi_s$ and examined the behavior of the system as $K$ is varied. The size of the largest complex varies with $K$: the full 6-unit system is a complex for $K \geq 0.775$, and the system breaks down into two-unit complexes for $K < 0.775$. It is worth noting that this is not due to the introduction of intrinsic differentiation, as intrinsic differentiation only affects $\varphi_s$ when $K \geq 2.839$ (Figure 3F).

3.3. Example 3: Intrinsic Units

The final example concerns IIT's requirement that a complex is the set of units that maximizes $\varphi_s$ across all subsets and across all grains. The framework for assessing cause–effect power across grains is described in [6]. A key aspect of the framework is that potential macro units must satisfy the postulates by being 'maximally irreducible within', which means that, when treated as a subsystem, they must have greater $\varphi_s$ than any potential subsystems that can be defined from within (but not necessarily than any supersets or partially overlapping sets). In this example, we revisit the question of whether cause–effect power can peak at macro grains under the updated account of $\varphi_s$, and whether intrinsic differentiation plays a role in determining if a unit satisfies the maximally-irreducible-within condition, or if a system of macro units has greater $\varphi_s$ than the corresponding micro units.
For this example, we recreate a minimal example from [6] (Figure 4 in the cited paper). The example starts with a two-unit system of micro units $S = \{A, B\}$ that each approximate the function of an imperfect AND gate (Figure 4A). When both $A$ and $B$ are in the OFF state, $s = (0, 0)$, they will each be ON in the future state with probability $p$. When $A$ is ON but $B$ is OFF, the probability that $A$'s future state is ON remains $p$, but the probability that $B$'s future state is ON is marginally increased to $p + 0.01$ (and vice versa when $A$ is OFF but $B$ is ON). When both $A$ and $B$ are ON, their future states will be ON with probability $1 - p$.
In [6], the example is presented using $p = 0.05$. In this case, $\alpha = \{A, B\}$ (Figure 4B) satisfies the requirement of being maximally irreducible within. When considered as a macro system $S = \{\alpha\}$, it has greater cause–effect power than the corresponding micro system $S = \{A, B\}$. In the current work, we reanalyze the system with the inclusion of intrinsic differentiation and compute $\varphi_s$ for $p \in (0, 0.5)$. The potential macro unit $\alpha = \{A, B\}$ satisfies the maximally-irreducible-within criterion for all $p \in (0, 0.5)$, and, moreover, the macro monad has greater $\varphi_s$ than the corresponding micro system for $p \in (0.096, 0.5)$ (Figure 4C).
Thus, with the introduction of intrinsic differentiation, it is still possible to define intrinsic macro units, and those units may have greater cause–effect power than the corresponding micro systems. However, the outcome depends on the level of determinism in the system, and it may be the case that the intrinsic differentiation at the macro level prevents the macro system from outperforming the micro system.

4. Discussion

In this work, we refined the operational account of intrinsic cause–effect power within IIT by introducing the complementary concepts of intrinsic differentiation and intrinsic specification. We argued that a system must both provide itself with a repertoire of alternatives and specify one among them in order to have intrinsic existence. Building on these requirements, we defined intrinsic information as the minimum of differentiation and specification, and incorporated this measure into the calculation of $\varphi_s$. Through worked examples ranging from single-unit systems (monads) to larger complexes and macro-level units, we illustrated how the revised framework clarifies the role of indeterminism in IIT and its impact on identifying complexes. Together, these results present a more complete account of the conditions under which systems can be said to possess intrinsic cause–effect power.
The central motivation for this refinement is conceptual: to ensure that the mathematical formulation of intrinsic cause–effect power aligns with IIT’s postulates. Among these, the postulate of intrinsicality is primary: existence (intrinsic cause–effect power) must be defined from the perspective of the system itself, not relative to an outside observer. By explicitly requiring both differentiation (the system provides itself with a repertoire of alternative states) and specification (the system specifies one among those alternatives), the revised framework secures a closer link between the formal measures and the axioms they are intended to capture. In this way, intrinsic information captures precisely the two requirements that a system must satisfy to exist intrinsically.
The present work builds on a broader program to formalize measures of causation. Effective information (EI) [12] quantifies a system’s causal structure as an aggregate measure averaged over all states; the actual causation framework [13] addresses a complementary question: what caused a specific occurrence? Both share the interventionist semantics employed here but differ in aim: EI characterizes overall causal structure, actual causation traces specific transitions, and the present measures assess whether a system has intrinsic cause–effect power, the requirement for intrinsic existence.
A key consequence of introducing intrinsic differentiation is that it renders explicit a tradeoff between determinism and indeterminism. Intrinsic differentiation essentially captures indeterminism in the system, but without the standard interpretation: rather than something extra (‘noise’) added to the system, the indeterminism is intrinsic to the system and a requirement for its existence. Purely deterministic systems provide no genuine alternatives, and thus their intrinsic differentiation is zero, while purely random systems specify no state, leaving intrinsic specification at zero. Only systems that balance these two extremes can have nontrivial intrinsic information. This principle was clearly illustrated in the monad example, where integrated information peaked at an intermediate p, and it carried through to larger systems: as system size increased, differentiation naturally grew but had to be balanced by sufficient specification. Similarly, in the analysis of macro systems, the extent to which coarse-grained units outperformed their micro constituents depended on whether an appropriate balance between determinism and indeterminism was maintained.
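The tradeoff described above can be sketched numerically. The functional forms below are assumptions inferred from the figures, not the paper's formal definitions: we take i_diff(s) = log2(1/p(s)), an intrinsic difference from a maximally specified (deterministic) repertoire, and i_spec(s) = p(s) log2(2^n p(s)), an intrinsic difference from a maximally differentiated (uniform) repertoire over the 2^n states of an n-unit system, with the remaining probability mass spread uniformly over alternatives. Under these assumptions, intrinsic information ii = min(i_diff, i_spec) is maximized where the decreasing i_diff curve crosses the increasing i_spec curve, reproducing the monad optimum of Figure 2 (p ≈ 0.744, ii ≈ 0.427 for n = 1) and the growth of intrinsic information with system size shown in Figure 3.

```python
import math

def i_spec(p, n):
    # Intrinsic specification (assumed form): intrinsic difference of the
    # specified state's probability p from the maximally differentiated
    # (uniform) repertoire over 2**n states: p * log2(p / 2**-n).
    return p * math.log2(p * 2**n)

def i_diff(p, n):
    # Intrinsic differentiation (assumed form): intrinsic difference from
    # a maximally specified (deterministic) repertoire: log2(1 / p).
    return math.log2(1.0 / p)

def intrinsic_info(p, n):
    # Intrinsic information is the minimum of the two requirements.
    return min(i_diff(p, n), i_spec(p, n))

def optimal_p(n, tol=1e-12):
    # ii(p) = min(decreasing i_diff, increasing i_spec) is maximized where
    # the two curves cross; locate the crossing by bisection on (2**-n, 1).
    lo, hi = 2.0**-n, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if i_diff(mid, n) > i_spec(mid, n):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    for n in range(1, 7):
        p_star = optimal_p(n)
        print(n, round(p_star, 3), round(intrinsic_info(p_star, n), 3))
```

With these assumed forms, p* decreases with n while ii(p*) increases, mirroring panels A and C of Figure 3: the specified state becomes less probable in absolute terms, but stands out more against the shrinking probability of each alternative.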
The requirement of intrinsic differentiation holds regardless of the spatiotemporal grain of the system. Quantum mechanics provides a well-established paradigm for micro-grain differentiation, most clearly in the context of collapse models that posit a wave function supporting potential alternative states [14]. The interpretation of these models, and the precise ontological status of quantum indeterminism, remain subject to debate, with some accounts aligned with the current work (e.g., promoting indeterminism as an intrinsic feature of physical reality [15]).
At the macro grain, differentiation does not simply disappear once it is present at the micro grain, but is instead reshaped by the process of mapping from micro to macro. Some of this differentiation is a direct extension of the differentiation of the underlying micro grain, inherited when micro states are grouped into a macro state. Yet, because cause–effect power is assessed from the intrinsic perspective of the macro grain, additional sources of differentiation arise. One is the percolation of background conditions across macro updates, where fluctuations external to the macro unit mediate cause–effect power. Another stems from uncertainty about the precise initial configuration and dynamic evolution of micro units constituting a macro unit, since many micro patterns may correspond to the same macro outcome. Together, these factors mean that macro-level systems often display more intrinsic differentiation than their micro counterparts.
The requirement of some degree of differentiation (i.e., indeterminism) also resonates with longstanding ideas about criticality and metastability in brain dynamics [16]. Neural systems operate near the edge of chaos, where activity is neither frozen into rigid determinism nor dissolved into unstructured randomness [17]. The present framework emphasizes that differentiation is not a nuisance to be minimized, but a necessary ingredient for intrinsic existence. A balance between differentiation and specification (i.e., indeterminism and determinism) provides the conditions under which neural substrates can maximize their intrinsic cause–effect power, revealing a further alignment of the principles of IIT with accounts of criticality in complex biological systems [18]. The requirement for a repertoire of alternatives also resonates with the notion of empowerment [19], which quantifies the channel capacity between an agent’s actions and its future sensor states. However, empowerment is an extrinsic, observer-defined quantity, whereas intrinsic differentiation is assessed from the system’s own perspective; the two thus capture related but distinct aspects of a system’s causal structure (for a contrast between intrinsic and extrinsic notions of information, see [20]). More broadly, comparison with other unifying frameworks, such as the free energy principle and active inference, remains an important direction for future work.
The balance between differentiation and specification also relates to agency and control: intrinsic differentiation corresponds to the availability of alternatives, while intrinsic specification corresponds to the ability to select a particular outcome. In IIT’s intrinsic-powers view, this is the relevant sense in which ‘true alternatives’ and ‘true decisions’, and thus free will in an intrinsic, operational sense, can be understood in terms of the causal power of an entity [21].

Author Contributions

Conceptualization, W.G.P.M., W.M., and G.T.; methodology, W.G.P.M. and W.M.; software, W.G.P.M.; validation, W.G.P.M. and W.M.; formal analysis, W.G.P.M. and W.M.; investigation, W.G.P.M. and W.M.; resources, G.T.; data curation, W.G.P.M.; writing—original draft preparation, W.G.P.M., W.M., and G.T.; writing—review and editing, W.G.P.M., W.M., and G.T.; visualization, W.G.P.M.; supervision, W.M. and G.T.; project administration, W.M. and G.T.; funding acquisition, G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the David P. White Chair in Sleep Medicine at the University of Wisconsin–Madison (G.T.) and by the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2019-05418; W.M.).

Data Availability Statement

Code and data used in this work are available at https://github.com/wmayner/intrinsic-information (accessed on 27 December 2025).

Acknowledgments

We thank Larissa Albantakis and Graham Findlay for their valuable comments on the manuscript.

Conflicts of Interest

G.T. holds an executive position and has a financial interest in Intrinsic Powers, Inc., a company whose purpose is to develop a device that can be used in the clinic to assess the presence and absence of consciousness in patients. This does not pose any conflicts of interest with regard to the work undertaken for this publication. W.G.P.M. and W.M. declare no conflicts of interest. The funders of this work had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Albantakis, L.; Barbosa, L.; Findlay, G.; Grasso, M.; Haun, A.M.; Marshall, W.; Mayner, W.G.; Zaeemzadeh, A.; Boly, M.; Juel, B.E.; et al. Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. PLoS Comput. Biol. 2023, 19, e1011465. [Google Scholar] [CrossRef] [PubMed]
  2. Marshall, W.; Grasso, M.; Mayner, W.G.P.; Zaeemzadeh, A.; Barbosa, L.S.; Chastain, E.; Findlay, G.; Sasai, S.; Albantakis, L.; Tononi, G. System integrated information. Entropy 2023, 25, 334. [Google Scholar] [CrossRef] [PubMed]
  3. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004, 5, 42. [Google Scholar] [CrossRef] [PubMed]
  4. Balduzzi, D.; Tononi, G. Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework. PLoS Comput. Biol. 2008, 4, e1000091. [Google Scholar] [CrossRef] [PubMed]
  5. Oizumi, M.; Albantakis, L.; Tononi, G. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588. [Google Scholar] [CrossRef] [PubMed]
  6. Marshall, W.; Findlay, G.; Albantakis, L.; Tononi, G. Intrinsic Units: Identifying a System’s Causal Grain. bioRxiv 2024. [Google Scholar] [CrossRef]
  7. IIT Wiki. IIT—An Online Resource on Integrated Information Theory. Available online: https://www.iit.wiki (accessed on 25 September 2025).
  8. Barbosa, L.S.; Marshall, W.; Streipert, S.; Albantakis, L.; Tononi, G. A measure for intrinsic information. Sci. Rep. 2020, 10, 18803. [Google Scholar] [CrossRef] [PubMed]
  9. Pearl, J. Causality: Models, Reasoning, and Inference, 2nd ed.; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar] [CrossRef]
  10. Mayner, W.G.P.; Marshall, W.; Albantakis, L.; Findlay, G.; Marchman, R.; Tononi, G. PyPhi: A Toolbox for Integrated Information Theory. PLoS Comput. Biol. 2018, 14, e1006343. [Google Scholar] [CrossRef] [PubMed]
  11. Leibniz, G.W. The Monadology. In Philosophical Essays; Ariew, R.; Garber, D., Translators; Hackett Publishing: Indianapolis, IN, USA, 1989. [Google Scholar]
  12. Hoel, E.P.; Albantakis, L.; Tononi, G. Quantifying Causal Emergence Shows That Macro Can Beat Micro. Proc. Natl. Acad. Sci. USA 2013, 110, 19790–19795. [Google Scholar] [CrossRef] [PubMed]
  13. Albantakis, L.; Marshall, W.; Hoel, E.; Tononi, G. What Caused What? A Quantitative Account of Actual Causation Using Dynamical Causal Networks. Entropy 2019, 21, 459. [Google Scholar] [CrossRef] [PubMed]
  14. Bassi, A.; Dorato, M.; Ulbricht, H. Collapse Models: A Theoretical, Experimental and Philosophical Review. Entropy 2023, 25, 645. [Google Scholar] [CrossRef] [PubMed]
  15. Santo, F.D.; Gisin, N. Potentiality Realism: A Realistic and Indeterministic Physics Based on Propensities. Eur. J. Philos. Sci. 2023, 13, 58. [Google Scholar] [CrossRef] [PubMed]
  16. Wilting, J.; Priesemann, V. 25 years of criticality in neuroscience. Curr. Opin. Neurobiol. 2019, 58, 105–111. [Google Scholar] [CrossRef] [PubMed]
  17. Beggs, J.M.; Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 2003, 23, 11167–11177. [Google Scholar] [CrossRef] [PubMed]
  18. Mediano, P.A.; Seth, A.K.; Barrett, A.B. Integrated information as a common signature of dynamical complexity. Chaos Interdiscip. J. Nonlinear Sci. 2022, 32, 013115. [Google Scholar] [CrossRef] [PubMed]
  19. Klyubin, A.S.; Polani, D.; Nehaniv, C.L. Empowerment: A Universal Agent-Centric Measure of Control. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 128–135. [Google Scholar] [CrossRef]
  20. Zaeemzadeh, A.; Tononi, G. Shannon Information and Integrated Information: Message and Meaning. arXiv 2024. [Google Scholar] [CrossRef]
  21. Tononi, G.; Albantakis, L.; Boly, M.; Cirelli, C.; Koch, C. Only What Exists Can Cause: An Intrinsic View of Free Will. arXiv 2022. [Google Scholar] [CrossRef]
Figure 1. Intrinsic information captures the tradeoff between determinism and indeterminism. (A) Causal diagrams of three systems, each consisting of three elements a, b, and c. Arrows indicate the direction and strength of causal connections among the elements. The systems have different dynamics: fully deterministic (top, uniform connectivity), uniformly random (middle, no connectivity), and a system balanced between those extremes (bottom, nonuniform connectivity). (B) The TPMs that describe each system. (C) Effect repertoires of each system for the state 000. (D) Intrinsic differentiation i_diff, intrinsic specification i_spec, and intrinsic information ii(s) are shown for each repertoire, illustrating how intrinsic information captures the tradeoff between determinism and indeterminism. The fully deterministic system has i_diff(s) = 0, and thus ii(s) = 0. The uniformly random system has i_spec(s) = 0 and so also has ii(s) = 0. The balanced system, which defines several possible alternative effect states (i_diff(s) > 0) while also specifying one of them (i_spec(s) > 0), achieves positive intrinsic information ii(s) > 0.
Figure 2. Example 1: (A) A single-unit system (a monad). The unit stays in the current state with probability p, and switches to its other state with probability 1 − p. (B) The transition probability matrix when p = 0.9. (C) The intrinsic differentiation and intrinsic specification of the monad as a function of p. The intrinsic information (and thus integrated information) is maximized at p = 0.744, where φ_s = 0.427.
Figure 3. Example 2: Exploring the behavior of intrinsic information as a function of system size. Consider a system of n units that specifies a future state with probability p, with the remaining mass spread uniformly across potential alternative states. (A) The value of p that optimizes intrinsic information (p*), as a function of system size. (B) The log ratio of p* to the probability of alternative states as a function of system size. (C) The value of intrinsic information at p = p* as a function of system size. Taken together, we see that although p* decreases with system size, the ratio between it and the probability of alternative states increases, resulting in increasing intrinsic information as system size increases. (D) An example 6-unit system originally analyzed in [1]. The system has a temperature parameter K that modulates the intrinsic differentiation of the system. (E) A partial transition probability matrix for the system, highlighting that the system specifies a particular state in each row. (F) As K increases, the intrinsic differentiation decreases and the intrinsic specification increases. (G) Integrated information φ_s is determined by intrinsic differentiation for low values of K, and by intrinsic specification for high values of K. Overall, φ_s is maximized at an intermediate value of K, balancing intrinsic differentiation and specification.
Figure 4. Example 3: Intrinsic units. (A) A two-unit micro system implementing an imperfect AND logic with parameter p. The transition probability matrix is shown for p = 0.1. (B) The system S = {a, b} satisfies the ‘maximally irreducible within’ condition of [6], and can thus be considered as a macro unit. The macro unit α is defined by mapping states (a, b) = (0, 0), (0, 1), and (1, 0) to α = 0, and (a, b) = (1, 1) to α = 1. The resulting transition probability matrix for α is shown. (C) φ_s of (A) and (B) for different values of p ∈ (0, 0.5). For p < 0.096, we find φ_s({a, b}) > φ_s(α), while for p > 0.096, φ_s(α) > φ_s({a, b}). Thus, whether the macro or micro system has higher integrated information depends on intrinsic differentiation (via p).
