# Mathematical Models of Consciousness

## Abstract


## 1. Introduction

#### 1.1. The Rising Importance of Mathematics in Consciousness Studies

#### 1.2. What Makes Consciousness a Problem

#### 1.3. The Need for a Mathematical Foundation

#### 1.4. A Framework for Formulating Models of Consciousness

#### 1.5. An Axiomatic Conceptual Underpinning

#### 1.6. A New Way of Consciousness Science

#### 1.7. The Structure of this Article

## 2. Summary of Results

**Definition 1.**

- an explicit definition of what is to be studied.
- an explicit outline of the methodology.

**Definition 6.**

“Many scientific discoveries have been delayed over the centuries for the lack of a mathematical language that can amplify ideas and let scientists communicate results.” [34] (p. 427)

## 3. Basic Definitions

#### 3.1. Conscious Experience and Qualia

**Definition 2.**

**Definition 3.**

**Example 1.**

**Definition 4.**

**Definition 5.**

**Phenomenological Axiom 1.**

- (a) Aspects of experience that are non-collatable.
- (b) Aspects of experience that are collatable.

**Example 2.**

**Example 3.**

**Definition 6.**

**Example 4.**

**Example 5.**

“[F]undamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for that organism. We may call this the subjective character of experience.” [22] (p. 436)

#### 3.2. Formal Representation of Experience

**Phenomenological Axiom 2.**

**Example 6.**

**Phenomenological Axiom 3.**

**Example 7.**

- ▸ Similarity: Two qualia can be more or less similar;
- ▸ Intensity: A quale can occur in more or less intense versions.

**Example 8.**

**Convention 1.**

#### 3.3. References to Qualia

#### 3.3.1. Preparation: References that Ignore Relations

#### 3.3.2. Taking Relations into Account

**Remark 1.**

#### 3.4. A Phenomenological Grounding of the Scientific Study of Consciousness

**Definition 7.**

#### 3.5. Examples

**Example 9.**

**Example 10.**

**Example 11.**

**Example 12.**

**Example 13.**

- (A1) We assume that with respect to any two qualia of one experiencing subject, the experiencing subject might have an experience that has exactly these qualia as ineffable aspects.
- (A2) We assume that there is a unique neutral quale, which we denote by “0”. Furthermore, we assume that for every quale $e$, there is a quale $-e$ such that an experience that includes both $e$ and $-e$ is not distinguishable from (and hence equal to) the experience of the neutral quale.

- ▸ (A1) and (A2) imply that $(E, \oplus)$, with $\oplus : E \times E \to E$, is an abelian group.
- ▸ The intensity relation of Phenomenological Axiom 3 may be taken to give rise to a scalar multiplication $\odot : \mathbb{R} \times E \to E$.
- ▸ $(E, \oplus, \odot)$ satisfies the axioms of a vector space.
- ▸ $(E, \oplus, \odot, \langle \cdot,\cdot \rangle)$ satisfies the axioms of an inner product space or pre-Hilbert space.
- ▸ The experience space $E$ carries the structure of a real Hilbert space $\overline{(E, \oplus, \odot, \langle \cdot,\cdot \rangle)}$, which we denote by $\mathcal{H}_E$.
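The chain of structures above can be made concrete with a toy model. The following sketch is ours, not the paper's: it takes $E = \mathbb{R}^3$ purely for illustration, with $\oplus$ as vector addition, $\odot$ as scalar multiplication, and the collatable similarity relation as a Euclidean inner product, and checks a few of the abelian-group and vector-space axioms on sample qualia labels.

```python
# Toy sketch (an assumption for illustration, not the paper's construction):
# the experience space E is modeled as R^3.
import math

def oplus(e1, e2):          # (A1): combining two qualia yields a quale
    return tuple(a + b for a, b in zip(e1, e2))

def neg(e):                 # (A2): the inverse quale -e
    return tuple(-a for a in e)

def odot(r, e):             # intensity scaling (Phenomenological Axiom 3)
    return tuple(r * a for a in e)

def inner(e1, e2):          # similarity modeled as an inner product
    return sum(a * b for a, b in zip(e1, e2))

ZERO = (0.0, 0.0, 0.0)      # the neutral quale "0"

e = (1.0, 2.0, 0.5)
assert oplus(e, neg(e)) == ZERO        # e + (-e) = 0, as (A2) requires
assert oplus(e, ZERO) == e             # 0 is neutral
assert odot(2.0, e) == oplus(e, e)     # scalar multiplication is compatible
norm = math.sqrt(inner(e, e))          # intensity as the induced norm
```

Completing this inner product space then yields the Hilbert space $\mathcal{H}_E$ of the final bullet; in the finite-dimensional toy case, $\mathbb{R}^3$ is already complete.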

## 4. Explanatory Gap

## 5. The Mathematical Structure of Models of Consciousness

#### 5.1. The Mathematical Structure of Scientific Theories

**Definition 8.**

- ▸ A set of dynamical variables d. (Those quantities whose variation is determined by T to some extent.²⁴)
- ▸ Some background structure b. (Variables, or general mathematical structures, whose change is not determined by T. Background structure needs to be fixed in order to determine the variation of d in a particular application.) The variation or change of the dynamical variables of a theory can be expressed with respect to some parameter t that takes values in some set $\mathcal{I}$. Typically, the parameter is assumed continuous and interpreted as time. However, this is not necessary: the set $\mathcal{I}$ may or may not carry some mathematical structure (such as a topology), and it may or may not be interpretable as time.
- ▸ A set of kinematically possible trajectories $\mathcal{K}$. Sometimes, this includes all possible trajectories, $\mathcal{K}=\{(d_t)_{t\in \mathcal{I}}\}$, but in many cases, trajectories need to satisfy certain mathematical requirements, such as differentiability with respect to the parameter t.
- ▸ Some laws $\mathcal{L}$. (Typically equations or variational principles, but $\mathcal{L}$ may also include different formal ingredients (such as those provided by category theory) or even non-formal ingredients, as required by pragmatic accounts of scientific theories.)
- ▸ A set of dynamically possible trajectories $\mathcal{D}$, which we also call solutions of T. These are those kinematically possible trajectories ($\mathcal{D}\subset \mathcal{K}$) that are selected by the theory’s laws in a particular application of the theory, given some choice of background structure and possibly taking into account some “nonformal patterns in theories” [46] (p. 55).
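The ingredients of Definition 8 can be illustrated with a deliberately simple toy theory, a one-dimensional harmonic oscillator. The names and numerical tolerances below are ours: the background structure b is the frequency, $\mathcal{K}$ is a set of candidate trajectories, the law $\mathcal{L}$ is the equation of motion, and $\mathcal{D}$ collects the trajectories the law selects.

```python
# Illustrative sketch of Definition 8 for a toy theory T: a 1-D harmonic
# oscillator. Names (omega, candidates, satisfies_law) are our inventions.
import math

# Background structure b: the frequency, fixed per application of T.
omega = 2.0

# Kinematically possible trajectories K: candidate functions t -> d(t).
candidates = {
    "solution":     lambda t: math.cos(omega * t),
    "non-solution": lambda t: t * t,
}

def satisfies_law(x, t, h=1e-4, tol=1e-3):
    """Law L: residual of x'' + omega^2 x = 0 at parameter value t."""
    second_deriv = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return abs(second_deriv + omega**2 * x(t)) < tol

# Dynamically possible trajectories D: the subset of K selected by L.
D = [name for name, x in candidates.items()
     if all(satisfies_law(x, t) for t in (0.3, 1.0, 2.5))]
assert D == ["solution"]
```

The parameter t here is interpreted as time, but as the definition notes, nothing in the scheme requires that interpretation.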

**Definition 9.**

- (a) There is an effective action $\mathcal{G}\times \mathcal{K}\to \mathcal{K}$ of $\mathcal{G}$ on $\mathcal{K}$.²⁵
- (b) This action leaves the solutions $\mathcal{D}$ of T invariant.
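A standard physics example of Definition 9, sketched here on our own initiative: time translations acting on trajectories of $x'' = -x$. Translating a trajectory in time maps kinematically possible trajectories to kinematically possible trajectories, and solutions to solutions, so condition (b) holds.

```python
# Sketch: the group of time translations acting on trajectories of
# x'' = -x, leaving the solution set D invariant. Tolerances are ours.
import math

def is_solution(x, ts=(0.1, 0.7, 1.9), h=1e-4, tol=1e-3):
    """Check the law x'' + x = 0 at a few sample parameter values."""
    return all(abs((x(t + h) - 2 * x(t) + x(t - h)) / h**2 + x(t)) < tol
               for t in ts)

def translate(x, s):
    """Group action of s in R: (sigma_s x)(t) = x(t + s)."""
    return lambda t: x(t + s)

x = math.sin                            # a solution of x'' = -x
assert is_solution(x)
assert is_solution(translate(x, 0.5))   # the action maps D into D
```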

#### 5.2. Models of Consciousness

**Definition 10.**

- (i) The dynamical variables are a Cartesian product of the physical state space P of $T_P$ together with one copy of the experience space E for each experiencing subject,
  $$d = \underbrace{E \times E \times \dots \times E}_{\text{experiencing subjects}} \times P\,.$$
- (ii) Kinematically possible trajectories $\mathcal{K}$ are a subset of families of dynamical variables,
  $$\mathcal{K} \subset \left\{ \left(e_t^1, e_t^2, \dots, e_t^n, p_t\right)_{t \in \mathcal{I}} \right\},$$
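The product structure of the dynamical variables can be sketched as a simple data type. This is our illustration, not the paper's notation made executable: the experience space E and physical state space P are stand-ins (label pairs and a scalar, respectively), and a trajectory is a family of tuples $(e_t^1, \dots, e_t^n, p_t)$ indexed by t.

```python
# Sketch of Definition 10's dynamical variables: one experience-space
# entry per experiencing subject plus a physical state. E and P are toys.
from typing import NamedTuple, Tuple

class State(NamedTuple):
    experiences: Tuple[Tuple[float, float], ...]  # (e^1, ..., e^n), e^i in E
    physical: float                               # p in P

def trajectory(t: float) -> State:
    """A toy family (e_t^1, e_t^2, p_t) indexed by t in I = R."""
    return State(experiences=((t, 0.0), (0.0, t)), physical=t * t)

s = trajectory(2.0)
assert len(s.experiences) == 2     # n = 2 experiencing subjects
assert s.physical == 4.0
```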

#### 5.3. Notation

## 6. Taking Characteristic Features of Conscious Experience into Account

#### 6.1. Non-Collatability Implies Symmetry

**Remark 2.**

- Passive meaning of $\sigma$: $k$ and $\sigma_{\overline{s}}(k)$ are the same trajectory expressed in different labelings.
- Active meaning of $\sigma$: $k$ and $\sigma_{\overline{s}}(k)$ are different trajectories expressed in the same labeling.
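The passive/active distinction can be made tangible with a toy relabeling. In this sketch of ours, a bijection of the label set E is applied entrywise to the experience part of a trajectory value while the physical part is untouched; whether the result is read as the same trajectory in new labels (passive) or a new trajectory in old labels (active) is a matter of interpretation, exactly as in the remark.

```python
# Toy relabeling sigma acting on a trajectory value (e^1, ..., e^n, p).
# The label set and the bijection are invented for illustration.
relabel = {"red": "rouge", "loud": "fort"}   # a bijection s: E -> E

def sigma(k):
    """Apply the relabeling to each experience entry; p is untouched."""
    experiences, p = k
    return tuple(relabel[e] for e in experiences), p

k = (("red", "loud"), 0.7)                   # (e^1, e^2, p) at one instant
assert sigma(k) == (("rouge", "fort"), 0.7)
```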

**Definition 11.**

**Lemma 1.**

**Proof.**

#### 6.2. The Mathematical Structure of Models of Consciousness Revisited

**Definition 12.**

**Example 14.**

#### 6.3. Comparison with Direct Reference

**Lemma 2.**

**Proof of Lemma 2.**

## 7. Closure of the Physical

**Definition 13.**

## 8. Examples

#### 8.1. Integrated Information Theory

“[T]he central identity [of IIT] is the following: The maximally irreducible conceptual structure (MICS) generated by a [subsystem S] is identical to its experience. The constellation of concepts of the MICS completely specifies the quality of the experience (...). Its irreducibility $\Phi^{\mathrm{max}}$ specifies its quantity.” [7] (p. 3).
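The intuition behind irreducibility measures can be conveyed with a deliberately crude whole-versus-parts comparison. The following sketch is NOT the IIT 3.0 algorithm (for that, see the PyPhi toolbox [8]); it merely contrasts a joint distribution of two units with the product of its marginals via mutual information, our stand-in for "what the whole carries beyond its parts".

```python
# Toy whole-vs-parts measure: mutual information in bits between two
# binary units. A crude illustration of "irreducibility", not Phi itself.
import math

def mutual_information(joint):
    """I(A;B) in bits for a joint distribution given as {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
coupled     = {(0, 0): 0.5, (1, 1): 0.5}     # perfectly correlated units

assert abs(mutual_information(independent)) < 1e-12    # reducible: 0 bits
assert abs(mutual_information(coupled) - 1.0) < 1e-12  # integrated: 1 bit
```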

#### 8.1.1. Integrated Information-Induced Quantum Collapse

#### 8.2. Global Neuronal Workspace Theory

- [D1] A set $N_{\mathrm{ISG}} \subset N_v$ of components of the physical system constitutes an irreducible subgraph (ISG) if there is a directed edge between any ordered pair of components in this set [51] (p. 25).

- [N1] The system S needs to contain two disjoint subsets $N_p, N_g \subset N_v$ of components: first, a set $N_p$ of components whose induced subnetwork is a network of ISGs, where the inter-ISG-connections are feed-forward only; second, a set $N_g$ of components with directed edges going from this set into all ISGs, and directed edges going to this set from all ISGs.

- [N2] The induced subnetwork of $N_g$ needs to be such that at any time t, its state “represents” only one of the ISGs’ dynamical attractors $m_k(t)$.
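Condition [D1] is a purely graph-theoretic check, which can be sketched directly. In this reading (an assumption on our part), "any ordered pair" refers to ordered pairs of distinct components, i.e., the induced directed subgraph must be complete.

```python
# Sketch of [D1]: a node set forms an ISG if the induced directed graph
# is complete, i.e., every ordered pair of distinct nodes has an edge.
def is_isg(nodes, edges):
    """nodes: iterable of node ids; edges: set of directed (u, v) pairs."""
    ns = list(nodes)
    return all((u, v) in edges for u in ns for v in ns if u != v)

edges = {(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (3, 4)}
assert is_isg({1, 2, 3}, edges)      # every ordered pair is connected
assert not is_isg({3, 4}, edges)     # (4, 3) is missing
```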

#### 8.3. Conscious Agent Networks

- ▸ X is a space that describes possible experiences of the conscious agent. Each element $x\in X$ represents a particular experience.
- ▸ G is a space that describes dispositions or intentions to act. Each element $g\in G$ corresponds to an action the agent has decided to carry out.
- ▸ $P:W\to X$ is a map that describes the agent’s “process of perception” [20] (p. 6). It specifies what the conscious agent experiences in response to the “world” being in a particular state $w\in W$.
- ▸ $D:X\to G$ is a map that models how the experience of the agent determines its disposition for an action, i.e., “the process of decision [in which] a conscious agent chooses what actions to take based on the conscious experiences it has” (ibid.).
- ▸ $A:G\to W$ describes how the agent’s disposition for an action “is carried out”, i.e., how it affects the world: “In the process of action, the conscious agent interacts with the world in light of the decision it has taken, and affects the state of the world” (ibid.).
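The perception-decision-action cycle formed by composing these maps can be sketched with small finite stand-ins for W, X, and G. Two simplifications are ours: the maps are deterministic functions rather than the Markovian kernels of [20], and our toy A takes the current world state as a second argument so the action can modify it.

```python
# Toy conscious-agent loop: deterministic stand-ins for the maps
# P : W -> X, D : X -> G, A : G -> W of a single conscious agent.
def P(w):            # perception: world state -> experience
    return "bright" if w > 0.5 else "dark"

def D(x):            # decision: experience -> disposition to act
    return "retreat" if x == "bright" else "advance"

def A(g, w):         # action: disposition (and current world) -> new world
    return w - 0.3 if g == "retreat" else w + 0.3

w = 0.9
for _ in range(3):   # iterate the perception-decision-action cycle
    w = A(D(P(w)), w)
assert abs(w - 0.6) < 1e-9
```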

- (a) Whether the model would like to address aspects of experience that are non-collatable.
- (b) Whether the theory would (eventually or in principle) like to make predictions with respect to experiments that involve (reports of) conscious agents.

#### 8.4. Expected Float Entropy Minimization

## 9. Conclusions and Outlook

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Chalmers’ Grounding of the Scientific Study of Consciousness

- (E1) An explanation specifies the function and structure of an explanandum in terms of the function and structure of accepted theoretical notions.

- (D1) Phenomenal aspects of consciousness are those aspects of conscious experience which do not have a function or structure, where ‘function’ and ‘structure’ are as defined above.

## Appendix B. Conceptual Problems of Chalmers’ Grounding

#### Appendix B.1. Closure of the Physical

**Remark A1.**

- (F1) A quantity $a\in \{q,c,m_1,m_2\}$ is functionally dependent on a quantity $b\in \{q,c,m_1,m_2\}$ according to some model of the MTCp setup iff, according to this model, a is a non-constant function of b.³⁴

- (A2) In both Chalmers’ grounding (CG) and the phenomenological grounding (PG), the assumption of the closure of the physical (ACoP) implies that the states c of communication channels cannot be functionally dependent on q.³⁵

- (C1) A necessary condition for communication between $\mathcal{S}_1$ and $\mathcal{S}_2$ about $\mathcal{Q}$ is that $m_2$ may depend functionally on q.

#### Appendix B.2. Experiments

#### Appendix B.3. Subsumed Notion of Explanation

#### Appendix B.4. Causality

## References

1. Faw, B. Consciousness, modern scientific study of. In The Oxford Companion to Consciousness; Bayne, T., Cleeremans, A., Wilken, P., Eds.; Oxford University Press: Oxford, UK, 2014.
2. Seth, A. Models of consciousness. Scholarpedia 2007, 2, 1328.
3. Dehaene, S.; Kerszberg, M.; Changeux, J.P. A neuronal model of a global workspace in effortful cognitive tasks. Proc. Natl. Acad. Sci. USA 1998, 95, 14529–14534.
4. Baars, B.J. Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog. Brain Res. 2005, 150, 45–53.
5. Dennett, D.C. Consciousness Explained; Penguin: London, UK, 1993.
6. Carruthers, P. Higher-Order Theories of Consciousness. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2016.
7. Oizumi, M.; Albantakis, L.; Tononi, G. From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588.
8. Mayner, W.G.P.; Marshall, W.; Albantakis, L.; Findlay, G.; Marchman, R.; Tononi, G. PyPhi: A toolbox for integrated information theory. PLoS Comput. Biol. 2018, 14, e1006343.
9. Haun, A.; Tononi, G. Why does space feel the way it does? Towards a principled account of spatial experience. Entropy 2019, 21, 1160.
10. Kleiner, J.; Tull, S. The mathematical structure of integrated information theory. arXiv 2020, arXiv:2002.07655.
11. Tull, S.; Kleiner, J. Integrated information in process theories. arXiv 2020, arXiv:2002.07654.
12. Metzinger, T.; Wiese, W. Philosophy and Predictive Processing; MIND Group: Frankfurt am Main, Germany, 2017.
13. Dołęga, K.; Dewhurst, J.E. Fame in the predictive brain: A deflationary approach to explaining consciousness in the prediction error minimization framework. Synthese 2020.
14. Chalmers, D.; McQueen, K. Consciousness and the collapse of the wave function. In Quantum Mechanics and Consciousness; Oxford University Press: New York, NY, USA, forthcoming.
15. Kent, A. Quanta and qualia. Found. Phys. 2018, 48, 1021–1037.
16. Kent, A. Toy Models of Top Down Causation. arXiv 2019, arXiv:1909.12739.
17. Penrose, R. Shadows of the Mind; Oxford University Press: Oxford, UK, 1994; Volume 4.
18. Kremnizer, K.; Ranchin, A. Integrated Information-induced quantum collapse. Found. Phys. 2015, 45, 889–899.
19. Mason, J.W.D. Quasi-conscious multivariate systems. Complexity 2016, 21, 125–147.
20. Hoffman, D.D.; Prakash, C. Objects of consciousness. Front. Psychol. 2014, 5, 577.
21. Metzinger, T. The problem of consciousness. In Conscious Experience; Metzinger, T., Ed.; Imprint Academic: Exeter, UK, 1995; pp. 3–37.
22. Nagel, T. What is it like to be a bat? Philos. Rev. 1974, 83, 435.
23. Lewis, C. Mind and the World Order; C. Scribner’s Sons: New York, NY, USA, 1929.
24. Metzinger, T. Grundkurs Philosophie des Geistes Band 1–3; Mentis: Paderborn, Germany, 2007.
25. Metzinger, T. Conscious Experience; Imprint Academic: Exeter, UK, 1995.
26. Tsuchiya, N.; Taguchi, S.; Saigo, H. Using category theory to assess the relationship between consciousness and integrated information theory. Neurosci. Res. 2016, 107, 1–7.
27. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biol. Bull. 2008, 215, 216–242.
28. Atmanspacher, H. On macrostates in complex multi-scale systems. Entropy 2016, 18, 426.
29. Resende, P. Quanta and Qualia. In Proceedings of the Workshop on Combining Viewpoints in Quantum Theory, ICMS, Edinburgh, UK, 19–22 March 2018. Talk provided by the author.
30. Chalmers, D. The Conscious Mind: In Search of a Fundamental Theory; Oxford University Press: New York, NY, USA, 1996.
31. Chalmers, D. The Character of Consciousness; Oxford University Press: New York, NY, USA; Oxford, UK, 2010.
32. Schlick, M. Form and content: An introduction to philosophical thinking. In Gesammelte Aufsätze 1926–1936; Gerold: Vienna, Austria, 1938.
33. Kleiner, J.; Hoel, E. Falsification and consciousness. arXiv 2020, arXiv:2004.03541.
34. Pearl, J. Causality: Models, Reasoning, and Inference, 9th ed.; Cambridge University Press: Cambridge, UK, 2009.
35. Churchland, P.M. Eliminative materialism and propositional attitudes. J. Philos. 1981, 78, 67–90.
36. Kuehni, R. Color spaces. Scholarpedia 2010, 5, 9606.
37. Resnikoff, H.L. Differential geometry and color perception. J. Math. Biol. 1974, 1, 97–131.
38. Provenzi, E. Principal Fiber Bundles and Geometry of Color Spaces. In Proceedings of the Second International Conference on Advances in Signal, Image and Video Processing, Barcelona, Spain, 21–25 May 2017.
39. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2004, 30, 21–30.
40. Prentner, R. Consciousness and topologically structured phenomenal spaces. Conscious. Cogn. 2019, 70, 25–38.
41. Pervin, W.J. Foundations of General Topology; Academic Press: Cambridge, MA, USA, 1964.
42. nLab authors. Pretopological space. Revision 10. Available online: https://ncatlab.org/nlab/show/pretopological+space (accessed on 1 May 2020).
43. Levine, J. Materialism and qualia: The explanatory gap. Pac. Philos. Q. 1983, 64, 354–361.
44. Woodward, J. Scientific Explanation. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2017.
45. Winther, R.G. The structure of scientific theories. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2016.
46. Craver, C.F. Structures of Scientific Theories. In The Blackwell Guide to the Philosophy of Science; Machamer, P., Silberstein, M., Eds.; Blackwell Publishers: Hoboken, NJ, USA, 2002; Chapter 4; pp. 55–79.
47. Giulini, D. Concepts of symmetry in the work of Wolfgang Pauli. In Recasting Reality: Wolfgang Pauli’s Philosophical Ideas and Contemporary Science; Atmanspacher, H., Primas, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 33–82.
48. Chalmers, D.J. Absent qualia, fading qualia, dancing qualia. In Conscious Experience; Metzinger, T., Ed.; Imprint Academic: Exeter, UK, 1995.
49. Bishop, R.C. The Hidden Premise in the Causal Argument for Physicalism. Analysis 2005, 66, 44–52.
50. Dehaene, S.; Changeux, J.P.; Naccache, L. The global neuronal workspace model of conscious access: From neuronal architectures to clinical applications. In Characterizing Consciousness: From Cognition to the Clinic? Dehaene, S., Christen, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 55–84.
51. Grindrod, P. On human consciousness: A mathematical perspective. Netw. Neurosci. 2018, 2, 23–40.
52. Wallace, R. Consciousness: A Mathematical Treatment of the Global Neuronal Workspace Model; Springer: Berlin/Heidelberg, Germany, 2005.
53. Dehaene, S.; Naccache, L. Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 2001, 79.
54. Elitzur, A.C. Consciousness makes a difference: A reluctant dualist’s confession. In Irreducibly Conscious: Selected Papers on Consciousness; Batthyany, A., Elitzur, A.C., Eds.; Winter: Heidelberg, Germany, 2009.
55. Floridi, L. Semantic conceptions of information. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2017.
56. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
57. Schaffer, J. The metaphysics of causation. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2016.

1. The term “grounding” is one of several translations of the German word “Grundlegung”.

2. No special focus on subjectivity is intended when using the term “experiencing subject”. Alternatively, one could use the term “experiencer”. We also remark that the meaning of “instant” is to be fixed during the model-building process. It could refer to physical just as well as to experiential instants of time.

3. The restriction to a class $\mathcal{C}$ of experiencing subjects is necessary because a phenomenological analysis of invariants of experience is always restricted to experiencing subjects that are similar in some respects: “[O]ne person can know or say of another what the quality of the other’s experience is. [However, this] ascription of experience is possible only for someone sufficiently similar to the object of ascription to be able to adopt his point of view” [22] (p. 442). However, the choice of class $\mathcal{C}$ is not a constraint for models of consciousness, but rather a starting point, i.e., a preliminary choice that informs the model-building process. Models may eventually allow one to determine which organisms experience. We note also that the name “phenomenological axiom” is a tribute to phenomenology rather than an attempt to condense the phenomenological method into a simple definition.

4. I.e., an aspect e experienced by subject $\mathcal{S}_1$ is non-collatable iff there is a different experiencing subject $\mathcal{S}_2$ such that there is no reasonable method to determine with which aspect $e'$ of $\mathcal{S}_2$ the aspect e of $\mathcal{S}_1$ is identical. Put in yet different terms, this is the case if there is no mapping from e to the aspects of experiences of other experiencing subjects that can reasonably be interpreted as establishing the identity of aspects.

5. When being presented with Phenomenological Axiom 1, scientists usually tend to think about how this can be derived from a theory of language. In our opinion, the more important task is to ground the underlying distinction in a thorough phenomenological analysis. We also remark that all formal constructions in this article are compatible with either of the classes in Phenomenological Axiom 1 being empty, even though this is most likely not the case.

6. We generally abbreviate “color aspects of experience” by “color experience”.

7. Note that throughout this section, assumptions are in fact conventions. E.g., this assumption can be satisfied by asking experiencing subjects (in an experiment, say) to choose labels as described. The assumptions can be made “without loss of generality”, so to speak.

8. Phenomenological Axiom 3 states that there are relations between qualia that are collatable. This expresses the observations in [22,30,32] that structural features of perception, relations between experiences, or the form of experience might be more accessible to communication or objective description. However, one might question whether this axiom is warranted and insist that relations between experiences are not (strictly, at least) collatable (thanks to an anonymous referee for pointing this out). The formalism developed here requires the collatability of relations, so that any non-collatable relation has to be ignored.

9. Similarity and intensity are simple examples of collatable relations between qualia. There may be many more collatable relations that express facts about how qualia appear in experience, some of which may only relate qualia of a particular type to each other. Further examples arguably include: Composition: Some qualia are experienced as a composition of two (or more) different qualia, i.e., the composed quale is but a combination (or simultaneous experience) of the composing qualia. Inclusion: Some qualia may be experienced as containing one (or more) other qualia. Here, the contained quale is but an aspect of the containing quale. Furthermore, the distinction between various types (visual, auditory, tactile, etc.) of non-collatable aspects of experience is a relation in the sense of Phenomenological Axiom 3.

10. Note that this example is complicated by the fact that we calibrate color experiences in practice: we apply or learn rules on how to pick color labels related to external events such as wavelength impinging on the eye. This will be discussed in detail in Example 9 below. What is crucial is that a priori, individual labels so chosen do not correlate with color experience: two experiencing subjects may have a completely different color experience despite using the same label “blue”.

11. Here, the term “mathematical space” is used to refer to a set that carries additional mathematical structure. Examples are metric spaces, topological spaces, vector spaces, differentiable manifolds, principal bundles, measurable spaces, and Hilbert spaces.

12. A unary relation on E is simply a subset of E.

13. For all practical purposes, one can obtain such a representation by simply asking one experiencing subject to pick a labeling and to report, in terms of this labeling, on his/her experienced relations. Other experiencing subjects are then required to choose labels according to this representation. For details, see Example 9. For explicit examples of how such a representation might look, cf. Examples 10–13.

14. One may even take this to be unlikely, given the difference of brain physiology and neuronal structure across individuals.

15. This is an experimental fact that is due to the biological details of the cone cells in the human eye. Since various mixtures $\overline{\lambda}$ evoke the same color experience, some conventions have to be made in order to fix the subset S uniquely (e.g., a choice of reference wavelengths). Furthermore, due to the particular responsivity curves of the cone cells, no finite set of wavelengths can be combined to achieve all colors that a human can experience. However, suitable experimental procedures exist so that all visible mixtures can be represented in $\mathbb{R}^3$ nevertheless [36].

16. We take it that straight lines describe mixtures of color experiences, which have to be distinguished from the experience of mixtures of colors. Thanks to an anonymous referee for pointing this out.

17.

18. Since the ordering of distances between pairs of colors, rather than the numerical value of the distance itself, is collatable, one could make the point that the relabeling freedom is given by the group of diffeomorphisms that leave the metric invariant up to a conformal factor. Since the present example is mainly of pedagogical interest, we do not explore this further at this point; cf. also Footnote 17.

19. We note that it is possible that some sequences $(e_1, \dots, e_n)$ are not ambiguous, i.e., that $[(e_1, \dots, e_n)] = \{(e_1, \dots, e_n)\}$. This means that there is one unique sequence of color experiences that has the properties represented by the sequence $(e_1, \dots, e_n)$ of labels, or put differently, that there is only one possible choice of labels for this sequence that takes into account the collatable relations as described. Sequences of this kind may be used to remove the ambiguity of the labels they contain and make these aspects of experience accessible to a proper scientific analysis.

20. The following definitions and their relation to topology are intuitively accessible if one thinks about open balls in a metric space such as $\mathbb{R}^3$, where ∘ is defined as overlap. We remark, however, that the construction does not give rise to a topology, as claimed in [40], since the third Kuratowski closure axiom (idempotence) does not follow.
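The failure of idempotence noted in Footnote 20 can be checked on a tiny example. The following sketch is our own construction under simplifying assumptions: three points on a line, with the preclosure of a set A taken to be all points whose ball of a fixed radius overlaps A. Applying the operator twice grows the set, so the Kuratowski idempotence axiom fails and no topology results.

```python
# Toy check: a preclosure operator built from ball overlap is not
# idempotent, illustrating why the construction yields no topology.
points = [0, 1, 2]               # a toy metric space, d(x, y) = |x - y|
R = 1.5                          # ball radius defining the overlap relation

def cl(A):
    """Preclosure: all points whose radius-R ball meets A."""
    return {x for x in points if any(abs(x - a) <= R for a in A)}

A = {0}
assert cl(A) == {0, 1}           # 2 is farther than R from 0
assert cl(cl(A)) == {0, 1, 2}    # applying cl again grows the set
assert cl(cl(A)) != cl(A)        # idempotence (Kuratowski axiom) fails
```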

21. The term “instant of time” may refer to experiential instants of time or to instants of time as used in physics, i.e., points $t\in \mathbb{R}$.

22. Here, by “phenomenon”, we mean anything that occurs or manifests itself in a general sense, including both scientifically observable “empirical phenomena” (such as the data of an experiment), as well as what is directly or indirectly perceived (experiences).

23. This is not to say, of course, that every phenomenon can be addressed by scientific means. There may be phenomena to which the scientific method cannot be applied. However, it seems that the only way to establish whether this is the case for a particular phenomenon is to try to develop a suitable methodology and, if successful, to apply it.

24. We use the word “variable” in a general sense here: a variable may represent something as simple as a natural number just as well as an operator-valued field on some manifold.

25. An action is effective (≡ faithful) if and only if no group element other than the identity fixes all elements of $\mathcal{K}$.

26. The subset $\mathcal{K}$ will typically be determined by demanding that families $(e_t^1, e_t^2, \dots, e_t^n, p_t)_{t\in \mathcal{I}}$ satisfy some mathematical properties, such as regularity, which are necessary for the laws $\mathcal{L}$ of T to be well-defined. To exclude pathological cases, we assume that every label $e\in E$ is contained in at least one family $(e_t^1, e_t^2, \dots, e_t^n, p_t)_{t\in \mathcal{I}}\in \mathcal{K}$.

27. Here, $\mathcal{D}$ is the set of solutions of M introduced above. Note that (24) states an identity of sets. It is equivalent to $\sigma_{\overline{s}}(k)\in \mathcal{D}$ for all $k\in \mathcal{D}$.

28.

29. This problem seems to appear in any grounding that exhibits an explanatory gap. One could try to avoid the problem by interpreting (39) in terms of aspects of experience that do not exhibit an explanatory gap. This, however, would raise the question of why a novel law (the algorithm of IIT) should determine those aspects, as compared to some form of neural processing.

30. In [20], the specification furthermore included an integer N that counted perception-decision-action cycles and hence acted as a type of internal “psychological” time, which however we simply replace by the usual parameter $t\in \mathcal{I}$.

31. In [20], some general assumptions were made: the spaces W, X, and G were assumed to be measurable spaces, and the maps P, D, and A were chosen to be Markovian kernels, so that for every element of their domain, each map yielded a probability distribution on their co-domain.

32.

33.

34. Here, by ‘constant function’ we simply refer to functions which are formally dependent on b but whose value remains the same independently of which value b takes. E.g., $f(x,y):=x$ is a constant function of y.

35. We emphasize again that the notion of ‘functional dependence’ is defined by the respective grounding under consideration. Thus it has a somewhat nomological flavor and does not express, e.g., simple covariation. The fact that both groundings contain notions of functional dependence is what allows the present argument to be stated in a comparably concise form.

36. To give one example: As explained in Footnote 35, this argument rests on the notion of functional dependency contained in CG and PG in virtue of psychophysical laws or models of consciousness. In using these, we have avoided the difficult question of what a functional dependency actually expresses (i.e., how it is supposed to be defined and interpreted). E.g., when considering q, c, $m_1$ and $m_2$ as variables, which sort of possible worlds do they describe? Logically possible worlds, conceivable worlds, some sort of nomologically possible worlds? In what way can the assumptions of a grounding restrict these possible worlds and what effect does this have on functional relationships?

37. If one assumes that communication between two experiencing subjects is mediated via communication channels that are part of the physical domain (cf. Remark A1), it follows that all scientifically meaningful data need to be transformed into physical data at some point.

38. One may be able to avoid this last conclusion by insisting that the meaning attributed to d by any experiencing subject is dependent on the law E itself, and if one furthermore argues that a conclusion about which law E best fits nature can be deduced from the meaning of d, despite d itself being determined independently of the former. At the present stage, it seems quite unclear how such a deduction might work, let alone what role an experiment might play in this deduction in the first place.

39. Recall that the term function refers to “any causal role in the production of behavior that a system might perform” [31] (p. 6). One could interpret this as referring to “any change in the behavior of a system” (cf. Appendix B.4). This could, in turn, be taken to mean “any change in the dynamical properties of a system”, which would change the meaning of the claim that “physical accounts explain only structure and function” [31] (p. 105f.) to the following: “Any account given in purely physical terms will suffer from the same problem. It will ultimately be given in terms of the structural and dynamical properties of physical processes, and no matter how sophisticated such an account is, it will yield only more structure and dynamics. While this is enough to handle most natural phenomena, the problem of consciousness goes beyond any problem about the explanation of structure and function [sic], so a new sort of explanation is needed.” [30] (p. 121) However, most or even all aspects of consciousness are dynamical in nature, which implies that the set of phenomenal aspects of consciousness (Definition (D1)) is, given this redefinition of the term ‘function’, either empty or trivial. Put differently, with this redefinition the grounding implies that all or almost all of conscious experience can be addressed by an “account given in purely physical terms”. What is left out are only non-dynamical aspects of experience (if there are such aspects at all).

40. E.g., [34] holds that “[i]f you wish to include the entire universe in the model, causality disappears because interventions disappear—the manipulator and the manipulated lose their distinction. However, scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece in—namely, the focus of investigation. The rest of the universe is then considered out or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about ‘outside intervention’ and hence about causality and cause-effect directionality” [34] (p. 419f.). “What we conclude (...) is that physicists talk, write, and think one way and formulate physics in another” [34] (p. 407).

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Kleiner, J. Mathematical Models of Consciousness. *Entropy* **2020**, *22*, 609. https://doi.org/10.3390/e22060609