Open Access Article

From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation

Jonathan W. D. Mason
Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK
Entropy 2019, 21(1), 60; https://doi.org/10.3390/e21010060
Received: 12 November 2018 / Revised: 4 January 2019 / Accepted: 4 January 2019 / Published: 13 January 2019
(This article belongs to the Section Complexity)
Abstract: Over recent decades, several mathematical theories of consciousness have been put forward, including Karl Friston's Free Energy Principle and Giulio Tononi's Integrated Information Theory. In this article, we further investigate a theory based on Expected Float Entropy (EFE) minimisation, first proposed in 2012. EFE involves a version of Shannon entropy parameterised by relationships. It turns out that, for systems with bias due to learning, certain choices of the relationship parameters are isolated, in that they give much lower EFE values than others; hence, the system defines relationships. It is proposed that, in the context of all these relationships, a brain state acquires meaning in the form of the relational content of the associated experience. EFE minimisation is itself an association learning process, and its effectiveness as such is tested in this article. The theory and results are consistent with the proposition that there is a close connection between association learning processes and the emergence of consciousness. Such a theory may explain how the brain defines the content of consciousness up to relationship isomorphism.
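To make the idea concrete, the following is a minimal illustrative sketch (in Python) of relationship-parameterised entropy minimisation. It does not use the paper's actual definitions of float entropy or EFE; the sampling routine, the candidate relationship matrices, and the toy score function below are all hypothetical stand-ins. It only illustrates the point made in the abstract: when a system's typical states carry learned bias (correlated nodes), certain relationship parameters give a markedly lower expected score than others and are therefore singled out by minimisation.

```python
# Minimal illustrative sketch (NOT the paper's EFE definition): search over
# candidate "relationship" matrices for the one minimising the expected value
# of a toy, relationship-parameterised score on typical (sampled) states.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy "system" with learned bias: 4 binary nodes where nodes 0,1 tend to
# agree and nodes 2,3 tend to agree, up to a little noise.
def sample_state():
    a, b = rng.integers(0, 2, size=2)
    noise = rng.random(4) < 0.1
    return np.array([a, a, b, b]) ^ noise

# Hypothetical relationship parameter: a symmetric 0/1 matrix R saying which
# node pairs are declared "related" (a simplification of weighted relationships).
def candidate_relationships(n):
    pairs = list(itertools.combinations(range(n), 2))
    for mask in itertools.product([0, 1], repeat=len(pairs)):
        R = np.zeros((n, n), dtype=int)
        for (i, j), m in zip(pairs, mask):
            R[i, j] = R[j, i] = m
        yield R

# Toy stand-in for a float-entropy-like score: related pairs are penalised
# when they disagree, unrelated pairs carry a small fixed cost, so the score
# is low only when R matches the correlations actually present in the data.
def toy_score(R, state):
    n = len(state)
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if R[i, j]:
                score += abs(int(state[i]) - int(state[j]))  # related but disagreeing
            else:
                score += 0.25                                 # cost of declaring no relationship
    return score

# "Expected" score over typical states, by analogy with EFE being an
# expectation over typical data for the system.
states = [sample_state() for _ in range(500)]
results = [(np.mean([toy_score(R, s) for s in states]), R)
           for R in candidate_relationships(4)]
results.sort(key=lambda t: t[0])

best_expected, best_R = results[0]
print("Lowest expected score:", round(best_expected, 3))
print("Relationship matrix achieving it:\n", best_R)
```

Running this, the minimising relationship matrix relates exactly the pairs (0,1) and (2,3), i.e., the structure the sampled data was biased towards, while other choices give clearly higher expected scores, mirroring the isolation of relationship parameters described above.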
Keywords: float entropy; consciousness and relationships; typical data; structures implied by neural networks
MDPI and ACS Style

Mason, J.W.D. From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy 2019, 21, 60. https://doi.org/10.3390/e21010060
