Towards a Quantitative Understanding of Agency

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Entropy and Biology".

Deadline for manuscript submissions: closed (20 January 2022) | Viewed by 20908

Special Issue Editors

Dr. Ryota Kanai
Guest Editor
Araya Inc., Tokyo 107-6024, Japan
Interests: information; consciousness; neuroscience; AI; agency

Dr. Pablo A. Morales
Guest Editor
Araya Inc., Tokyo 107-6024, Japan
Interests: curved spacetime QFT; topological field theories; stochastic thermodynamics; information geometry; complex systems

Dr. Fernando E. Rosas
Guest Editor
Faculty of Medicine, Department of Brain Sciences, Imperial College London, London SW7 2AZ, UK
Interests: information theory; complexity; emergence; computational neuroscience; mental health

Special Issue Information

Dear Colleagues,

The concept of an agent as an individualized entity that can act and has goals is an abstraction from living organisms that also encompasses robots and virtual creatures. This Special Issue will focus on novel approaches to understanding the foundations of agency and individuality at the interface between biology and computer science, including extensions of the following lines of research:

  • Formal classifications of agent–environment interactions as seen from an external observer—including the notions of non-trivial informational closure, autonomy, morphological computation, relevant information, stochastic thermodynamics with feedback, and semantic information.
  • Measures used by Bayesian or reinforcement learning agents to internally guide their behaviour—including intrinsic motivation, active inference, empowerment, and predictive information maximization. 
  • Classification of the dynamics of the physical components of an agent (e.g., Integrated Information Theory).
  • Identification of entities that constitute agents within a larger dynamical system (e.g., informational individuality and complete local integration).

Additionally, special consideration will be given to (i) efforts to elucidate the relationships between these various perspectives, (ii) attempts to formalize the role of internal modeling and “imagination,” and (iii) extensions related to novel quantitative tools such as Partial Information Decomposition or causal emergence.
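One of the quantities mentioned above, empowerment, admits a compact computational sketch: it is the channel capacity from an agent's actions to its subsequent sensor states, and for discrete worlds it can be computed with the Blahut–Arimoto algorithm. The toy transition matrices and function name below are illustrative assumptions, not taken from any paper in this issue:

```python
import math

def empowerment(p_s_given_a, iters=200):
    """Channel capacity max_{p(a)} I(A; S') in bits, via Blahut-Arimoto.

    p_s_given_a[a][s] is the probability of next state s given action a.
    """
    n_a = len(p_s_given_a)
    n_s = len(p_s_given_a[0])
    p_a = [1.0 / n_a] * n_a  # start from a uniform action distribution
    for _ in range(iters):
        # marginal distribution over next states under the current p(a)
        p_s = [sum(p_a[a] * p_s_given_a[a][s] for a in range(n_a))
               for s in range(n_s)]
        # reweight each action by exp of its KL divergence to the marginal
        w = []
        for a in range(n_a):
            d = sum(p_s_given_a[a][s] * math.log(p_s_given_a[a][s] / p_s[s])
                    for s in range(n_s) if p_s_given_a[a][s] > 0)
            w.append(p_a[a] * math.exp(d))
        z = sum(w)
        p_a = [x / z for x in w]
    # mutual information I(A; S') at the capacity-achieving p(a), in bits
    p_s = [sum(p_a[a] * p_s_given_a[a][s] for a in range(n_a))
           for s in range(n_s)]
    return sum(p_a[a] * p_s_given_a[a][s]
               * math.log2(p_s_given_a[a][s] / p_s[s])
               for a in range(n_a) for s in range(n_s)
               if p_s_given_a[a][s] > 0)

# two actions that deterministically reach two distinct states: 1 bit
print(empowerment([[1.0, 0.0], [0.0, 1.0]]))  # → 1.0
# actions that make no difference to the environment: 0 bits
print(empowerment([[0.5, 0.5], [0.5, 0.5]]))  # → 0.0
```

The two calls show the intended reading: empowerment is high when an agent's actions reliably steer the environment into distinguishable states, and zero when they leave no trace.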

Dr. Ryota Kanai
Dr. Pablo A. Morales
Dr. Fernando E. Rosas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • individuality and boundaries
  • goal directedness
  • perception–action loops
  • causal emergence
  • Markov blankets
  • intrinsic motivation
  • empowerment
  • autopoiesis
  • non-equilibrium thermodynamics
  • Bayesian/free energy agents
  • non-trivial information closure
  • integrated information theory
  • partial information decomposition

Published Papers (4 papers)


Research

26 pages, 2272 KiB  
Article
Metacognition as a Consequence of Competing Evolutionary Time Scales
by Franz Kuchling, Chris Fields and Michael Levin
Entropy 2022, 24(5), 601; https://doi.org/10.3390/e24050601 - 26 Apr 2022
Cited by 12 | Viewed by 6626
Abstract
Evolution is full of coevolving systems characterized by complex spatio-temporal interactions that lead to intertwined processes of adaptation. Yet, how adaptation across multiple levels of temporal scales and biological complexity is achieved remains unclear. Here, we formalize how evolutionary multi-scale processing underlying adaptation constitutes a form of metacognition flowing from definitions of metaprocessing in machine learning. We show (1) how the evolution of metacognitive systems can be expected when fitness landscapes vary on multiple time scales, and (2) how multiple time scales emerge during coevolutionary processes of sufficiently complex interactions. After defining a metaprocessor as a regulator with local memory, we prove that metacognition is more energetically efficient than purely object-level cognition when selection operates at multiple timescales in evolution. Furthermore, we show that existing modeling approaches to coadaptation and coevolution—here active inference networks, predator–prey interactions, coupled genetic algorithms, and generative adversarial networks—lead to multiple emergent timescales underlying forms of metacognition. Lastly, we show how coarse-grained structures emerge naturally in any resource-limited system, providing sufficient evidence for metacognitive systems to be a prevalent and vital component of (co-)evolution. Therefore, multi-scale processing is a necessary requirement for many evolutionary scenarios, leading to de facto metacognitive evolutionary outcomes.
(This article belongs to the Special Issue Towards a Quantitative Understanding of Agency)

18 pages, 590 KiB  
Article
Naturalising Agent Causation
by Henry D. Potter and Kevin J. Mitchell
Entropy 2022, 24(4), 472; https://doi.org/10.3390/e24040472 - 28 Mar 2022
Cited by 8 | Viewed by 6963
Abstract
The idea of agent causation—that a system such as a living organism can be a cause of things in the world—is often seen as mysterious and deemed to be at odds with the physicalist thesis that is now commonly embraced in science and philosophy. Instead, the causal power of organisms is attributed to mechanistic components within the system or derived from the causal activity at the lowest level of physical description. In either case, the ‘agent’ itself (i.e., the system as a whole) is left out of the picture entirely, and agent causation is explained away. We argue that this is not the right way to think about causation in biology or in systems more generally. We present a framework of eight criteria that we argue, collectively, describe a system that overcomes the challenges concerning agent causality in an entirely naturalistic and non-mysterious way. They are: (1) thermodynamic autonomy, (2) persistence, (3) endogenous activity, (4) holistic integration, (5) low-level indeterminacy, (6) multiple realisability, (7) historicity, (8) agent-level normativity. Each criterion is taken to be dimensional rather than categorical, and thus we conclude with a short discussion on how researchers working on quantifying agency may use this multidimensional framework to situate and guide their research.
25 pages, 28236 KiB  
Article
Quantifying Reinforcement-Learning Agent’s Autonomy, Reliance on Memory and Internalisation of the Environment
by Anti Ingel, Abdullah Makkeh, Oriol Corcoll and Raul Vicente
Entropy 2022, 24(3), 401; https://doi.org/10.3390/e24030401 - 13 Mar 2022
Cited by 1 | Viewed by 1895
Abstract
Intuitively, the level of autonomy of an agent is related to the degree to which the agent’s goals and behaviour are decoupled from the immediate control by the environment. Here, we capitalise on a recent information-theoretic formulation of autonomy and introduce an algorithm for calculating autonomy in a limiting process of time step approaching infinity. We tackle the question of how the autonomy level of an agent changes during training. In particular, in this work, we use the partial information decomposition (PID) framework to monitor the levels of autonomy and environment internalisation of reinforcement-learning (RL) agents. We performed experiments on two environments: a grid world, in which the agent has to collect food, and a repeating-pattern environment, in which the agent has to learn to imitate a sequence of actions by memorising the sequence. PID also allows us to answer how much the agent relies on its internal memory (versus how much it relies on the observations) when transitioning to its next internal state. The experiments show that specific terms of PID strongly correlate with the obtained reward and with the agent’s behaviour against perturbations in the observations.
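The PID decomposition used in the abstract above can be illustrated on a minimal case. The sketch below is a toy example, not code from the paper: it computes the Williams–Beer I_min redundancy for two sources about a target, and derives the synergy term. On an XOR target, the full one bit of mutual information is purely synergistic, which is exactly the kind of structure PID is designed to expose:

```python
import math
from collections import defaultdict

def marginal(p, idx):
    """Marginalise a joint distribution dict over the variable indices idx."""
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return m

def mi(p, a_idx, b_idx):
    """Mutual information I(A; B) in bits between two index groups."""
    pa, pb = marginal(p, a_idx), marginal(p, b_idx)
    pab = marginal(p, a_idx + b_idx)
    return sum(v * math.log2(v / (pa[k[:len(a_idx)]] * pb[k[len(a_idx):]]))
               for k, v in pab.items() if v > 0)

def redundancy_imin(p, y_idx=2):
    """Williams-Beer I_min redundancy of sources X1, X2 (indices 0, 1) about Y."""
    py = marginal(p, (y_idx,))
    red = 0.0
    for (y,), py_val in py.items():
        specs = []
        for src in (0, 1):
            px = marginal(p, (src,))
            pxy = marginal(p, (src, y_idx))
            s = 0.0
            for (x,), px_val in px.items():
                pxy_val = pxy.get((x, y), 0.0)
                if pxy_val > 0:
                    # specific information: p(x|y) * log2( p(y|x) / p(y) )
                    s += (pxy_val / py_val) * math.log2((pxy_val / px_val) / py_val)
            specs.append(s)
        red += py_val * min(specs)  # redundancy takes the weaker source per outcome
    return red

# XOR: Y = X1 ^ X2 with uniform inputs -> all information is synergistic
p_xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
red = redundancy_imin(p_xor)
syn = (mi(p_xor, (0, 1), (2,)) - mi(p_xor, (0,), (2,))
       - mi(p_xor, (1,), (2,)) + red)
print(round(red, 6), round(syn, 6))  # → 0.0 1.0
```

In the paper's setting the target would be the agent's next internal state and the sources its current memory and its observation; the unique-information terms then quantify reliance on memory versus reliance on the environment.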

23 pages, 2579 KiB  
Article
Quantifying the Autonomy of Structurally Diverse Automata: A Comparison of Candidate Measures
by Larissa Albantakis
Entropy 2021, 23(11), 1415; https://doi.org/10.3390/e23111415 - 28 Oct 2021
Cited by 2 | Viewed by 3276
Abstract
Should the internal structure of a system matter when it comes to autonomy? While there is still no consensus on a rigorous, quantifiable definition of autonomy, multiple candidate measures and related quantities have been proposed across various disciplines, including graph theory, information theory, and complex systems science. Here, I review and compare a range of measures related to autonomy and intelligent behavior. To that end, I analyzed the structural, information-theoretical, causal, and dynamical properties of simple artificial agents evolved to solve a spatial navigation task, with or without a need for associative memory. By contrast to standard artificial neural networks with fixed architectures and node functions, here, independent evolution simulations produced successful agents with diverse neural architectures and functions. This makes it possible to distinguish quantities that characterize task demands and input-output behavior, from those that capture intrinsic differences between substrates, which may help to determine more stringent requisites for autonomous behavior and the means to measure it.
