
How Do Living Systems Create Meaning?

Independent Researcher, 11160 Caunes Minervois, France
Allen Discovery Center at Tufts University, Medford, MA 02155, USA
Author to whom correspondence should be addressed.
Philosophies 2020, 5(4), 36;
Submission received: 26 September 2020 / Revised: 6 November 2020 / Accepted: 6 November 2020 / Published: 11 November 2020
(This article belongs to the Special Issue Renegotiating Disciplinary Fields in the Life Sciences)


Meaning has traditionally been regarded as a problem for philosophers and psychologists. Advances in cognitive science since the early 1960s, however, broadened discussions of meaning, or more technically, the semantics of perceptions, representations, and/or actions, into biology and computer science. Here, we review the notion of “meaning” as it applies to living systems, and argue that the question of how living systems create meaning unifies the biological and cognitive sciences across both organizational and temporal scales.

1. Introduction

The “problem of meaning” in biology can be traced to the Cartesian doctrine that animals are automata, and that humans are automata (bodies) controlled by spirits (minds). As Descartes required minds to generate meaning, meaning under this doctrine is both a strictly human and a strictly psychological phenomenon. The general turn from metaphysics toward language in mid-20th-century philosophy further reinforced this Cartesian division by localizing the study of meaning within the study of language, understood as human natural language characterized by a recursive grammar, an arbitrarily-large lexicon, and an associated collection of interpretative practices. Human cognition was widely assumed within the representationalist cognitive science that largely replaced behaviorism from the 1960s onward to replicate this tripartite structure of public natural language, either because it was implemented in an underlying “language of thought” or “mentalese” having a similarly expressive syntax, semantics, and pragmatics [1], or because it was implemented by natural language itself [2]. Cognitive abilities considered to be uniquely human, including symbolic “Process-2” cognition [3], analogical reasoning [4], and “mental time travel” to the past via episodic memory or the future via prospective memory [5], in particular, were all considered to be functionally dependent upon language.
During this same time, however, studies in comparative psychology demonstrated robust communication abilities, and related abilities including tool use and cultural transmission of knowledge, in animals that appear, on all assays thus far, to lack a recursive syntax [6]. Superficially similar communication abilities have since been demonstrated in microbes [7] and plants [8]. If “meaning” is restricted to the full combination of syntactic, semantic, and pragmatic aspects of meaning characteristic of human languages, the communications sent and received by these nonhuman organisms must be regarded as devoid of meaning, as Descartes presumably would have regarded them, and as communications sent and received by artificial intelligence (AI) systems are regarded by many today [9,10,11,12]. Hauser, Chomsky, and Fitch [6] suggest, on the contrary, that the faculty of language can be construed more broadly, and that both sensory–motor and conceptual–intentional aspects of “meaning” can be dissociated from the recursive syntax with which they are coupled in human languages. This dissociation allows nonhuman communication abilities to be viewed as meaningful even though not “grammatical” in the sense of having a compositional semantics enabled by a recursive syntax [13,14,15]. It is this broader, non-Cartesian conception of meaning with which we will be concerned in what follows.
Opposition to Cartesian assumptions and to human-like, grammatical language as paradigmatic in cognitive science has coalesced over the past three decades into the embodied-embedded-enactive-extended cognition (4E; see [16] for an overview) and biosemiotic (see [17] for an overview) movements. Both take considerable inspiration from Maturana and Varela’s theories of autopoiesis [18] and embodiment [19], and both are broadly consistent with cognition being a fundamental characteristic of living systems [14,15,20,21,22,23,24]. There is, however, considerable variation among non-Cartesian conceptions of meaning. Following Gibson [25], ecological realists locate meanings in the organism- or species-specific environment, where they have an effectively nomological function [26,27]. “Radical” embodied cognition locates meanings in the structure and dynamic capabilities of the body [28], while enactive cognition locates meanings in the organism’s inclination and ability to act on the environment [29]; in both cases, meanings characterize structural and functional, but in most cases explicitly non-representational, capacities of an embodied system (see also [30,31] for reviews of these approaches from an AI/robotics perspective, see [32] for a dynamical-systems approach, and see [33] for a 4E approach that can be interpreted representationally). Biosemiotic approaches are, in contrast, concerned with physically implemented representations (“signs”) that carry meaning at one or more scales of biological organization.
Despite their differences, both Cartesian and non-Cartesian approaches to cognition generally localize meaning to individual organisms. Even culturally shared meanings, e.g., the particular syntactic markers, word meanings, and pragmatics shared by competent users of some particular human natural language, are assumed to be individually comprehended by every member of the language-using community. Such meanings are generally acknowledged to have a developmental history, typically involving learning in a community context, but outside of evolutionary psychology, they are seldom considered to have significant evolutionary histories. This focus on the individual and on learning is understandable from the perspective of (narrowly-construed) language and its pervasive influence on 20th-century thought. While the question of how the human language system, specifically its syntactic components [34], evolved is of theoretical interest, it is hard to imagine how individual, conventionalized word meanings, e.g., “<cat> means cat” could have significant evolutionary histories.
In contrast to this focus on individuals and learning, we advance in this paper a deeply evolutionary approach to meanings and suggest, consonant with the theme of this Special Issue, that the construction of meanings presents a common and pressing question for all disciplines in the Life Sciences. We focus, in particular, on three questions that are foundational to the study of meaning:
  • How do living systems distinguish between components of their environments, considering some to be “objects” worthy of attention and others to be “background” that is safely ignored?
  • How do living systems switch their attentional focus from one object to another?
  • How do living systems create and maintain memories of past events, including past perceptions and actions?
These questions all presuppose an answer to a fourth question:
How do living systems reference their perceptions, actions, and memories to themselves?
We use the term “living system” instead of “organism” in formulating these questions to emphasize their generality: we ask these questions of living systems at all scales, from signal transduction pathways within individual cells to communities, ecosystems, and extended evolutionary lineages. In addressing these questions, we build on our previous work on memory in biological systems [35] and on evolution and development as informational processes operating on multiple scales [36,37].
In what follows, we address these questions in turn, reviewing in each case both theoretical considerations and empirical results for systems at organizational scales ranging from molecular interaction networks to the evolutionary history of the biosphere as a whole, and time scales ranging from the picosecond (ps) scale of molecular conformational change to the billion-year (1000 MY) scale of planetary-scale evolutionary processes. To render these questions precise, we borrow from physics the idea of a reference frame (RF), a standard or coordinate system that assigns units of measurement to observations, and thereby renders them comparable with other observations. Clocks, meter sticks, and standardized masses are canonical physical RFs [38]. We show in Section 2 that organisms must implement internal RFs to enable comparability between observations, and make explicit the role of RFs in predictive-coding models of cognition [39,40,41,42,43,44,45]. The three sections that follow address questions (1)–(3) above, beginning in Section 3 with the function of RFs in object segregation, categorization, and identification. These mechanisms, which are best characterized in human neurocognitive networks, reveal how living systems determine what is potentially significant in their environments, and allow a basic characterization of the limitations on what living systems are capable of considering significant. We then consider in Section 4 the question of attention, a phenomenon also best characterized in humans. We suggest that the fundamental dichotomy between proactive and reactive attention systems found in mammals [46,47] can be extended to all scales, a suggestion consistent with Friston’s characterization of proactive and reactive modes in active inference systems [41,42]. The question of memory is addressed in Section 5, where we examine the neurofunctional concept of an engram [48,49] and how it relates to biological memories at other scales [35]. 
Specifically, we ask whether the phenomenon of memory change during reconsolidation after reactivation [50,51] characterizes biological memories in general. We suggest that, while memories may be stored either internally or externally, the mechanisms and consequences of access are the same. The distinction between “individual” and “public” memories is, therefore, unsustainable. Object identification, attention, and memory are brought together in Section 6 in the construction of a self-representation to which meanings are associated. We explicitly consider the self-representations implemented by humans [52,53,54,55], and suggest that locating the self-representations operative in other organisms and in sub- or supra-organismal systems represents a key challenge to the Life Sciences [24]. We integrate these themes from an evolutionary perspective in Section 7, suggesting that meaning is itself a multi-scale phenomenon that characterizes all living systems, from molecular processes to Life on Earth as a whole. The fundamental goal of the Life Sciences is, from this perspective, to understand how living systems create meaning.

2. Meanings Require Reference Frames

Bateson famously defined a “unit” of information as a “difference which makes a difference” ([56] p. 460). As Roederer points out, information so defined is actionable or pragmatic; it “makes a difference” for what an organism can do [57]. It is, therefore, information that is meaningful to the organism in a context that requires or affords an action, consistent with sensory-motor meaning being the most fundamental component of language as broadly construed [6]. It is in this fundamental sense that meaning is “enactive” [19].
A “difference which makes a difference” must, clearly, be recognized as a difference, i.e., as being different from something else. The “something else” that allows differences to be recognized is an RF. Choosing an RF is choosing both a kind of difference to be recognized, e.g., a difference in size, shape, color, or motion, and a specific reference value, e.g., this big or that shape, that the difference is a difference from. Any discussion of differences assumes one or more RFs. Our goal in this section is to make the nature of these RFs fully explicit. With an understanding of how specific kinds of RFs enable specific kinds of meanings, we can approach questions about the evolution, development, and differentiation of meanings as questions about the evolution, development, and differentiation of RFs.

2.1. System—Environment Interaction as Information Exchange

While definitions of “life” and “living system” are numerous, varied, and controversial [58,59,60,61,62], all agree that every living system exists in interaction with an environment. If Life on Earth as a whole is considered a living system [35,37,63,64], its environment is by definition abiotic; for all other living systems (on Earth), the environment has both abiotic and living components. Definitions of life agree, moreover, that at any given instant t, the state |S⟩ of a living system S is distinct, and distinguishable, from the state |E⟩ of its environment E (we borrow Dirac’s |·⟩ notation for states from quantum theory: |X⟩ is the state of some system X). This condition of state distinguishability, called “separability” in physics, guarantees that the interaction between a living system and its environment can be viewed, without loss of generality, as an exchange of classical information [65,66,67]. This exchange is symmetric at every instant t (technically, within every time interval Δt small enough that neither system nor environment significantly changes its size or composition during Δt): every bit (binary unit) of information that the living system obtains from its environment at t is balanced by a bit transferred by the living system to its environment at t. We are interested in interactions that provide S with actionable information about E; such interactions are thermodynamically irreversible and, as such, have a minimum energetic cost of ln2 k_B T, with k_B Boltzmann’s constant and T the temperature [68,69]. In this case, incoming bits can be considered “observations” and outgoing bits can be considered “actions”, bearing in mind that in this classical, thermodynamic sense of information, obtaining free energy from the environment is “observing” it and radiating waste heat into the environment is “acting” on it.
As the value of k_B in macroscopic units is small (1.38 × 10⁻²³ Joules/Kelvin), bit-sequence exchange can appear continuous when temperatures are high and time is coarse-grained, as is typically the case in biological assays.
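As a back-of-the-envelope illustration of the Landauer bound ln2 k_B T quoted above, the per-bit cost can be computed directly; the choice of a physiological temperature of roughly 310 K is our assumption, not a value from the text:

```python
import math

# Landauer bound: minimum energy to irreversibly record one bit.
k_B = 1.380649e-23  # Boltzmann's constant, J/K
T = 310.0           # approximate physiological temperature, K (assumption)

E_bit = math.log(2) * k_B * T
print(f"ln2 k_B T at 310 K ≈ {E_bit:.2e} J")  # ≈ 2.97e-21 J
```

The resulting figure, a few zeptojoules per bit, makes concrete why single bit exchanges are invisible at the coarse-grained temporal and energetic resolution of typical biological assays.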
Under these conditions, we can ask how much a living system S can learn, i.e., what information S can obtain by observation, about the internal structure or dynamics of its environment E, and, conversely, how much E can learn about the internal structure or dynamics of S. The answer is that what can be learned at t is strictly limited to the classical information actually exchanged at t. This information is, for any pair of finite, separable physical systems, regardless of their size or complexity, strictly insufficient to fully determine the internal structure or dynamics of either interaction partner [70,71]; see [72,73] for informal discussions of this point. The information obtainable by either party by observation is, therefore, conditionally independent of the internal structure or dynamics of the other party; these can, in principle, vary arbitrarily without affecting the observations obtained at any given t. The interaction between any S and its E thus satisfies the Markov blanket (MB) condition: the information exchanged at t is the information encoded by an identifiable set of “boundary” states that separate the internal states of S from the internal states of E [33,41,42,74,75].
The idea that the observational outcomes obtainable by any observer are conditionally independent of the internal structure and dynamics of the observed environment has, of course, a long philosophical history, dating at least from Plato’s allegory of the cave and forming the basis of Empiricist philosophy since Hume [76]. This idea challenges any objective ontology; pairwise interactions between separable systems—between any S and its E—are provably independent of further decompositions of either system in both quantum and classical physics [72]. As Pattee [77] puts it, the “cuts” that separate the observed world of any system into “objects” are purely epistemic and hence relative to the system making the observations. Understanding what “objects” S “sees” as components of its E thus requires examining the internal dynamics of S. These internal dynamics, together with the system—environment interaction, completely determine what environmental “objects” S is capable of segregating from the “background” of E and identifying as potentially meaningful. Whether it is useful to S to segregate “objects” from “background” in this way is determined not by the internal dynamics of S, but by those of E. Meaning is thus a game with two players, not just one. It is in this sense that it is fundamentally “embedded” (again see [19]). In the language of evolutionary theory, it is always E that selects the meanings, or the actions they enable, that have utility in fact for S, and culls those that do not.

2.2. Meaning for Escherichia coli: Chemotaxis

What does a living system consider meaningful? Before turning to any division of the environment into objects and background, let us consider meanings assigned to the environment as a whole.
Bacterial chemotaxis has long served as a canonical example of approach/avoidance behavior and hence of the assignment of valence to environmental stimuli that “make a difference” to the bacterium (see [14,78] for recent reviews). Chemotaxis receptors respond to free ligand and have sufficient on-receptor short-term memory to determine local ligand gradients (see [79] for examples of cells that locally amplify gradients). The fraction of receptors bound at t indicates an environmental state, either |good⟩ or |bad⟩, at t and directly drives either “approach” or “avoid” motility. No division of E into “objects” is either possible or necessary in this system.
While it does not require object segregation, the assignment of valence to |E⟩ during bacterial chemotaxis does require an RF to distinguish between |good⟩ and |bad⟩. In E. coli, this RF is defined by the default phosphorylation state R_0 of the messenger protein CheY, i.e., by the default value of the concentration ratio R = [CheY-P]/[CheY] [78]. Values of R greater or less than R_0 encode the “pointer states” (again borrowing terminology from physics) |R⟩ = |bad⟩ or |R⟩ = |good⟩ and induce avoidance or approach, respectively. The value R_0 is “set” by the balance between kinases and phosphatases acting on CheY (with CheY acetylation as a secondary modulator; see [80] for discussion). This balance is, in turn, dependent on the overall state of the E. coli gene regulation network (GRN) as discussed further in Section 6 below.
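The one-bit character of this RF can be sketched in a few lines. The numeric value of R_0 below is an illustrative placeholder, not a measured ratio; the mapping of the two pointer states to motor behavior follows the standard account on which phospho-CheY promotes tumbling:

```python
# Toy sketch of the one-bit CheY pointer state. R_0 is a hypothetical
# placeholder value, not a measured quantity. Higher R (more CheY-P)
# promotes tumbling, so it is mapped to "avoid" here.
R_0 = 1.0  # hypothetical default value of R = [CheY-P]/[CheY]

def pointer_state(chey_p: float, chey: float) -> str:
    """Return the one-bit valence encoded by R relative to the set point R_0."""
    R = chey_p / chey
    return "avoid" if R > R_0 else "approach"

print(pointer_state(chey_p=1.4, chey=1.0))  # "avoid": R above the set point
print(pointer_state(chey_p=0.6, chey=1.0))  # "approach": R below the set point
```

The point of the sketch is that the RF is nothing over and above the set point: all valence information is carried by the sign of the deviation from R_0.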
As noted earlier, the concept of an RF was originally developed within physics to formalize the pragmatic use of clocks, measuring sticks, and other tools employed to make measurements [38]. In E. coli, however, we see an internal state being employed as an RF. This situation is completely general: any use of an external system as an RF presupposes the existence of an internal state that functions as an RF [81]. An external clock, for example, can only register the passage of time for an observer with an internal RF that enables comparing the currently-observed state of the clock to a remembered past state. The sections that follow will illustrate this for progressively more complex RFs. A formal construction of internal RFs given a quantum-theoretic description of the system-environment interaction is provided in [66].

2.3. Implementing RFs Requires Energy

The CheY phosphorylation system in E. coli effectively encodes one bit of information: the pointer state |R⟩ can be either |good⟩ or |bad⟩. Encoding this bit requires energy: at least ln2 k_B T as discussed above. The energy needed is in fact much larger than this, as the protein CheY itself, the kinases and phosphatases that act on it, and the receptors that provide input to the CheY phosphorylation/dephosphorylation cycle must all be synthesized and eventually degraded, and the intracellular environment that supports these interactions must be maintained. All of these processes are thermodynamically irreversible. The energy to drive these processes must be supplied by E. coli’s metabolic system. It derives, ultimately, from the external environment. The external environment also absorbs, again irreversibly, the waste heat that these biochemical processes generate.
The energetics of the CheY system illustrate a general point: implementing internal RFs requires energetic input from the environment. This energetic input is necessarily larger than the energy required to change the pointer state associated with the RF [66]. Any RF is, therefore, a dissipative system that consumes environmental free energy and exhausts waste heat back to the environment. Every RF an organism implements requires dedicated metabolic resources. In mammals, for example, the total energy consumption of the brain scales roughly linearly with the number of neurons, indicating an approximately constant per-neuron energy budget [82]. Energy usage is highest in cortical neurons, roughly 10¹¹ k_B T per neuron per second. Using rough estimates of 10,000 synapses per neuron receiving data at 10 Hz, on the order of 10⁶ k_B T, spread out over multiple maintenance, support, and internal computational processes, is required to process one synaptically-transferred bit.
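The per-bit figure above follows from simple arithmetic on the rough values quoted in the text:

```python
# Order-of-magnitude check of the per-bit estimate; all figures are the
# rough values quoted in the text, not precise measurements.
neuron_budget_kBT_per_s = 1e11  # cortical neuron energy use, in k_B T per second
synapses_per_neuron = 1e4       # ~10,000 synapses per neuron
bits_per_synapse_per_s = 10     # ~10 Hz input per synapse

bits_per_s = synapses_per_neuron * bits_per_synapse_per_s  # 1e5 bits/s
cost_per_bit = neuron_budget_kBT_per_s / bits_per_s
print(f"≈ {cost_per_bit:.0e} k_B T per synaptically-transferred bit")  # ≈ 1e+06
```

The roughly million-fold gap between this figure and the ln2 k_B T Landauer minimum is the cost of maintaining the machinery that implements the RF, not of flipping the bit itself.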

2.4. RFs Are State-Space Attractors

Following a perturbation, the E. coli phosphorylation ratio R returns to its default value R_0. Stability against perturbations within some suitable dynamic range is clearly required for any state to serve as an RF. Hence, RFs are state-space attractors. An RF’s dimensionality as an attractor defines the dimensionality of the “difference” the RF makes in the behavior of the system’s state vector. An RF is, on this definition, a point or collection of points (e.g., a limit cycle) in the state space that separates state-vector components associated with different behaviors. In the case of |R⟩, the dimensionality is one, corresponding to 1 bit of information: the value of R, either greater or less than R_0, determines whether the flagellum spins clockwise (tumble/avoid) or counter-clockwise (run/approach).
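The attractor picture can be made concrete with a minimal one-dimensional relaxation model, dR/dt = −k (R − R_0); the rate constant and time step here are illustrative choices, not fitted parameters:

```python
# Minimal sketch of R_0 as a one-dimensional point attractor:
# dR/dt = -k (R - R_0). The rate constant k and time step dt are
# illustrative values, not measured kinetic parameters.
R_0, k, dt = 1.0, 0.5, 0.1

def relax(R: float, steps: int) -> float:
    """Euler-integrate the relaxation of R back toward the set point R_0."""
    for _ in range(steps):
        R -= k * (R - R_0) * dt
    return R

R_final = relax(2.0, 200)         # perturb to R = 2.0, then let it settle
print(abs(R_final - R_0) < 1e-3)  # True: the perturbation decays away
```

Any transient deviation from R_0 decays exponentially, which is exactly the stability property that lets the default value serve as a reference against which deviations are read as meaningful.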

2.5. RFs Set Bayesian Expectations

As Friston and colleagues have shown [41,42,74,75], the behavior of even unicellular organisms can generically be described in Bayesian terms. In this language, an RF such as the default value R_0 of [CheY-P]/[CheY] defines an expectation value, i.e., a “prior probability” for the sensed state of a valence-neutral environment. Avoidant chemotaxis in E. coli is an instance of active inference that restores a default expectation; approach chemotaxis is expectation-dependent goal pursuit. Short-term memory for environmental state is implemented by receptor methylation [78], while longer-term revision of expectations is implemented by modulating R_0; we discuss the roles of RFs as memories in active inference systems more generally in Section 5 below.
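The Bayesian reading can be illustrated with a single update step, in which the RF supplies the prior over environmental valence and a receptor-binding observation revises it. The prior and likelihood numbers below are purely illustrative assumptions:

```python
# Hedged illustration of the RF-as-prior reading: one Bayes update of the
# probability that the environment is "good", given a binding observation.
# The prior and likelihoods are illustrative numbers, not measured values.
prior_good = 0.5         # prior expectation of a valence-neutral environment
p_bind_given_good = 0.8  # P(receptor binding | environment good)
p_bind_given_bad = 0.3   # P(receptor binding | environment bad)

def posterior_good(bound: bool) -> float:
    """P(good | binding observation), by Bayes' rule."""
    p_obs_good = p_bind_given_good if bound else 1 - p_bind_given_good
    p_obs_bad = p_bind_given_bad if bound else 1 - p_bind_given_bad
    joint_good = prior_good * p_obs_good
    joint_bad = (1 - prior_good) * p_obs_bad
    return joint_good / (joint_good + joint_bad)

print(round(posterior_good(True), 3))   # 0.727: binding raises the expectation
print(round(posterior_good(False), 3))  # 0.222: absence of binding lowers it
```

In this picture, short-term memory (receptor methylation) corresponds to accumulating such updates within an encounter, while modulating R_0 corresponds to revising the prior itself.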

2.6. Only Meaningful Differences Are Detectable

When interaction is viewed as information exchange and meaningful differences are viewed as physically encoded by RFs, two aspects of meaning in biological systems become clear:
  • Detecting differences with respect to internal RFs is energetically expensive.
  • A difference is detected when at least one bit flips—at least one component of a pointer state changes.
Any detectable difference, therefore, makes a difference, both to the value of an “informative” pointer state and to the energy budget of the organism doing the detecting. Employing Bateson’s criterion, all detectable differences are meaningful. Organisms do not waste energy acquiring information that is not actionable. This is not a choice, or a matter of optimization: acquiring information without changing state, and thereby acting on the environment, is physically impossible.

2.7. The Evolution of Meaning Is the Evolution of RFs

Consistent with its primarily biochemical, as opposed to mechanical or manipulative, mode of operating within its world [83], E. coli encodes RFs and associated pointer states, such as R_0 and |R⟩, that can be interpreted as tastes. While E. coli is capable of locomotion using its flagellum, chemotaxis is opportunistic. The tumbling “avoid” motion due to clockwise flagellar rotation is directionally random; E. coli does not appear to encode any stable spatial RF. Pilus extension during “mate seeking” in E. coli similarly appears directionally random, and is limited in distance by the pilus polymerization mechanism [84]. The distal tip of the pilus carries an adhesin, but it is unknown whether it is specifically recognized by the recipient cell; hence it is unknown whether E. coli encodes an RF for “mate” or “conspecific”. The E. coli cell cycle is opportunistic, and it has no circadian clock (although it is metabolically capable of supporting a heterologous clock [85]); hence, E. coli appears to have no endogenous time RF. The world of “taste” may, for E. coli, be the whole world.
How did evolution get from systems like E. coli that are restricted to tasting an environment that they may move in, but do not otherwise detect or represent, to systems like humans equipped with RFs for three-dimensional space and linear time, as well as RFs capable of identifying thousands of individual objects and tracking their state changes along multiple dimensions? This is the fundamental question for an evolutionary theory of cognition, one that goes far beyond what is standardly called “evolutionary psychology” [86,87]; see [88] for one way of asking this broader question. Addressing this question, we propose, requires understanding the evolution of RFs. As with other evolutionary questions, the available empirical resources are the implementations of RFs across a phylogenetically broad sample of extant organisms and the developmental and/or regenerative processes that construct these RFs.
Consistent with their role in detecting and interpreting states of the external environment, and as discussed in Section 6 below, the internal environment as well, one can expect cellular-scale RFs to be implemented by signal transduction pathways. One approach to the question of RF evolution is, therefore, to understand the evolution of signal transduction pathways (e.g., two-component pathways in microbes [89], or the Wnt [90] or MAPK [91] pathways in eukaryotes; see [92] for a general review). Understanding signal transduction in a “bag of molecules” is, however, not enough: the evolution of the organizing framework provided by the cytoplasm, cytoskeleton, and membrane—what we have previously termed the “architectome” [35]—also contributes to the evolution of RFs, particularly spatial RFs as discussed in Section 3 below. As systems become multicellular, we can expect RFs to be implemented by cell–cell communication systems including paracrine [93,94], endocrine [95,96], non-neural bioelectric [97,98,99], and neural [100,101,102] systems. Hence, the question of RF evolution incorporates the questions of morphological and physiological evolution, and, at larger scales, the evolution of symbiotic networks (including holobionts [103,104]), communities, and ecosystems [36,37]. At every level, RFs specify actionability and therefore meaning.

3. How Are Objects Segregated and Identified?

3.1. From Multi-Component States to Objects

While the chemotactic pathway in E. coli shows no evidence of specific object recognition as opposed to environmental state recognition, microbial predators such as Myxococcus xanthus appear capable of recognizing and differentially responding to both kin and non-kin conspecifics as well as a wide range of bacterial and fungal prey species (see [7,105] for recent reviews). While the RFs that distinguish cells to be killed (non-kin conspecifics or prey) from cells to be cooperated with (kin conspecifics) have not been fully characterized, the general structure of microbial sensor and response systems [14] suggests that default phosphorylation ratios analogous to R_0 are likely candidates.
Does M. xanthus represent conspecifics or prey as separate objects, or are these just different, possibly localized, “features” of its environment? When a traveling paramecium encounters a hard barrier, backs up, and goes around it, does it represent the barrier as an object, or just a “hard part” of the environment? When an amoeba engulfs a bacterium, is the bacterium an object, or just “some of” the environment that happens to be tasty? These questions cannot currently be answered, and the first organism (more properly, lineage) capable of telling an “object” from a feature of its environment remains unknown. From an intuitive perspective, an ability to distinguish “objects” from the environment in which they are embedded may come with the ability to specifically manipulate them. The sophisticated, stigmergic wayfinding signals used by social insects may be processed as environmental features, but eggs, prey items, agricultural forage, building materials, enemies, and even the dead bodies of colony-mates are manipulated in specific, stereotypical ways [106,107] and may be represented as objects. This association between manipulability and objecthood appears to hold for human infants [108,109]; that even artificial constructs such as road networks are intuitively considered “features” suggests that it holds for adult humans as well. Concluding that this association holds only for relatively advanced multicellular organisms may, however, not be justified; cells migrating within a multicellular body leave stigmergic messages for later-migrating cells to read [110,111] and cells of many types inspect their neighbors individually, killing those that fail to meet fitness criteria [112,113].

3.2. Objects as Reference Frames

Unlike M. xanthus, which has separate, functionally dissociable detection systems and hence separate RFs for “prey” and “edible prey components”, cephalopods, birds, and mammals are capable of both categorizing objects and identifying objects as persistent individuals across changes in their states. This ability allows distinct meanings to be assigned to the same object at the category, individual, and state levels of specificity. The state-independent components of an object appear, in this case, to serve collectively as an RF for a categorized individual during a perceptual encounter; the state-independent components of a persistent object similarly appear to serve as an RF for individual re-identification. If this is the case, the state-independent meaning of a categorized individual can be identified with the set of actions, including inferences, that it affords as a recognizable object distinct from the “rest of” the environment.
Object recognition has been studied most intensively in mammals, particularly humans, and is best understood in the visual modality (see [114,115,116,117,118] for general reviews, see [119] for a comparison of haptic and visual modalities, and see [120] for pointers to work in other mammals and birds). Object category recognition (“categorization”), e.g., recognizing a table as a table, as opposed to a specific, individual table encountered previously, is primarily feature-based, with “entry-level” categories corresponding to early-learned common nouns, e.g., “table” or “person” learned first and processed fastest [117]. Identifying specific individual objects, e.g., specific individual people, is also feature based, but requires the use of context and causal-history information in addition to features if similarly-featured competitors or opportunities for significant feature change are present [121,122]. The object recognition process generates an “object token” [115], an excitation pattern in medial temporal cortex that is maintained over the course of a perceptual encounter (i.e., 10 s to 100 s of seconds) and that can be, but is not necessarily, encoded more persistently as a component (an engram [48,49] or invariant [123]) of an episodic memory. Identifying a categorized object as a specific, known individual requires linking its current object token to at least one episodic-memory encoded object token.
An object that never changes state and hence affords no actions is not worth recognizing (highly symmetric objects are an exception, e.g., [124]). An object that does change state—even if it just changes its position in the body-centered coordinates (see below) of a mobile observer—must have features that do not change to enable recognition. The collection of unchanging features serves as a reference against which variation in the changing feature—the pointer state of interest—can be tracked. These unchanging features therefore constitute an RF for the object, at least during that perceptual encounter [66,81]. Such RFs may be extraordinarily rudimentary, and hence exploitable by competitors, predators, or potential prey through the use of mimicry, camouflage, lures, or other tactics. How humans or other animals construct stable object tokens out of sequences of encounter-specific RFs is not known, and constitutes a major open problem in cognitive and developmental psychology and in developmental and applied robotics [118]. Identifying individual objects as such across significant gaps in observation during which previously-stable features may have changed requires solving the Frame problem [125]. This problem is formally undecidable [126]; hence, only heuristic solutions are available. What heuristics are actually used by humans and how they are implemented remain unknown.
Consider again the use of an object such as a clock as an external RF. Not only does this require an internal time RF, it requires an RF to identify the clock as such, and indeed as the same clock that was consulted earlier. The clock can only be employed as a shared external RF by agents that share these internal RFs as well [81]. Apart from speculations about consciousness (e.g., [127,128,129]), physicists have historically been reluctant to comment on the internal structures of observers [130]. By clarifying the role of RFs in identifying objects and making measurements, biology and cognitive science may make significant contributions to physics.

3.3. Embedding Objects in Space

Representing either environmental features or objects as having distinct locations requires an RF for space. All cells, and all multicellular organisms, come equipped with a two-dimensional surface on which spatial layout can be defined: the cell membrane(s) that face the external environment. Locations on this surface are stabilized by the cytoskeleton at the cellular scale, and by cell–cell interactions and macroscopic architectural components such as skeletons at the multicellular scale. Physical, biochemical, and/or bioelectric asymmetries impose body axes on these surfaces [35,99,131]. Even E. coli has a body axis, with chemoreceptors on one end and flagella on the other.
Asymmetric placement of receptors and effectors on the surface facing the environment effectively embeds them in a two-dimensional body-centered coordinate system. Localized processing of information binds it to this coordinate system, allowing downstream systems to know “where” a signal is coming from [132]. The extent to which cytosolic signal transduction is spatially organized, e.g., by the cytoskeleton, at the cellular level remains largely unknown except in the case of neurons, where specific functions including exclusive-or [133] have been mapped to specific parts of the cell. In the neural metazoa (Ctenophores, Cnidarians and Bilaterians), the spatial organization of signal transduction is largely taken over by neurons. The ability of neurons to provide high-resolution point-to-point communication allows the internal replication of body-surface coordinate systems, e.g., in the somatosensory homunculus [134] or retinotopic visual maps [135]. Such replication is essential to fine motor control.
Locomotion and manipulation by appendages appear necessary for adding a third, radial dimension to body-centered coordinates. Stigmergic coordinate systems, for example, are meaningless without locomotor ability. Building a representation of space requires exploring space, even peripersonally. The construction of the mammalian hippocampal space representation from records of paths actually traversed reflects this requirement [136]. How such path-based spatial representations are integrated with stigmergic, celestial, or magnetic-compass based coordinate systems, and whether such integration is required to enable, e.g., long-distance migration, remain largely unexplored.
“Placing” objects in space ties them to the body-centered spatial RF, and enables them to function as external RFs for the body. This ability appears evident in arthropods, cephalopods, and vertebrates, and may be more widespread, e.g., in plants [8]. Humans are capable of radically revising such RFs as new entities are characterized as objects; for example, the RF defined by cosmological space has been revised multiple times from the European Medieval era to the present [137].

3.4. Embedding Objects in Time

At the cellular level, the primary time RFs are cyclic: the cell cycle and the circadian clock [138,139,140]. Embedding events or objects in longer spans of time requires a linear time RF, i.e., an ordered event memory with sufficient capacity. Evidence of cellular memory [141] or cellular-scale learning (see Section 5) across cell-cycle or diurnal boundaries is insufficient, on its own, to establish the ordering capability required for an internal linear time RF. Cells can use ordered external events to control sequential behavior over longer times, as in migration and fruiting-body development in slime molds, but this is also insufficient to indicate an internal linear time RF.
By “internalizing” events that are external to the individual cells involved, multicellular systems can implement linear time RFs, e.g., the linear time RF implemented by a tissue- or organism-scale developmental process. Interestingly, highly-structured anatomies and morphologies, and hence highly-structured developmental processes, correlate, in animals, with the presence of nervous systems, i.e., they occur in the Ctenophores, Cnidarians, and Bilaterians [102]. While local cell density, cell-type distribution, or other micro-environmental features may serve as external temporal markers, at least some developmental processes appear to be intrinsically timed. Tail regeneration in Xenopus, for example, synchronizes with the developmental stage at tail amputation to produce a tail appropriate to the overall size of the organism once regeneration is complete [142].
Cephalopod, bird, and mammalian nervous systems implement short-term linear time RFs that enable highly-structured, feedback-responsive, ordered behaviors, e.g., pursuit, defense, courtship, or nest-building that span minutes to days. Many longer-term behavior patterns, however, are tied to monthly or annual cycles, not to linear time. Hence, it is not clear how long a purely-internal, linear time RF can be sustained. Even in humans, long linear time is supported by external RFs, including communal memory enabled by language and stable environmental records such as calendars (see Section 5 below).

4. How Is Attention Switched between Objects?

4.1. Active Inference Requires Attention

Active inference, at its most basic, is a trade-off between acting on the environment to meet expectations and learning from the environment to modify expectations [41,42]. Even with only a single stimulus, and hence a single component for which environmental variational free energy (VFE) is measured, the importance—encoded in the theory as Bayesian precision—placed on input versus expected values affects the balance between expectation-meeting and expectation-updating. Action on the environment can, moreover, include both exploitative actions that meet current expectations and exploratory actions that increase the probability of learning [143]. Enacting these distinctions requires a prioritization or attention system. Spreading VFE across multiple distinguishable stimuli increases the need for attention to allocate exploitative, exploratory, or passive learning resources. Even in E. coli, competition among chemoreceptors for control of flagellar motion implements a rudimentary form of attention.
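The precision-weighted trade-off described above can be illustrated with a minimal numerical sketch. The function name and the single-stimulus, Gaussian-style update are our illustrative assumptions, not part of the active-inference literature's formal apparatus: high observation precision tilts the agent toward updating its expectation, while high prior precision tilts it toward acting on the world to cancel the residual error.

```python
def active_inference_step(belief, observation, pi_obs, pi_prior):
    """Toy single-stimulus sketch (hypothetical names): balance
    expectation-updating against expectation-meeting action using
    relative Bayesian precisions (inverse variances)."""
    # Prediction error between current expectation and sensed value
    error = observation - belief
    # Precision-weighted belief update (perceptual learning)
    new_belief = belief + (pi_obs / (pi_obs + pi_prior)) * error
    # Residual error the agent would try to cancel by acting on the world
    action_drive = -(pi_prior / (pi_obs + pi_prior)) * error
    return new_belief, action_drive
```

With `pi_obs` much larger than `pi_prior`, the agent mostly learns; with the ratio reversed, it mostly acts—the same balance the theory encodes as precision.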
Attention reorients action, at least temporarily, from one stimulus or goal to another. Even in organisms with sufficient cognitive resources to pursue multiple goals or action plans simultaneously, including humans, the focus of attention is generally unitary (animals such as cephalopods with highly-distributed central nervous systems may be an exception; see [144]). One can, therefore, expect attentional control to localize on processors with high fan-in from sensors and high fan-out to effectors. At the cellular level, this is the defining characteristic of “bow-tie” networks [145]; the CheY system of E. coli provides a rudimentary example, and the integrative role of Ca2+ in eukaryotic cells [146] a more highly ramified one (see [147] for additional examples). A bow-tie network is, effectively, a means of “broadcasting” a control signal to multiple recipients. In the mammalian central nervous system, this is the function of the proposed “global neuronal workspace” (GNW) [148,149,150,151,152], an integrative network of long-range connections between frontal and parietal cortices and the midbrain. Significantly, the GNW is the proposed locus of conscious attentional control, whether proactive or reactive.
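The fan-in/fan-out structure of a bow-tie network can be sketched abstractly. All names here are illustrative assumptions; the point is only the shape of the architecture: many sensor channels converge on a single integrative "waist", whose output is broadcast to every effector, as with CheY or a GNW-style workspace.

```python
def bowtie_broadcast(sensor_inputs, integrate, effectors):
    """Hypothetical sketch of a bow-tie architecture: high fan-in
    (many sensors, one integrator) followed by high fan-out
    (one control signal broadcast to all effectors)."""
    control_signal = integrate(sensor_inputs)        # fan-in waist
    return {name: effector(control_signal)           # fan-out broadcast
            for name, effector in effectors.items()}
```

A usage sketch: `bowtie_broadcast([0.2, 0.5], sum, {"flagellum": lambda s: s > 0.5})` routes the same integrated signal to every downstream recipient, which is what makes the waist a natural locus for attentional control.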

4.2. What Is the RF for Salience?

Being a target for attention requires salience. How does the significance, current or potential, of a stimulus for an organism become a marker for salience? If everything detectable is meaningful, what makes some detectable states or objects more meaningful than others?
E. coli provides one of the simplest examples of salience regulation: the induction of the lac operon [153]. Glucose is salient to E. coli by default; when glucose is absent, lac operon induction renders lactose salient. This simple example generalizes across phylogeny in three distinct but related ways:
  • Some states or objects are salient by default, e.g., threats, food sources, or mating opportunities.
  • Salience is inducible. Antigens induce antibody production, amplifying their salience. Removal of a sensory capability, e.g., sight, enhances the salience of phenomena detected by surviving senses, e.g., audition or touch.
  • Control of salience is distributed over multicomponent regulatory networks, e.g., GRNs or functional neural networks.
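The lac-operon example above can be reduced to a toy decision rule. The function name and boolean inputs are our simplification of the actual induction machinery (which also involves catabolite repression and inducer concentrations); the sketch captures only the salience logic: glucose is salient by default, and lactose becomes salient only when glucose is absent.

```python
def salient_nutrient(glucose_present, lactose_present):
    """Toy sketch (hypothetical interface) of lac-operon-style
    salience control in E. coli."""
    if glucose_present:
        return "glucose"   # default salience; operon repressed
    if lactose_present:
        return "lactose"   # operon induced: lactose now salient
    return None            # nothing salient; cell idles or explores
```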
Control systems that regulate salience are, effectively, RFs for salience. They enhance or repress meaningfulness in a context-dependent way. What, however, defines a context?
While the notion of context is often treated informally, Dzhafarov and colleagues [154,155,156] have developed a formal theory of context-dependent observation, couched in the language of classical probability theory, that is general enough to capture the irreversibility (technically, non-commutativity) of context switches characteristic of quantum theory [157] (see [66,158] for a Bayesian implementation). Key to this approach is the idea that contexts are defined by sets of observables that are, in typical cases, not salient. As an example, consider the stalk supporting the lure of an anglerfish [159]. The stalk (a modified fin spine) is visible and is a critical marker of context, but in a fatal case of inattentional blindness, is ignored by prey that attack the lure.
When context is defined in this salience-dependent way, the control of salience becomes relational, and therefore global. Salience-control networks can thus be expected to display the kind of global semantic dependencies exhibited by semantic-net representations of word or concept meaning [160] or indeed, fully-connected networks of any kind, e.g., hyperlink dependencies on the internet. Not surprisingly, the “salience network” of the human connectome is a high-level loop connecting insular and cingulate cortices with reward and emotion-processing areas in the midbrain, and it is strongly coupled to the GNW and to proactive and reactive attention systems [161].

4.3. Salience Allocation Differences Self-Amplify

Differences in salience allocation between lineages, or between individuals within a lineage, will clearly contribute to differences in resource exploitation. More interesting, however, is their contribution to differences in resource exploration. Mutants of E. coli deficient in glucose uptake or metabolism, for example, must find and exploit environmental niches that provide other metabolizable sugars. In general, differences in salience allocation will result in different experiential and hence learning histories. Differences in learning can be expected to generate, in turn, additional differences in salience allocation. These differences may lead, as in the case of E. coli lacking glucose transporters, to niche segregation and possibly eventual speciation.
Lateral gene transfer (LGT) provides a means of sharing DNA between distantly-related lineages and is ubiquitous in the microbial world [83]. Genes or operons providing novel metabolic abilities, e.g., antibiotic resistance, are often shared by LGT. Viewed from the perspective of salience allocation, LGT is a communication mechanism that enables a donor to alter, possibly radically, the allocation of salience and hence attention by the recipient. Compared to LGT, quorum sensing provides for faster communication and hence more responsive salience reallocation, typically only among near kin as in M. xanthus biofilms [7]. These mechanisms provide models for the ubiquitous use of inter-individual and even interspecies communication to induce changes in salience allocation in both plants and animals. Diversification of communication signals, and hence of means to negotiate salience, may be a substantial driver, not just a consequence, of speciation [162].

5. How Are Memories Stored and Accessed?

Recognizing, responding to, and communicating meanings all require memory. As discussed in [35], the spatial scales of biologically-encoded memories span at least ten orders of magnitude, and their temporal scales almost twenty orders of magnitude. The evolution of life can be viewed as the evolution of memory, not only at the genome scale, but at all scales.

5.1. Heritable Memories Encode Morphology and Function

Jacob and Monod end their famous paper on the lac operon with the following words ([153] p. 354):
The discovery of regulator and operator genes, and of repressive regulation of the activity of structural genes, reveals that the genome contains not only a series of blue-prints, but a coordinated program of protein synthesis and the means of controlling its execution.
The biological memory implemented by the genome, Jacob and Monod discovered, encodes structure and function. Evolutionary change, even when restricted to the level of the genome, can affect not only components, but how, when, where, and in response to what they are made. The increased efficiency with which evolution can explore morphological and functional space by copying and modifying genetic regulatory systems is the key insight of evo-devo [163,164]. It can be generalized to evolution at all scales [35,36,37].
Heritable memories require reproductive cells. In multicellular organisms in which most cells are reproductively repressed, developmental biology and regenerative, cancer, and general stem-cell biology provide, respectively, opportunities to study heritable memory as encoded by germline and non-germline reproductive cells. These memories are encoded on multiple substrates, from molecular structures and concentration ratios through architectome organization to cellular-scale bioelectric fields [35]. They are enacted by cell proliferation, differentiation, migration, cooperation, and competition, i.e., by building a morphology that functions in the specified way.
The available RFs of organisms such as E. coli are specified by heritable memories, e.g., the genes coding for pathway components such as CheY and its kinases and phosphatases and the cytoplasmic conditions that allow such components to function. Heritable memories specify many RFs in multicellular organisms, including humans; examples include photoreceptors responsive to specific frequencies and olfactory receptors responsive to specific compounds. Such memories may be genome independent, e.g., the body-axis polarity RFs specified by the sperm-entry point in ascidians [165] or by bioelectric polarity [166,167] or axon orientation [168] in regenerating planaria.

5.2. Experiential Memories and Learning

Learning is ubiquitous in animals. It has also been demonstrated in paramecia [169] and in slime molds such as Physarum [170], and possibly in plants [171,172,173]; hence learning does not require neurons, let alone networks of neurons [15]. Learning can adjust the salience of stimuli, modify the responses to stimuli, and at least in birds and mammals, introduce novel categories and hence induce the construction of novel RFs. While in many cases learning requires multiple exposures or extended experience with a stimulus, birds and mammals at least are also capable of one-shot learning [174]. Episodic memories in humans, for example, are all results of one-shot learning: each records a single, specific event.
Experimental work in numerous invertebrate and vertebrate systems has contributed significantly to understanding the neural implementation of particular experiential memories. Such “engrams” may be implemented by single cells, local ensembles of cells, or extended networks connecting cell ensembles in different parts of the brain, and hence may be encoded by patterned activity at the intracellular up to the functional-network scale [48,49]. The dendritic trees of individual neurons, particularly cortical pyramidal cells, long-distance von Economo neurons, or cerebellar Purkinje cells, are complex temporal signal-processing systems [175,176] that may be individually capable of computations as complex as time-windowed exclusive-or [133]. The functionality of these signal processors is reversibly modulated bioelectrically over short times and epigenetically over longer times [177,178]. Hence, neurons have far greater functionality than the simple sum-threshold units employed in typical artificial neural networks (ANNs), as has been understood by designers of “neuromorphic” computing systems since the late 1980s [179]. Both individual neurons and extended networks of neurons can, therefore, be expected to implement “learning algorithms” far more complex than those designed for typical ANNs. Intriguing recent results suggest, for example, that some neurons are either intrinsically more excitable than neighboring cells, or are primed to be so by some (possibly epigenetic) mechanism, and that these cells are preferentially recruited into networks that encode memories [49]. Such a prior bias toward particular cells is reminiscent of genetic pre-adaptation [180,181], suggesting a deep functional analogy between genetically and neuronally encoded memories.
Stigmergic implementation of event memories is also ubiquitous, particularly in social insects, cephalopods, birds, and mammals. Here, memory encoding requires acting on the environment, either biochemically via pheromones or other scent markers, or mechanically via techniques ranging from nest or den construction to human architecture or writing. Learning, in these cases, includes learning to both write and read externally-encoded memories. These activities, clearly, require the internal implementation of RFs to assign semantics to both the actions involved in writing and the sensory inputs involved in reading. Comparative analyses of communication systems (e.g., [182,183,184]) and tool use (e.g., [185,186,187,188]) provide particularly promising avenues to explore both commonalities (e.g., the ubiquitous role of FoxP in communication systems [189]) and differences in the learning and encoding of these RFs.

5.3. Reporting, Reconsolidation, and Error Correction

Any behavior can be considered a “report” of one or more memories. Understanding which memories are being reported is clearly an experimental and interpretative challenge, even in the case of human verbal reports. More interesting in the present context, however, is the question of whether, and if so how, reporting a memory by somehow enacting its content alters that content. Memory content change on reconsolidation is well documented in the case of human episodic memories [50,51]. Can this be expected for memories across the board? If memories are generically unstable, what kinds of redundancy or error correction systems can be expected?
It is clear that reconsolidation generically effects revision in at least one other form of memory: object tokens. As objects change state, their identifying features may change; state changes accompanying growth in individual humans are an obvious example. Object tokens must, therefore, be continually updated [122]; while previous versions may be maintained in a “historical model” of the individual, they are recalled by chaining backwards from the current version. Within a nervous system, these are all dynamic processes that involve exciting some cells and inhibiting others, i.e., they involve the same cellular-scale changes that record memories in the first place. Reactivation (recall) and reconsolidation can, therefore, be expected to cause memory revision generically.
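The update-and-chain-backward scheme described above can be sketched as a simple data structure. The class and method names are hypothetical; the sketch shows only the structural claim: each reconsolidation replaces the current object token while pushing the superseded version into a history reachable only by chaining backward from the current one.

```python
class ObjectToken:
    """Hypothetical sketch of an object token that is revised on
    reconsolidation, with prior versions kept in a backward chain
    (a 'historical model' of the individual)."""
    def __init__(self, features):
        self.features = dict(features)  # copy the identifying features
        self.previous = None            # link to the superseded version

    def reconsolidate(self, changed):
        prior = ObjectToken(self.features)
        prior.previous = self.previous
        self.previous = prior           # push the old version into history
        self.features.update(changed)   # current token is revised in place

    def history(self):
        # Recall proceeds by chaining backward from the current version
        node, versions = self.previous, []
        while node:
            versions.append(node.features)
            node = node.previous
        return versions
```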
Humans employ both communication with others and external objects, including written documents and recorded images, to supplement individual memories and hence to provide both error correction and the possibility for restoring memories that have been degraded or lost. The maintenance of a shared language, for example, requires communication and benefits from recording and writing. Words (nouns) are markers for object tokens, and hence for object-identifying RFs. We can expect the maintenance of shared RFs for external objects to generically require both communication and shared external exemplars, e.g., “canonical” objects that all within a community agree are tables, tools, or most critically, in-group or out-group members. Members of a linguistic community must, moreover, be capable of recognizing that other members agree about the status of such objects and the words used to refer to them [190].
Redundancy and “community” based error correction are, effectively, methods for distributing memory across multiple, at least partially interchangeable representations. We suggest that such distributed-memory methods are employed in analogous ways at all scales of biological organization, from double-stranded DNA and the maintenance of multiple copies of the genome in most eukaryotic cells, to quorum sensing and other means by which cells “negotiate” and “agree” by exchanging molecular or bioelectric signals, to the redundancy and handshaking mechanisms employed by neural, hormonal, and immune systems. Organisms capable of preserving experiential memories across events in which the brain is substantially remodeled (e.g., in insect metamorphosis) or even completely regenerated following injury (e.g., in planaria) provide compelling examples of distributed memory [191]. While we do not, in general, know the “words” being used in these memories or the RFs with which they are associated, the languages of communication and information flow are so intuitively natural that they have been employed to describe such systems even in the absence of detailed computational analysis or specific functional analogies. We suggest that viewing these systems explicitly in terms of computational models, e.g., in terms of VFE minimization and Bayesian satisficing [41,42,74,75], will prove increasingly useful.
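The simplest form of redundancy-based error correction mentioned above is majority voting across partially interchangeable copies. The function below is our illustrative sketch, not a model of any specific biological mechanism: each position of a degraded memory is restored to the value held by the majority of copies, as with multiple genome copies or community-maintained records.

```python
from collections import Counter

def repair_by_consensus(copies):
    """Hypothetical sketch: restore each position of a distributed
    memory to the majority value across redundant copies."""
    repaired = []
    for i in range(len(copies[0])):
        votes = Counter(copy[i] for copy in copies)
        repaired.append(votes.most_common(1)[0][0])  # majority value
    return repaired
```

For example, given three degraded copies `list("GATTACA")`, `list("GATAACA")`, and `list("GCTTACA")`, position-wise majority voting recovers the consensus sequence even though no single copy is guaranteed intact.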

5.4. Reconsidering the Cognitive Role of Grammatical Language

Converging evidence from functional neuroscience and cognitive, evolutionary, and social psychology has led over the past decade to a substantial rethinking of the role of grammatical language, and indeed of the symbolic constructs of “good old-fashioned” AI (GOFAI) in general, in the implementation of human cognition. One strand of this rethinking focuses on the role uniquely played by language, and by the deliberative, inner-speech driven Process-2 cognition underlying public language use, in human communication. As Adolphs [190] emphasizes, the primary selection pressures in later human evolution were social. In such a setting, language serves not only as a means of cooperative communication, but of justification, persuasion, and deception, including self-deceptive rationalization [192,193,194,195]. These uses of language suggest a role for language on the “surface” of cognition, above the level of, e.g., situational awareness or planning. The general, cross-domain association of expertise (including expertise in the use of language) with automaticity [196,197,198] and the difficulty of achieving a fully self-consistent definition of Process-2 cognition [199] similarly suggest a surface role for both Process-2 cognition and language. Chater [200] formulates this “flat” conception of cognition as “mental [i.e., cognitive and emotional] processes are always unconscious—consciousness reports answers, but not their origins ... we are only ever conscious of the results of the brain’s interpretations—not the ‘raw’ information it makes sense of, or the intervening inferences” (p. 180, emphasis in original), a characterization that comports well with the MB condition [33], the idea that perception and action act, in all modalities, through an “interface” separating the cognitive system from the world [201], and the GNW architecture [148]. Indeed Process-2 cognition can be fully implemented by automated, Process-1 cognition in a GNW setting [202].
If language use is a modal, surface phenomenon implemented by domain-general, scale-free mechanisms such as hierarchical Bayesian satisficing [41,42], the need to identify specific neural implementations of syntactic or lexical symbols at the computational level [1,2,203] to explain language use diminishes, and possibly disappears altogether. Historically, the greatest barrier to domain-nonspecific models of language use has been recursion [6,204]. Recursive sequence learning and production is not, however, limited to language; visual processing [205] and motor planning [206,207] are now known to be both recursive and independent of language. These earlier-evolving systems may have been co-opted for language processing, a co-option supported by comparative phenotypic analysis of FoxP mutations [208]. The motor planning system is a canonical predictive-processing system [209] that can be described in terms of hierarchical Bayesian satisficing in a way largely analogous to visual processing [210]. Significantly, neuroimaging studies consistently localize numerical and mathematical reasoning, including abstract symbolic reasoning, to fronto-parietal networks that overlap visual motion-detection and motor areas, not language-processing (e.g., Broca’s) areas [211,212]. As with grammatical sentence construction, whether abstract mathematical manipulations are implemented by a hierarchical Bayesian mechanism remains unknown.

6. How Do Living Systems Represent Themselves?

One of us (ML) has previously suggested that the “self” is usefully thought of in terms of a “cognitive light cone” indicating the horizon of the goal states the agent is capable of pursuing. The spatial extent demarcates the distance across which it is able to take measurements and exert effects, and the forward and backward temporal extents indicate the system’s abilities to anticipate the future and recall the past, respectively [24]. The “past” here is the past of experienced events, and hence of memories resulting from learning. While many non-human animals clearly remember places, events, and social roles and some are capable of planning cooperative hunts or raids against neighboring populations, evidence for “mental time travel” to the past via episodic memory or the future via prospective memory is contested, and these are often regarded as human-specific [5]. It is not clear, in particular, whether any non-humans have a sufficiently flexible self-representation to represent their own past or future actions.
The status of the human self-representation also remains unclear. Electro-encephalographic (EEG) studies of experienced meditators localize the experienced emotional-agentive, narrative-agentive, and passive “witness” aspects of the self to right- and left-posterior and mid-frontal excitations, respectively [213]. The insular-cingulate-limbic loop involved in salience allocation is also involved in linking the interoceptive sense of a bodily self to both cognitive control and memory [52,53,54,55]. How this pragmatic self-representation relates to the “psychological” self characterized by beliefs, desires, ethical attitudes, and personality, and even whether this latter self exists, remain open questions [200,214,215].
It seems natural to view the pragmatic self-representation as an RF, but what exactly does it measure, how is it implemented, and most critically, how do implementations at different scales relate? Levin [24,216] considers a self to be a stable unit of cooperative action, whether at the cellular, tissue, organ, organism, community, or even higher scale. At what scale does the self-representation become a distinct functional component, as it appears to be in humans, of the self it represents? Heat-shock and other stress-response proteins provide measurable indicators of cellular state to other cellular processes; one can, for example, consider the E. coli heat-shock system an RF for environmental stress. It is not, however, clear that any component of the metabolic and regulatory network of E. coli represents the state of the rest of the network in the way that the human insular-cingulate-limbic loop represents the state of the body to frontal control systems, or in the way that the somatosensory homunculus represents the body to the tactile-processing system. The functional architecture of E. coli involves many communicating components, but does not appear to involve an overall metaprocessor. Is metaprocessing, and hence the need for one or more separate self-RFs, part of the mammalian innovation of developing a cerebral cortex?
Cancer is widely considered a “rebellion” on the part of some cells against the cooperative requirements of the whole organism. Does the “new self” of a tumor emerge de novo, or is it latent, and somehow actively repressed, in well-behaved somatic cells? Does the immune system have specific RFs for cancers? The various microbial communities within a holobiont cooperate and compete with each other and with their eukaryotic host; are these interactions productively viewed as social, political, or economic, i.e., can models based on such concepts contribute predictive power that is unavailable in the languages of biochemistry, cell biology, or microbial ecology? It perhaps depends on how context-sensitive these interactions, and the RFs that support them, turn out to be.
Is it, finally, useful to think of ecosystem-scale systems or even life as a whole as “selves”? Living systems, including the living system we call “evolution”, appear to minimize VFE at every scale [37]. Minimizing VFE requires detecting prediction failures. Does this require a self?

7. Conclusion: Meaning as a Multi-Scale Phenomenon

Human beings create meanings. We have argued in this paper that the creation of meanings is not unique to humans, or even to “complex” organisms, but is a ubiquitous characteristic of living systems. Descartes was wrong to view nonhumans as mindless automata: all bodies are managed by “minds” capable of specifically recognizing environmental states and, in some cases, spatially-located persistent objects, switching attention when needed, and constructing memories. The basic principles of cognition are, as suggested in [24], scale-free.
What changes, however, does this suggest for biology? The language of cognition provides abstractions, like memory and attention, that generalize over wide ranges of biological phenomena. The concept of a reference frame, borrowed this time from physics, is similarly wide-ranging. The “lumping” enabled by these abstractions suggests common, general computational mechanisms with scale-specific implementations. The often-noted structural similarities between regulatory pathways and neurofunctional networks [217], and particularly the possibility that bow-tie regulatory networks function as GNW-type broadcast architectures, suggest that such common mechanisms exist, a suggestion reinforced by the utility of active inference as an information-processing model at multiple scales.
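The suggested analogy between bow-tie networks and GNW-type broadcast architectures can be sketched abstractly: many input processes compete for access to a narrow shared core, whose contents are then broadcast to all downstream consumers. The class, signal, and consumer names below are hypothetical illustrations, not a model of any particular regulatory network.

```python
class BowTieWorkspace:
    """Minimal sketch of a bow-tie / GNW-style broadcast architecture:
    many inputs compete to write a narrow shared core; the core state
    is then broadcast identically to every output process."""

    def __init__(self):
        self.core = None  # the narrow "knot" of the bow-tie

    def compete(self, signals):
        """Winner-take-all access to the core: the (salience, content)
        pair with the highest salience gains entry."""
        salience, content = max(signals)
        self.core = content
        return content

    def broadcast(self, consumers):
        """Every downstream process receives the same core state."""
        return {name: fn(self.core) for name, fn in consumers.items()}

ws = BowTieWorkspace()
ws.compete([(0.2, "glucose low"), (0.9, "heat shock")])
out = ws.broadcast({
    "transcription": lambda s: f"induce chaperones ({s})",
    "motility": lambda s: f"tumble more ({s})",
})
```

The fan-in/fan-out through a single shared state is what makes the architecture a broadcast: whichever signal wins the competition, every consumer sees the same core content.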
Further development of specific abilities to manipulate existing biological RFs at the scale at which they are implemented, e.g., bioelectric manipulations of body-axis polarity in planaria [166,167] or optogenetic manipulations of particular memories in mice [49], will begin to construct a vocabulary of functional localization analogous to the vocabulary of functional networks in the mammalian brain. The true test of these ideas, however, lies in the possibility of implementing new RFs from scratch in synthetic biological systems [216,218,219,220,221]. Paraphrasing Feynman, making it is a way of understanding it [222].
Throughout this paper, we have illustrated the use of multiple languages and techniques, at multiple scales, in organisms from microbes to humans, to understand the biology of meaning. We have also, contra [223], shown how concepts originating in the cognitive sciences apply to biological systems across the board. Disciplinary barriers, like trade barriers, benefit entrenched interests by interrupting the flow of ideas and techniques that might otherwise transform ways of thinking and experimental methods. We hope in this essay to have contributed somewhat to their dissolution.

Author Contributions

Conceptualization, C.F. and M.L.; writing—original draft preparation, C.F.; writing—review and editing, C.F. and M.L. All authors have read and agreed to the published version of the manuscript.


Funding

The work of M.L. was supported by the Barton Family Foundation and the Templeton World Charity Foundation (No. TWCF0089/AB55).

Conflicts of Interest

The authors declare no conflict of interest.


Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
ANN	Artificial Neural Network
GNW	Global Neuronal Workspace
GOFAI	Good Old-Fashioned AI
GRN	Gene Regulation Network
LGT	Lateral Gene Transfer
MY	Million Years
ps	picosecond (10⁻¹² second)
RF	Reference Frame
VFE	Variational Free Energy


References

  1. Fodor, J.A. The Language of Thought; Harvard University Press: Cambridge, MA, USA, 1975. [Google Scholar]
  2. Hinzen, W. What is un-Cartesian linguistics? Biolinguistics 2014, 8, 226–257. [Google Scholar]
  3. Evans, J.B.T. Dual processing accounts of reasoning, judgement and social cognition. Annu. Rev. Psychol. 2008, 59, 255–278. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Gentner, D. Why we’re so smart. In Language and Mind: Advances in the Study of Language and Thought; Gentner, D., Goldin-Meadow, S., Eds.; MIT Press: Cambridge, MA, USA, 2003; pp. 195–235. [Google Scholar]
  5. Suddendorf, T.; Corballis, M.C. The evolution of foresight: What is mental time travel, and is it unique to humans? Behav. Brain Sci. 2007, 30, 299–351. [Google Scholar] [CrossRef] [PubMed]
  6. Hauser, M.D.; Chomsky, N.; Fitch, W.T. The faculty of language: What is it, who has it, and how did it evolve? Science 2002, 298, 1569–1579. [Google Scholar] [CrossRef] [PubMed]
  7. Muñoz-Dorado, J.; Marcos-Torres, F.J.; García-Bravo, E.; Moraleda-Muñoz, A.; Pérez, J. Myxobacteria: Moving, killing, feeding, and surviving together. Front. Microbiol. 2016, 7, 781. [Google Scholar] [CrossRef] [Green Version]
  8. Calvo, P.; Keijzer, F. Plant Cognition. In Plant-Environment Interactions, Signaling and Communication in Plants; Baluška, F., Ed.; Springer: Berlin, Germany, 2009; pp. 247–266. [Google Scholar]
  9. Dreyfus, H.L. What Computers Can’t Do: A Critique of Artificial Reason; Harper and Row: New York, NY, USA, 1972. [Google Scholar]
  10. Searle, J.R. Minds, brains, and programs. Behav. Brain Sci. 1980, 3, 417–424. [Google Scholar] [CrossRef] [Green Version]
  11. Fodor, J.A. The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  12. Froese, T.; Taguchi, S. The problem of meaning in AI and robotics: Still with us after all these years. Philosophies 2019, 4, 14. [Google Scholar] [CrossRef] [Green Version]
  13. Gagliano, M.; Grimonprez, M. Breaking the silence—Language and the making of meaning in plants. Ecopsychology 2015, 7, 145–152. [Google Scholar] [CrossRef] [Green Version]
  14. Lyon, P. The cognitive cell: Bacterial behavior reconsidered. Front. Microbiol. 2015, 6, 264. [Google Scholar] [CrossRef]
  15. Baluška, F.; Levin, M. On having no head: Cognition throughout biological systems. Front. Psychol. 2016, 7, 902. [Google Scholar] [CrossRef] [Green Version]
  16. Newen, A.; De Bruin, B.; Gallagher, S. 4E cognition: Historical roots, key concepts, and central issues. In The Oxford Handbook of 4E Cognition; Newen, A., De Bruin, B., Gallagher, S., Eds.; Oxford University Press: Oxford, UK, 2018; pp. 3–15. [Google Scholar]
  17. Kull, K.; Deacon, T.; Emmeche, C.; Hoffmeyer, J.; Stjernfelt, F. Theses on biosemiotics: Prolegomena to a theoretical biology. In Towards a Semiotic Biology: Life Is the Action of Signs; Emmeche, C., Kull, K., Eds.; Imperial College Press: London, UK, 2011; pp. 25–41. [Google Scholar]
  18. Maturana, H.R.; Varela, F.J. Autopoiesis and Cognition: The Realization of the Living; D. Reidel: Dordrecht, The Netherlands, 1980. [Google Scholar]
  19. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  20. Pattee, H.H. Cell psychology. Cogn. Brain Theory 1982, 5, 325–341. [Google Scholar]
  21. Stewart, J. Cognition = Life: Implications for higher-level cognition. Behav. Process. 1996, 35, 311–326. [Google Scholar] [CrossRef]
  22. di Primio, F.; Müller, B.F.; Lengeler, J.W. Minimal cognition in unicellular organisms. In From Animals to Animats; Meyer, J.A., Berthoz, A., Floreano, D., Roitblat, H.L., Wilson, S.W., Eds.; International Society For Adaptive Behavior: Honolulu, HI, USA, 2000; pp. 3–12. [Google Scholar]
  23. Miller, W.B.; Torday, J.S. Four domains: The fundamental unicell and post-Darwinian cognition-based evolution. Prog. Biophys. Mol. Biol. 2018, 140, 49–73. [Google Scholar] [CrossRef] [PubMed]
  24. Levin, M. The computational boundary of a ‘self’: Developmental bioelectricity drives multicellularity and scale-free cognition. Front. Psychol. 2019, 10, 2688. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Gibson, J.J. The Ecological Approach to Visual Perception; Houghton Mifflin: Boston, MA, USA, 1979. [Google Scholar]
  26. Turvey, M.T.; Shaw, R.E.; Reed, E.S.; Mace, W.M. Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn (1981). Cognition 1981, 9, 237–304. [Google Scholar] [CrossRef]
  27. Michaels, C.F.; Carello, C. Direct Perception; Prentice-Hall: Englewood Cliffs, NJ, USA, 1981. [Google Scholar]
  28. Chemero, A. Radical embodied cognitive science. Rev. Gen. Psychol. 2013, 17, 145–150. [Google Scholar] [CrossRef]
  29. Di Paolo, E.; Thompson, E. The enactive approach. In The Routledge Handbook of Embodied Cognition; Shapiro, L., Ed.; Routledge: London, UK, 2014; pp. 68–78. [Google Scholar]
  30. Anderson, M.L. Embodied cognition: A field guide. Artif. Intell. 2003, 149, 91–130. [Google Scholar] [CrossRef] [Green Version]
  31. Froese, T.; Ziemke, T. Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artif. Intell. 2009, 173, 466–500. [Google Scholar] [CrossRef] [Green Version]
  32. Spencer, J.P.; Perone, S.; Johnson, J.S. Dynamic field theory and embodied cognitive dynamics. In Toward a Unified Theory of Development Connectionism and Dynamic System Theory Reconsidered; Spencer, J.P., Ed.; Oxford University Press: Oxford, UK, 2009; pp. 86–118. [Google Scholar]
  33. Clark, A. How to knit your own Markov blanket: Resisting the Second Law with metamorphic minds. In Philosophy and Predictive Processing: 3; Metzinger, T., Wiese, W., Eds.; MIND Group: Frankfurt am Main, Germany, 2017. [Google Scholar] [CrossRef]
  34. Bichakjian, B.H. Language: From sensory mapping to cognitive construct. Biolinguistics 2012, 6, 247–258. [Google Scholar]
  35. Fields, C.; Levin, M. Multiscale memory and bioelectric error correction in the cytoplasm-cytoskeleton- membrane system. WIRES Syst. Biol. Med. 2018, 10, e1410. [Google Scholar] [CrossRef]
  36. Fields, C.; Levin, M. Integrating evolutionary and developmental thinking into a scale-free biology. BioEssays 2020, 42, 1900228. [Google Scholar] [CrossRef] [PubMed]
  37. Fields, C.; Levin, M. Does evolution have a target morphology? Organisms 2020, 4, 57–76. [Google Scholar]
  38. Bartlett, S.D.; Rudolph, T.; Spekkens, R.W. Reference frames, superselection rules, and quantum information. Rev. Mod. Phys. 2007, 79, 555–609. [Google Scholar] [CrossRef] [Green Version]
  39. Rao, R.P.; Ballard, D.H. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 1999, 2, 79–87. [Google Scholar] [CrossRef]
  40. Knill, D.C.; Pouget, A. The Bayesian brain: The role of uncertainty in neural coding and computation. Trends Neurosci. 2004, 27, 712–719. [Google Scholar] [CrossRef] [PubMed]
  41. Friston, K.J. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
  42. Friston, K.J. Life as we know it. J. R. Soc. Interface 2013, 10, 20130475. [Google Scholar] [CrossRef] [Green Version]
  43. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 2013, 36, 181–204. [Google Scholar] [CrossRef]
  44. Hohwy, J. The Predictive Mind; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  45. Spratling, M.W. Predictive coding as a model of cognition. Cogn. Process. 2016, 17, 279–305. [Google Scholar] [CrossRef] [Green Version]
  46. Corbetta, M.; Shulman, G.L. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 2002, 3, 201–215. [Google Scholar] [CrossRef]
  47. Vossel, S.; Geng, J.J.; Fink, G.R. Dorsal and ventral attention systems: Distinct neural circuits but collaborative roles. Neuroscientist 2014, 20, 150–159. [Google Scholar] [CrossRef] [PubMed]
  48. Eichenbaum, H. Still searching for the engram. Learn. Behav. 2016, 44, 209–222. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Josselyn, S.A.; Tonegawa, S. Memory engrams: Recalling the past and imagining the future. Science 2020, 367, eaaw4325. [Google Scholar] [CrossRef] [PubMed]
  50. Nadel, L.; Hupbach, A.; Gomez, R.; Newman-Smith, K. Memory formation, consolidation and transformation. Neurosci. Biobehav. Rev. 2012, 36, 1640–1645. [Google Scholar] [CrossRef]
  51. Schwabe, L.; Nader, K.; Pruessner, J.C. Reconsolidation of human memory: Brain mechanisms and clinical relevance. Biol. Psychiatry 2014, 76, 274–280. [Google Scholar] [CrossRef] [Green Version]
  52. Craig, A.D. The sentient self. Brain Struct. Funct. 2010, 214, 563–577. [Google Scholar] [CrossRef]
  53. Northoff, G. Brain and self—A neurophilosophical account. Child Adolesc. Psychiatry Ment. Health 2013, 7, 28. [Google Scholar] [CrossRef] [Green Version]
  54. Seth, A.K. Interoceptive inference, emotion, and the embodied self. Trends Cogn. Sci. 2013, 17, 565–573. [Google Scholar] [CrossRef]
  55. Seth, A.K.; Tsakiris, M. Being a beast machine: The somatic basis of selfhood. Trends Cogn. Sci. 2018, 22, 969–981. [Google Scholar] [CrossRef] [Green Version]
  56. Bateson, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology; Jason Aronson: Northvale, NJ, USA, 1972. [Google Scholar]
  57. Roederer, J. Information and Its Role in Nature; Springer: Berlin, Germany, 2005. [Google Scholar]
  58. Darwin, C. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life; Murray: London, UK, 1859; Available online: (accessed on 9 November 2020).
  59. Schrödinger, E. What is Life?; Cambridge University Press: Cambridge, UK, 1944. [Google Scholar]
  60. Lovelock, J.E.; Margulis, L. Atmospheric homeostasis by and for the biosphere: The Gaia hypothesis. Tellus 1974, 26, 2–10. [Google Scholar] [CrossRef] [Green Version]
  61. Kauffman, S.A. The Origins of Order: Self Organization and Selection in Evolution; Oxford University Press: Oxford, UK, 1993. [Google Scholar]
  62. Bartlett, S.; Wong, M.L. Defining Lyfe in the Universe: From three privileged functions to four pillars. Life 2020, 10, 42. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Hermida, M. Life on Earth is an individual. Theory Biosci. 2016, 135, 37–44. [Google Scholar] [CrossRef] [PubMed]
  64. Mariscal, C.; Doolittle, W.F. Life and life only: A radical alternative to life definitionism. Synthese 2020, 197, 2975–2989. [Google Scholar] [CrossRef]
  65. Fields, C.; Marcianò, A. Markov blankets are general physical interaction surfaces. Phys. Life Rev. 2020, 33, 109–111. [Google Scholar] [CrossRef]
  66. Fields, C.; Glazebrook, J.F. Representing measurement as a thermodynamic symmetry breaking. Symmetry 2019, 12, 810. [Google Scholar] [CrossRef]
  67. Fields, C.; Marcianò, A. Holographic screens are classical information channels. Quantum Rep. 2020, 2, 22. [Google Scholar] [CrossRef]
  68. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Devel. 1961, 5, 183–195. [Google Scholar] [CrossRef]
  69. Parrondo, J.M.R.; Horowitz, J.M.; Sagawa, T. Thermodynamics of information. Nat. Phys. 2015, 11, 131–139. [Google Scholar] [CrossRef]
  70. Moore, E.F. Gedankenexperiments on sequential machines. In Automata Studies; Shannon, C.E., McCarthy, J., Eds.; Princeton University Press: Princeton, NJ, USA, 1956; pp. 129–155. [Google Scholar]
  71. Fields, C. Some consequences of the thermodynamic cost of system identification. Entropy 2018, 20, 797. [Google Scholar] [CrossRef] [Green Version]
  72. Fields, C. Building the observer into the system: Toward a realistic description of human interaction with the world. Systems 2016, 4, 32. [Google Scholar] [CrossRef] [Green Version]
  73. Fields, C. Sciences of observation. Philosophies 2018, 3, 29. [Google Scholar] [CrossRef] [Green Version]
  74. Friston, K.; Levin, M.; Sengupta, B.; Pezzulo, G. Knowing one’s place: A free-energy approach to pattern regulation. J. R. Soc. Interface 2015, 12, 20141383. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Kuchling, F.; Friston, K.; Georgiev, G.; Levin, M. Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems. Phys. Life Rev. 2020, 33, 88–108. [Google Scholar] [CrossRef] [PubMed]
  76. Hume, D. An Enquiry Concerning Human Understanding; A. Millar: London, UK, 1748; Available online: (accessed on 9 November 2020).
  77. Pattee, H.H. The physics of symbols: Bridging the epistemic cut. Biosystems 2001, 60, 5–21. [Google Scholar] [CrossRef]
  78. Micali, G.; Endres, R.G. Bacterial chemotaxis: Information processing, thermodynamics, and behavior. Curr. Opin. Microbiol. 2016, 30, 8–15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. Tweedy, L.; Thomason, P.A.; Paschke, P.I.; Martin, K.; Machesky, L.M.; Zagnoni, M.; Insall, R.H. Seeing around corners: Cells solve mazes and respond at a distance using attractant breakdown. Science 2020, 369, eaay9792. [Google Scholar] [CrossRef]
  80. Baron, S.; Eisenbach, M. CheY acetylation is required for ordinary adaptation time in Escherichia coli chemotaxis. FEBS Lett. 2017, 591, 1958–1965. [Google Scholar] [CrossRef]
  81. Fields, C.; Marcianò, A. Sharing nonfungible information requires shared nonfungible information. Quant. Rep. 2019, 1, 252–259. [Google Scholar] [CrossRef] [Green Version]
  82. Herculano-Houzel, S. Scaling of brain metabolism with a fixed energy budget per neuron: Implications for neuronal activity, plasticity and evolution. PLoS ONE 2011, 6, e17514. [Google Scholar] [CrossRef] [Green Version]
  83. Robbins, R.J.; Krishtalka, L.; Wooley, J.C. Advances in biodiversity: Metagenomics and the unveiling of biological dark matter. Stand. Genom. Sci. 2016, 11, 69. [Google Scholar] [CrossRef] [Green Version]
  84. Cabezón, E.; Ripoll-Rozada, J.; Peña, A.; de la Cruz, F.; Arechaga, I. Towards an integrated model of bacterial conjugation. FEMS Microbiol. Rev. 2015, 39, 81–95. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Chen, A.H.; Lubkowicz, D.; Yeong, V.; Chang, R.L.; Silver, P.A. Transplantability of a circadian clock to a noncircadian organism. Sci. Adv. 2015, 1, e1500358. [Google Scholar] [CrossRef] [PubMed]
  86. Barkow, J.; Cosmides, L.; Tooby, J. The Adapted Mind: Evolutionary Psychology and the Generation of Culture; Oxford University Press: Oxford, UK, 1992. [Google Scholar]
  87. Buss, D.M. (Ed.) The Handbook of Evolutionary Psychology; John Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  88. Cook, N.D.; Carvalho, G.B.; Damasio, A. From membrane excitability to metazoan psychology. Trends Neurosci. 2014, 37, 698–705. [Google Scholar] [CrossRef] [PubMed]
  89. Capra, E.J.; Laub, M.T. Evolution of two-component signal transduction systems. Annu. Rev. Microbiol. 2012, 66, 325–347. [Google Scholar] [CrossRef] [Green Version]
  90. Loh, K.M.; van Amerongen, R.; Nusse, R. Generating cellular diversity and spatial form: Wnt signaling and the evolution of multicellular animals. Dev. Cell 2016, 38, 643–655. [Google Scholar] [CrossRef] [Green Version]
  91. Li, M.; Liu, J.; Zhang, C. Evolutionary history of the vertebrate mitogen activated protein kinases family. PLoS ONE 2011, 6, e26999. [Google Scholar] [CrossRef]
  92. Fischer, A.H.L.; Smith, J. Evo–Devo in the era of gene regulatory networks. Integr. Comp. Biol. 2012, 52, 842–849. [Google Scholar] [CrossRef] [Green Version]
  93. Nickel, M. Evolutionary emergence of synaptic nervous systems: What can we learn from the non-synaptic, nerveless Porifera? Invertebr. Biol. 2010, 129, 1–16. [Google Scholar] [CrossRef] [Green Version]
  94. Roshchina, V.V. New trends and perspectives in the evolution of neurotransmitters in microbial, plant, and animal cells. In Microbial Endocrinology: Interkingdom Signaling in Infectious Disease and Health; Lyte, M., Ed.; Springer: Cham, Switzerland, 2016; pp. 25–77. [Google Scholar]
  95. Csaba, G. The hormonal system of the unicellular Tetrahymena: A review with evolutionary aspects. Acta Microbiol. Immunol. Hungarica 2012, 59, 131–156. [Google Scholar] [CrossRef] [Green Version]
  96. Campbell, R.K.; Satoh, N.; Degnan, B.M. Piecing together evolution of the vertebrate endocrine system. Trends Genet. 2004, 20, 359–366. [Google Scholar] [CrossRef]
  97. Levin, M. Morphogenetic fields in embryogenesis, regeneration, and cancer: Non-local control of complex patterning. Biosystems 2012, 109, 243–261. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  98. Levin, M. Molecular bioelectricity: How endogenous voltage potentials control cell behavior and instruct pattern regulation in vivo. Mol. Biol. Cell. 2014, 25, 3835–3850. [Google Scholar] [CrossRef] [PubMed]
  99. Levin, M.; Martyniuk, C.J. The bioelectric code: An ancient computational medium for dynamic control of growth and form. BioSystems 2018, 164, 76–93. [Google Scholar] [CrossRef] [PubMed]
  100. Arendt, D.; Tosches, M.A.; Marlow, H. From nerve net to nerve ring, nerve cord and brain—Evolution of the nervous system. Nat. Rev. Neurosci. 2016, 17, 61–72. [Google Scholar] [CrossRef]
  101. Varoqueaux, F.; Fasshauer, D. Getting nervous: An evolutionary overhaul for communication. Annu. Rev. Genet. 2017, 51, 455–476. [Google Scholar] [CrossRef]
  102. Fields, C.; Bischof, J.; Levin, M. Morphological coordination: A common ancestral function unifying neural and non-neural signaling. Physiology 2020, 35, 16–30. [Google Scholar] [CrossRef]
  103. Guerrero, R.; Margulis, L.; Berlanga, M. Symbiogenesis: The holobiont as a unit of evolution. Int. Microbiol. 2013, 16, 133–143. [Google Scholar]
  104. Gilbert, S.F. Symbiosis as the way of eukaryotic life: The dependent co-origination of the body. J. Biosci. 2014, 39, 201–209. [Google Scholar] [CrossRef] [Green Version]
  105. Thiery, S.; Kaimer, C. The predation strategy of Myxococcus Xanthus. Front. Microbiol. 2020, 11, 2. [Google Scholar] [CrossRef]
  106. Turner, J.S. Extended phenotypes and extended organisms. Biol. Philos. 2004, 19, 327–352. [Google Scholar] [CrossRef]
  107. Schultz, T.R.; Brady, S.G. Major evolutionary transitions in ant agriculture. Proc. Natl. Acad. Sci. USA 2008, 105, 5435–5440. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  108. Rakison, D.H.; Yermolayeva, Y. Infant categorization. WIRES Cogn. Sci. 2010, 1, 894–905. [Google Scholar] [CrossRef] [PubMed]
  109. Baillargeon, R.; Stavans, M.; Wu, D.; Gertner, Y.; Setoh, P.; Kittredge, A.K.; Bernard, A. Object individuation and physical reasoning in infancy: An integrative account. Lang. Learn. Dev. 2012, 8, 4–46. [Google Scholar] [CrossRef]
  110. Yan, D.; Lin, X. Shaping morphogen gradients by proteoglycans. Cold Spring Harb. Perspect. Biol. 2009, 1, a002493. [Google Scholar] [CrossRef] [PubMed]
  111. Clause, K.C.; Barker, T.H. Extracellular matrix signaling in morphogenesis and repair. Curr. Opin. Biotechnol. 2013, 24, 830–833. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Gogna, R.; Shee, K.; Moreno, E. Cell competition during growth and regeneration. Annu. Rev. Genet. 2015, 49, 697–718. [Google Scholar] [CrossRef] [PubMed]
  113. Madan, E.; Gogna, R.; Moreno, E. Cell competition in development: Information from flies and vertebrates. Curr. Opin. Cell Biol. 2018, 55, 150–157. [Google Scholar] [CrossRef] [PubMed]
  114. Martin, A. The representation of object concepts in the brain. Annu. Rev. Psychol. 2007, 58, 25–45. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Zimmer, H.D.; Ecker, U.K.H. Remembering perceptual features unequally bound in object and episodic tokens: Neural mechanisms and their electrophysiological correlates. Neurosci. Biobehav. Rev. 2010, 34, 1066–1079. [Google Scholar] [CrossRef]
  116. Keifer, M.; Pulvermüller, F. Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex 2012, 7, 805–825. [Google Scholar] [CrossRef]
  117. Clarke, A.; Tyler, L.K. Understanding what we see: How we derive meaning from vision. Trends Cogn. Sci. 2015, 19, 677–687. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  118. Fields, C. Visual re-identification of individual objects: A core problem for organisms and AI. Cogn. Process. 2016, 17, 1–13. [Google Scholar] [CrossRef]
  119. Yau, J.M.; Kim, S.S.; Thakur, P.H.; Bensmaia, S.J. Feeling form: The neural basis of haptic shape perception. J. Neurophysiol. 2016, 115, 631–642. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  120. Fields, C. Object Permanence. In Encyclopedia of Evolutionary Psychological Science; Shackelford, T.K., Weekes-Shackelford, V.A., Eds.; Springer: New York, NY, USA, 2017; Chapter 2373. [Google Scholar]
  121. Eichenbaum, H.; Yonelinas, A.R.; Ranganath, C. The medial temporal lobe and recognition memory. Annu. Rev. Neurosci. 2007, 30, 123–152. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  122. Fields, C. The very same thing: Extending the object token concept to incorporate causal constraints on individual identity. Adv. Cogn. Psychol. 2012, 8, 234–247. [Google Scholar] [CrossRef]
  123. Pitts, W.; McCulloch, W.S. How we know universals: The perception of auditory and visual forms. Bull. Math. Biophys. 1947, 9, 127–147. [Google Scholar] [CrossRef]
  124. Sasaki, Y.; Vuffel, W.; Knutsen, T.; Tyler, C.; Tootell, R. Symmetry activates extrastriate visual cortex in human and nonhuman primates. Proc. Natl. Acad. Sci. USA 2005, 102, 3159–3163. [Google Scholar] [CrossRef] [Green Version]
  125. Fields, C. How humans solve the frame problem. J. Expt. Theor. Artif. Intell. 2013, 25, 441–456. [Google Scholar] [CrossRef]
  126. Dietrich, E.; Fields, C. Equivalence of the Frame and Halting problems. Algorithms 2020, 13, 175. [Google Scholar] [CrossRef]
  127. Wigner, E.P. Remarks on the mind-body question. In The Scientist Speculates; Good, I.J., Ed.; Heinemann: London, UK, 1961; pp. 284–302. [Google Scholar]
  128. Schwartz, J.M.; Stapp, H.P.; Beauregard, M. Quantum physics in neuroscience and psychology: A neurophysical model of mind-brain interaction. Philos. Trans. R. Soc. B 2005, 360, 1309–1327. [Google Scholar] [CrossRef] [Green Version]
  129. Hameroff, S.; Penrose, R. Consciousness in the universe: A review of the ‘Orch OR’ theory. Phys. Life Rev. 2014, 11, 39–78. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  130. Fields, C. If physics is an information science, what is an observer? Information 2012, 3, 92–123. [Google Scholar] [CrossRef] [Green Version]
  131. Niehrs, C. On growth and form: A Cartesian coordinate system of Wnt and BMP signaling specifies bilaterian body axes. Development 2010, 137, 845–857. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  132. Chichili, G.R.; Rodgers, W. Cytoskeleton-membrane interactions in membrane raft structure. Cell. Mol. Life Sci. 2009, 66, 2319–2328. [Google Scholar] [CrossRef] [Green Version]
  133. Gidon, A.; Zolnik, T.A.; Fidzinski, P.; Bolduan, F.; Papoutsi, A.; Panaylota, P.; Holtkamp, M.; Vida, I.; Larkum, M.E. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 2020, 367, 83–87. [Google Scholar] [CrossRef]
  134. Penfield, W.; Boldrey, E. Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain 1937, 60, 389–443. [Google Scholar] [CrossRef]
  135. Gardner, J.L.; Merriam, E.P.; Movshon, J.A.; Heeger, D.J. Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. J. Neurosci. 2008, 28, 3988–3999. [Google Scholar] [CrossRef] [Green Version]
  136. Moser, E.I.; Kropff, E.; Moser, M.-B. Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci. 2008, 31, 31–89. [Google Scholar] [CrossRef] [Green Version]
  137. Abrams, N.E.; Primack, J.R. Cosmology and 21st century culture. Science 2001, 293, 1769–1770. [Google Scholar] [CrossRef] [Green Version]
  138. Johnson, C.H. Precise circadian clocks in prokaryotic cyanobacteria. Curr. Issues Mol. Biol. 2004, 6, 103–110. [Google Scholar]
  139. Doherty, C.J.; Kay, S.A. Circadian control of global gene expression patterns. Annu. Rev. Genet. 2010, 44, 419–444. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  140. Chakrabarti, S.; Michor, F. Circadian clock effects on cellular proliferation: Insights from theory and experiments. Curr. Opin. Cell Biol. 2020, 67, 17–26. [Google Scholar] [CrossRef] [PubMed]
  141. Kuchen, E.E.; Becker, N.B.; Claudino, N.; Höfer, T. Hidden long-range memories of growth and cycle speed correlate cell cycles in lineage trees. eLife 2020, 9, e51002. [Google Scholar] [CrossRef] [PubMed]
  142. Beck, C.W.; Izpisúa Belmonte, J.C.I.; Christen, B. Beyond early development: Xenopus as an emerging model for the study of regenerative mechanisms. Dev. Dyn. 2009, 238, 1226–1248. [Google Scholar] [CrossRef]
  143. Friston, K.; Rigoli, F.; Ognibene, D.; Mathys, C.; FitzGerald, T.; Pezzulo, G. Active inference and epistemic value. Cogn. Neurosci. 2015, 6, 187–214. [Google Scholar] [CrossRef]
  144. Godfrey-Smith, P. Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness; Farrar, Straus and Giroux: New York, NY, USA, 2016. [Google Scholar]
  145. Wang, D.; Jin, S.; Zou, X. Crosstalk between pathways enhances the controllability of signalling networks. IET Syst. Biol. 2016, 10, 2–9. [Google Scholar] [CrossRef]
  146. Brodskiy, P.A.; Zartmann, J.J. Calcium as a signal integrator in developing epithelial tissues. Phys. Biol. 2018, 15, 051001. [Google Scholar] [CrossRef]
  147. Niss, K.; Gomez-Casado, C.; Hjaltelin, J.X.; Joeris, T.; Agace, W.W.; Belling, K.G.; Brunak, S. Complete topological mapping of a cellular protein interactome reveals bow-tie motifs as ubiquitous connectors of protein complexes. Cell Rep. 2020, 31, 107763. [Google Scholar] [CrossRef]
  148. Dehaene, S.; Naccache, L. Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 2001, 79, 1–37. [Google Scholar] [CrossRef]
  149. Baars, B.J.; Franklin, S. How conscious experience and working memory interact. Trends Cogn. Sci. 2003, 7, 166–172. [Google Scholar] [CrossRef] [Green Version]
  150. Baars, B.J.; Franklin, S.; Ramsoy, T.Z. Global workspace dynamics: Cortical “binding and propagation” enables conscious contents. Front. Psychol. 2013, 4, 200. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  151. Dehaene, S.; Charles, L.; King, J.-R.; Marti, S. Toward a computational theory of conscious processing. Curr. Opin. Neurobiol. 2014, 25, 76–84. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  152. Mashour, G.A.; Roelfsema, P.; Changeux, J.-P.; Dehaene, S. Conscious processing and the Global Neuronal Workspace hypothesis. Neuron 2020, 105, 776–798. [Google Scholar] [CrossRef] [PubMed]
  153. Jacob, F.; Monod, J. Genetic regulatory mechanisms in the synthesis of proteins. J. Mol. Biol. 1961, 3, 318–356. [Google Scholar] [CrossRef]
  154. Dzhafarov, E.N.; Kujala, J.V.; Cervantes, V.H. Contextuality-by-default: A brief overview of concepts and terminology. In Lecture Notes in Computer Science 9525; Atmanspacher, H., Filk, T., Pothos, E., Eds.; Springer: Berlin, Germany, 2016; pp. 12–23. [Google Scholar]
  155. Dzhafarov, E.N.; Cervantes, V.H.; Kujala, J.V. Contextuality in canonical systems of random variables. Philos. Trans. R. Soc. A 2017, 375, 20160389. [Google Scholar] [CrossRef]
  156. Dzhafarov, E.N.; Kon, M. On universality of classical probability with contextually labeled random variables. J. Math. Psych. 2018, 85, 17–24. [Google Scholar] [CrossRef] [Green Version]
  157. Mermin, N.D. Hidden variables and the two theorems of John Bell. Rev. Mod. Phys. 1993, 65, 803–815. [Google Scholar] [CrossRef]
  158. Fields, C.; Glazebrook, J.F. Information flow in context-dependent hierarchical Bayesian inference. J. Expt. Theor. Artif. Intell. 2020. [Google Scholar] [CrossRef]
  159. Pietsch, T.W.; Grobecker, D.B. The compleat angler: Aggressive mimicry in an Antennariid anglerfish. Science 1978, 201, 369–370. [Google Scholar] [CrossRef]
  160. Sowa, J.F. (Ed.) Principles of Semantic Networks: Explorations in the Representation of Knowledge; Morgan Kauffman: San Mateo, CA, USA, 2014. [Google Scholar]
  161. Uddin, L.Q. Salience processing and insular cortical function and dysfunction. Nat. Rev. Neurosci. 2015, 16, 55–61. [Google Scholar] [CrossRef]
  162. Schaefer, H.M.; Ruxton, G.D. Signal diversity, sexual selection and speciation. Annu. Rev. Ecol. Evol. Syst. 2015, 46, 573–592. [Google Scholar] [CrossRef]
  163. Müller, G.B. Evo-devo: Extending the evolutionary synthesis. Nat. Rev. Genet. 2007, 8, 943–949. [Google Scholar] [CrossRef] [PubMed]
  164. Carroll, S.B. Evo-devo and an expanding evolutionary synthesis: A genetic theory of morphological evolution. Cell 2008, 134, 25–36. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  165. Sardet, C.; Paix, A.; Prodon, F.; Dru, P.; Chenevert, J. From oocyte to 16-cell stage: Cytoplasmic and cortical reorganizations that pattern the ascidian embryo. Dev. Dyn. 2007, 236, 1716–1731. [Google Scholar] [CrossRef] [PubMed]
  166. Durant, F.; Morokuma, J.; Fields, C.; Williams, K.; Adams, D.S.; Levin, M. Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophys. J. 2017, 112, 2231–2243. [Google Scholar] [CrossRef] [Green Version]
  167. Durant, F.; Bischof, J.; Fields, C.; Morokuma, J.; LaPalme, J.; Hoi, A.; Levin, M. The role of early bioelectric signals in the regeneration of planarian anterior-posterior polarity. Biophys. J. 2019, 116, 948–961. [Google Scholar] [CrossRef] [Green Version]
  168. Pietak, A.; Bischof, J.; LaPalme, J.; Morokuma, J.; Levin, M. Neural control of body-plan axis in regenerating planaria. PLoS Comput. Biol. 2019, 15, e1006904. [Google Scholar] [CrossRef]
  169. Armus, H.L.; Montgomery, A.R.; Jellison, J.L. Discrimination learning in paramecia (P. caudatum). Psychol. Rec. 2006, 56, 489–498. [Google Scholar] [CrossRef] [Green Version]
  170. Shirakawa, T.; Gunji, Y.-P.; Miyake, Y. An associative learning experiment using the plasmodium of Physarum polycephalum. Nano Commun. Net. 2011, 2, 99–105. [Google Scholar] [CrossRef]
  171. Karpiński, S.; Szechyńska-Hebda, M. Secret life of plants: From memory to intelligence. Plant Signal Behav. 2010, 5, 1391–1394. [Google Scholar] [CrossRef] [Green Version]
  172. Abramson, C.I.; Chicas-Mosier, A.M. Learning in plants: Lessons from Mimosa pudica. Front. Psychol. 2016, 7, 417. [Google Scholar] [CrossRef] [Green Version]
  173. Gagliano, M.; Vyazovskiy, V.V.; Borbély, A.A.; Grimonprez, M.; Depczynski, M. Learning by association in plants. Sci. Rep. 2016, 6, 38427. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  174. Lee, S.W.; O’Doherty, J.P.; Shimojo, S. Neural computations mediating one-shot learning in the human brain. PLoS Biol. 2015, 13, e1002137. [Google Scholar] [CrossRef] [PubMed]
  175. Major, G.; Larkum, M.E.; Schiller, J. Active properties of neocortical pyramidal neuron dendrites. Annu. Rev. Neurosci. 2013, 36, 1–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  176. Eyal, G.; Verhoog, M.B.; Testa-Silva, G.; Deitcher, Y.; Benavides-Piccione, R.; DeFelipe, J.; de Kock, C.P.J.; Mansvelder, H.D.; Segev, I. Human cortical pyramidal neurons: From spines to spikes via models. Front. Cell. Neurosci. 2018, 12, 181. [Google Scholar] [CrossRef] [PubMed]
  177. Day, J.J.; Sweatt, J.D. Cognitive neuroepigenetics: A role for epigenetic mechanisms in learning and memory. Neurobiol. Learn. Mem. 2011, 96, 2–12. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  178. Marshall, P.; Bredy, T.W. Cognitive neuroepigenetics: The next evolution in our understanding of the molecular mechanisms underlying learning and memory? NPJ Sci. Learn. 2016, 1, 16014. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  179. Schuman, C.D.; Potok, T.E.; Patton, R.M.; Birdwell, D.; Dean, M.E.; Rose, G.S.; Plank, J.S. A survey of neuromorphic computing and neural networks in hardware. arXiv 2017, arXiv:1705.06963v1. [Google Scholar]
  180. Foster, P.L. Adaptive mutation: Implications for evolution. BioEssays 2000, 22, 1067–1074. [Google Scholar] [CrossRef] [Green Version]
  181. Mitchell, A.; Romano, G.H.; Groisman, B.; Yona, A.; Dekel, E.; Kupiec, M.; Dahan, O.; Pilpel, Y. Adaptive prediction of environmental changes by microorganisms. Nature 2009, 460, 220–224. [Google Scholar] [CrossRef]
  182. Corballis, M.C. The evolution of language. Ann. N. Y. Acad. Sci. 2009, 1156, 19–43. [Google Scholar] [CrossRef] [PubMed]
  183. Bro-Jørgensen, J. Dynamics of multiple signalling systems: Animal communication in a world in flux. Trends Ecol. Evol. 2010, 25, 292–300. [Google Scholar] [CrossRef] [PubMed]
  184. Hebets, E.A.; Barron, A.B.; Balakrishnan, C.N.; Hauber, M.E.; Mason, P.H.; Hoke, K.L. A systems approach to animal communication. Proc. R. Soc. B 2016, 283, 20152889. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  185. Iwaniuk, A.N.; Lefebvre, L.; Wylie, D.R. The comparative approach and brain-behaviour relationships: A tool for understanding tool use. Can. J. Exp. Psychol. 2009, 63, 150–159. [Google Scholar] [CrossRef] [Green Version]
  186. Lefebvre, L. Brains, innovations, tools and cultural transmission in birds, non-human primates, and fossil hominins. Front. Hum. Neurosci. 2013, 7, 245. [Google Scholar] [CrossRef] [Green Version]
  187. McGrew, W.C. Is primate tool use special? Chimpanzee and New Caledonian crow compared. Philos. Trans. R. Soc. B 2013, 368, 20120422. [Google Scholar] [CrossRef] [Green Version]
  188. Navarrete, A.F.; Reader, S.M.; Street, S.E.; Whalen, A.; Laland, K.N. The coevolution of innovation and technical intelligence in primates. Philos. Trans. R. Soc. B 2016, 371, 20150186. [Google Scholar] [CrossRef] [Green Version]
  189. Fisher, S.E.; Scharff, C. FOXP2 as a molecular window into speech and language. Trends Genet. 2009, 25, 166–177. [Google Scholar] [CrossRef]
  190. Adolphs, R. The social brain: Neural basis of social knowledge. Annu. Rev. Psychol. 2009, 60, 693–716. [Google Scholar] [CrossRef] [Green Version]
  191. Blackiston, D.J.; Shomrat, T.; Levin, M. The stability of memories during brain remodeling: A perspective. Commun. Integr. Biol. 2015, 8, e1073424. [Google Scholar] [CrossRef] [Green Version]
  192. Henriques, G. The Tree of Knowledge system and the theoretical unification of psychology. Rev. Gen. Psychol. 2003, 7, 150–182. [Google Scholar] [CrossRef]
  193. Mercier, H.; Sperber, D. Why do humans reason? Arguments for an argumentative theory. Behav. Brain Sci. 2011, 34, 57–111. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  194. Trivers, R.L. The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life; Basic Books: New York, NY, USA, 2011. [Google Scholar]
  195. Cushman, F. Rationalization is rational. Behav. Brain Sci. 2020, 43, e28. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  196. Bargh, J.A.; Ferguson, M.J. Beyond behaviorism: On the automaticity of higher mental processes. Psychol. Bull. 2000, 126, 925–945. [Google Scholar] [CrossRef] [PubMed]
  197. Bargh, J.A.; Schwader, K.L.; Hailey, S.E.; Dyer, R.L.; Boothby, E.J. Automaticity in social-cognitive processes. Trends Cogn. Sci. 2012, 16, 593–605. [Google Scholar] [CrossRef] [PubMed]
  198. Csikszentmihályi, M. Flow: The Psychology of Optimal Experience; Harper and Row: New York, NY, USA, 1990. [Google Scholar]
  199. Melnikoff, D.E.; Bargh, J.A. The mythical number two. Trends Cogn. Sci. 2018, 22, 280–293. [Google Scholar] [CrossRef] [PubMed]
  200. Chater, N. The Mind is Flat. The Remarkable Shallowness of the Improvising Brain; Allen Lane: London, UK, 2018. [Google Scholar]
  201. Hoffman, D.D.; Singh, M.; Prakash, C. The interface theory of perception. Psychon. Bull. Rev. 2015, 22, 1480–1506. [Google Scholar] [CrossRef] [Green Version]
  202. Fields, C.; Glazebrook, J.F. Do Process-1 simulations generate the epistemic feelings that drive Process-2 decision-making? Cogn. Process. 2020. [Google Scholar] [CrossRef]
  203. Gallistel, C.R. The neurobiological bases for the computational theory of mind. In On Concepts, Modules, and Language; de Almeida, R.G., Gleitman, L., Eds.; Oxford University Press: New York, NY, USA, 2017; pp. 275–296. [Google Scholar]
  204. Chomsky, N. Review of B. F. Skinner, Verbal Behavior. Language 1959, 35, 26–58. [Google Scholar] [CrossRef]
  205. Martins, M.J.D.; Muršič, Z.; Oh, J.; Fitch, W.T. Representing visual recursion does not require verbal or motor resources. Cogn. Psychol. 2015, 77, 20–41. [Google Scholar] [CrossRef]
  206. Vicari, G.; Adenzato, M. Is recursion language-specific? Evidence of recursive mechanisms in the structure of intentional action. Conscious. Cogn. 2014, 26, 169–188. [Google Scholar] [CrossRef] [PubMed]
  207. Martins, M.J.D.; Blanco, R.; Sammler, D.; Villringer, A. Recursion in action: An fMRI study on the generation of new hierarchical levels in motor sequences. Hum. Brain Mapp. 2019, 40, 2623–2638. [Google Scholar] [CrossRef] [PubMed]
  208. Christiansen, M.H.; Chater, N. The language faculty that wasn’t: A usage-based account of natural language recursion. Front. Psychol. 2015, 6, 1182. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  209. Bubic, A.; von Cramon, D.Y.; Schubotz, R. Prediction, cognition and the brain. Front. Hum. Neurosci. 2010, 4, 25. [Google Scholar] [CrossRef] [Green Version]
  210. Shipp, S.; Adams, R.A.; Friston, K.J. Reflections on agranular architecture: Predictive coding in the motor cortex. Trends Neurosci. 2013, 36, 706–716. [Google Scholar] [CrossRef] [Green Version]
  211. Fields, C. Metaphorical motion in mathematical reasoning: Further evidence for pre-motor implementation of structure mapping in abstract domains. Cogn. Process. 2013, 14, 217–229. [Google Scholar] [CrossRef]
  212. Amalric, M.; Dehaene, S. Origins of the brain networks for advanced mathematics in expert mathematicians. Proc. Natl. Acad. Sci. USA 2016, 113, 4909–4917. [Google Scholar] [CrossRef] [Green Version]
  213. Fingelkurts, A.A.; Fingelkurts, A.A.; Kallio-Tamminen, T. Selfhood triumvirate: From phenomenology to brain activity and back again. Conscious. Cogn. 2020, 86, 103031. [Google Scholar]
  214. Metzinger, T. Being No One: The Self-Model Theory of Subjectivity; MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
  215. Graziano, M.S.A.; Webb, T.W. A mechanistic theory of consciousness. Int. J. Mach. Conscious. 2014, 6, 163–176. [Google Scholar] [CrossRef]
  216. Levin, M. Life, death, and self: Fundamental questions of primitive cognition viewed through the lens of body plasticity and synthetic organisms. Biochem. Biophys. Res. Commun. 2020. [Google Scholar] [CrossRef]
  217. Barabási, A.-L. Scale-free networks: A decade and beyond. Science 2009, 325, 412–413. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  218. Solé, R.V.; Macia, J. Expanding the landscape of biological computation with synthetic multicellular consortia. Nat. Comput. 2013, 12, 485–497. [Google Scholar] [CrossRef]
  219. Kamm, R.D.; Bashir, R. Creating living cellular machines. Ann. Biomed. Eng. 2014, 42, 445–459. [Google Scholar] [CrossRef] [PubMed]
  220. Kriegman, S.; Cheney, N.; Bongard, J. How morphological development can guide evolution. Sci. Rep. 2018, 8, 13934. [Google Scholar] [CrossRef]
  221. Kriegman, S.; Blackiston, D.; Levin, M.; Bongard, J. A scalable pipeline for designing reconfigurable organisms. Proc. Natl. Acad. Sci. USA 2020, 117, 1853–1859. [Google Scholar] [CrossRef] [Green Version]
  222. Way, M. What I cannot create, I do not understand. J. Cell Sci. 2017, 130, 2941–2942. [Google Scholar] [CrossRef] [Green Version]
  223. Fodor, J.A. Why paramecia don’t have mental representations. Midwest Stud. Philos. 1986, 10, 3–23. [Google Scholar] [CrossRef]

Fields, C.; Levin, M. How Do Living Systems Create Meaning? Philosophies 2020, 5, 36.