Article

Natural Morphological Computation as Foundation of Learning to Learn in Humans, Other Living Organisms, and Intelligent Machines

by Gordana Dodig-Crnkovic 1,2
1 Department of Computer Science and Engineering, Chalmers University of Technology and the University of Gothenburg, 40482 Gothenburg, Sweden
2 School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden
Philosophies 2020, 5(3), 17; https://doi.org/10.3390/philosophies5030017
Submission received: 3 July 2020 / Revised: 10 August 2020 / Accepted: 25 August 2020 / Published: 1 September 2020
(This article belongs to the Special Issue Contemporary Natural Philosophy and Philosophies - Part 2)

Abstract:
The emerging contemporary natural philosophy provides a common ground for an integrative view of natural, artificial, and human-social knowledge and practices. The learning process is central to acquiring, maintaining, and managing knowledge, both theoretical and practical. This paper explores the relationships between present advances in the understanding of learning in the sciences of the artificial (deep learning, robotics), the natural sciences (neuroscience, cognitive science, biology), and philosophy (philosophy of computing, philosophy of mind, natural philosophy). The question is what, at this stage of development, inspiration from nature (specifically its computational models, such as info-computation through morphological computing) can contribute to machine learning and artificial intelligence, and, on the other hand, how much models and experiments in machine learning and robotics can motivate, justify, and inform research in computational cognitive science, the neurosciences, and computing nature. We propose that one contribution can be an understanding of the mechanisms of ‘learning to learn’, as a step towards deep learning with a symbolic layer of computation/information processing in a framework linking connectionism with symbolism. As all natural systems possessing intelligence are cognitive systems, we describe the evolutionary arguments for the necessity of learning to learn for a system to reach human-level intelligence through evolution and development. The paper thus presents a contribution to the epistemology of the contemporary philosophy of nature.

1. Introduction

Artificial intelligence in the form of machine learning is currently making impressive progress, especially in the field of deep learning (DL) [1]. Deep learning algorithms have been inspired by the human brain, even though our knowledge of brain function, while steadily increasing, is still incomplete. Learning here is a two-way process: computing learns from neuroscience, while neuroscience adopts information-processing models, and this process iterates, as discussed in [2,3,4].
Deep learning is based on artificial neural networks resembling the neural networks of the brain, processing huge amounts of (labelled) data on high-performance GPUs (graphical processing units) with a parallel architecture. It is (typically supervised) machine learning from examples. It is static, based on the assumption that the world keeps behaving in a similar way and that the domain of application is close to the domain from which the training data were obtained. However impressive and successful, deep-learning intelligence has an Achilles heel: the lack of common-sense reasoning [5,6,7]. Its recognition of pictures is based on pixels, and small changes, even invisible to humans, can confuse a deep learning algorithm, leading to very surprising errors. Bengio [5] therefore points out that deep learning is missing the capability of out-of-distribution generalization and compositionality.
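As a toy illustration (not taken from the literature cited above), supervised “learning from examples” in the System 1 style can be sketched with a single logistic unit trained by gradient descent on labelled data; all data and parameters below are invented for the example. The learned boundary is only reliable for inputs drawn from a distribution close to the training one, which is the limitation discussed in the text.

```python
import numpy as np

# Toy sketch of supervised learning from labelled examples:
# a single logistic unit separates two synthetic 2-D clusters.
rng = np.random.default_rng(0)

# Labelled training data: two Gaussian clusters (class 0 and class 1).
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)),
               rng.normal(+1.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                       # gradient descent on cross-entropy
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Accuracy is high only on data from (near) the training distribution;
# nothing guarantees behavior on out-of-distribution inputs.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

The sketch shows pattern learning from data; it has no symbols, concepts, or compositional structure, which is precisely what the System 2 capabilities discussed below would add.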
According to Kahneman [8], human intelligence has two distinct mechanisms of learning: quick, bottom-up, from data to patterns (System 1), and slow, top-down, from language to objects (System 2); both had been recognized and analyzed earlier [8,9,10]. The starting point of old AI (GOFAI) was System 2: symbolic, language- and logic-based reasoning, planning, and decision making. However, without System 1 it had the problem of symbol grounding, as its mappings were always within the space of representations, from symbols to symbols, and never to the physical world itself.
Deep learning, by contrast, has grounding for its symbols in observed/collected/measured data, but it lacks the System 2 capabilities of generalization/symbol generation, symbol manipulation, and language that are necessary to reach human-level intelligence and the ability of learning and meta-learning, that is, learning to learn. The step from (big-)data-based System 1 to the manipulation of a few concepts, as in the high-level reasoning of System 2, is suggested in [5] to proceed via the concepts of agency, attention, and causality. The expectation is that the ‘agent perspective’ will help to put constraints on the learned representations and thus to encapsulate causal variables and affordances. Bengio proposes that “the agent perspective on representation learning should facilitate re-use of learned components in novel ways (..), enabling more powerful forms of compositional generalization, i.e., out-of-distribution generalization based on the hypothesis of localized (in time, space, and concept space) changes in the environment due to interventions of agents” [5].
This step from System 1 (the present state of DL) to System 2 (higher-level cognition) will open new and even more powerful possibilities for AI. It is not a development into the unknown: part of the System 2 side was proposed earlier by GOFAI, and it is addressed in new developments in cognitive science and neuroscience. In this article, we focus on the modelling of System 1 and its connections to System 2 within the framework of a computational model of cognition based on natural info-computation [3,4].
It should be acknowledged that the insight about the necessity of linking connectionism and symbolism is an old one; already in 1990, Minsky formulated the link in his “Logical vs. Analogical or Symbolic vs. Connectionist or Neat vs. Scruffy” [11]. For more recent reflections on the topic, see [12,13,14].
The article is structured as follows. After the introduction, learning about the world through agency is presented. Learning in the computing nature, including learning in the evolutionary perspective, is outlined in the subsequent section. We then address learning as computation in networks of agents, and info-computational learning by morphological computation. Learning to learn from raw data and up—agency from System 1 to System 2 is the last topic investigated. Conclusions and future work close the article.

2. Learning about the World through Agency

The framework for the discussion is the computing nature in the form of info-computationalism. It takes the world (Umwelt) for an agent to be information [15], with its dynamics seen as computation [16]. Information is observer-relative, and so is computation [17,18,19].
When discussing cognition as an embodied bioinformatic process, we use the notion of an agent, i.e., a system able to act on its own behalf, pursuing an intrinsic goal [17,20]. Agency in biological systems, in the sense used here, has been explored in [21,22], where arguments are provided that the world as it appears to an agent depends on the type of agent and the type of interaction through which the agent acquires information about the world [17]. Agents communicate by exchanging messages (information), which helps them to coordinate their actions based on the information they possess and share through social cognition.
For something to be information, there must exist an agent for whom it is established as a “difference that makes a difference” [23]. When we argue that the fabric of the world is informational [24], the question can be asked: who/what is the agent in that context? An agent can be seen as interacting with points of inhomogeneity (differences that make a difference/data as atoms of information), establishing connections between those data and the data that constitute the agent itself (a particle, a system). There are myriads of agents for which information in the world makes a difference, from elementary particles to molecules, cells, organisms, and societies; all of them interact and exchange information at different levels of scale, and these information dynamics are natural computation [25,26].
Our definition of agency and cognition as a property of all living organisms builds on Maturana and Varela [27,28] and Stewart [29]. The question relevant for AI is how artifactual agents should be built in order to possess cognition and eventually even consciousness. Is it possible at all, given that cognition in living organisms is a deeply biologically rooted process? Along with reasoning, language is considered a high-level cognitive activity that only humans are capable of. Increasing levels of cognition evolved in living organisms, starting from basic automatic behaviors found in organisms from bacteria [30,31,32,33] to insects, up to the increasingly complex behavior of complex multicellular life forms such as mammals [34]. Can AI “jump over” evolutionary steps in the development of cognition to reach, and even exceed, human intelligence?
While the idea that cognition is a biological process in all living organisms has been extensively discussed [27,28,29], it is not clear on what basis cognitive processes in all kinds of organisms would be accompanied by (some kind of, some degree of) consciousness. Consciousness is, according to Bengio [7], characteristic of System 2: “We closely associate conscious processing to Kahneman’s system 2 cognitive abilities [Kahneman, 2011].” Bengio adopts Baars’ global workspace theory of consciousness [35]. In the process of learning, and learning to learn, consciousness plays an important role through the process of attention, which selects only a tiny subset of information/data to be processed, instead of indiscriminately processing huge amounts of data, which is expensive in terms of response time and energy [7].
If we, in parallel with “minimal cognition” [36], search for “minimal consciousness” in an organism, what would that be? Opinions are divided on the point in evolution at which one can say that consciousness emerged. Some, such as Liljenström and Århem, suggest that only humans possess consciousness, while others are ready to recognize consciousness in animals with emotions [37,38]. From the info-computational point of view, it has been argued that cognitive agents with nervous systems are the step in evolution that first enabled consciousness, in the sense of an internal model with the ability to distinguish the “self” from the “other” and to provide a representation of “reality” for an agent based on that distinction [4,39].

3. Learning in the Computing Nature

For naturalists, nature is the only reality [40]. Nature is described through its structures, processes, and relationships, using a scientific approach [41,42]. Naturalism studies the evolution of the entire natural world, including the life and development of humans and humanity as a part of nature. Social and cultural phenomena are studied through their physical manifestations. An example of a contemporary naturalist approach is the research field of social cognition, with its network-based studies of social behaviors. Already Turing emphasized the social character of learning [43], which was later elaborated on by Minsky [44] and Dennett [45].
Computational naturalism (pancomputationalism, naturalist computationalism, computing nature) [46,47,48], see also [3,4], is the view that the entirety of nature is a huge network of computational processes which, according to physical laws, computes (dynamically develops) its own next state from the current one. Among the prominent representatives of this approach are Zuse, Fredkin, Wolfram, Chaitin, and Lloyd, who proposed different varieties of computational naturalism. According to the idea of computing nature, one can view the time development (dynamics) of physical states as information processing (natural computation). Such processes include self-assembly, self-organization, developmental processes, gene regulation networks, gene assembly, protein–protein interaction networks, biological transport networks, social computing, evolution, and similar processes of morphogenesis (creation of form). The idea of computing nature and the relationships between the two basic concepts of information and computation are explored in [17,18,19,25].
In the computing nature, cognition is a natural process, seen as a result of natural bio-chemical processes. All living organisms possess some degree of cognition; for the simplest ones, like bacteria, cognition consists in metabolism and (my addition) locomotion [17]. This “degree” is not meant as a continuous function, but as a qualitative characterization: cognitive capacities increase from the simplest to the most complex organisms. The process of interaction with the environment causes changes in the informational structures that correspond to the body of an agent and its control mechanisms, which define its future interactions with the world and its inner information processing [49]. The informational structures of an agent become semantic information (i.e., acquire explicit metacognitive meaning through System 2, which generates metacognition for an agent) only in highly intelligent agents capable of reasoning, a capacity we know some birds also possess.
Recently, empirical studies have revealed an unexpected richness of cognitive behaviors (perception, information processing, memory, decision making) in organisms as simple as bacteria [30,31,32,33]. Single bacteria are small, typically 0.5–5.0 micrometers in length, and interact only with their immediate environment. An individual bacterium lives too short a time to be able to memorize a significant amount of data. Biologically, bacteria are immortal at the level of the colony, as the two daughter bacteria from the cell division of a parent bacterium are considered two new individuals. Thus bacterial colonies, swarms, and films, which extend over a bigger space, can survive a longer time, and have longer memory, exhibit an unanticipated complexity of behaviors that can undoubtedly be characterized as cognition [50,51], see also [45]. Even more fascinating cases are simpler agents like viruses, on the border of the living, which are based on the simple principle that the most viable versions persist and multiply while others vanish [52,53]. Memory and learning are the key competences of living organisms [50], and in the simplest case, memory is based on the change of shape [54], which appears on different scales and levels of organization [55]. Fields and Levin add an evolutionary perspective to the characterization of memory and argue that the “genome is only one of several multi-generational biological memories”. Additionally, the cytoplasm and the cell membrane, which characterize all of life on an evolutionary timescale, preserve memory [56]. Because of the complex structure of the cell, biological memory cannot be understood at one particular scale; information is propagated and preserved in non-genomic cellular structures, which changes the current understanding of biological memory [55,56]. It also forms at different time scales [57].
Starting with bacteria and archaea [58], all organisms without nervous systems cognize, that is, perceive their environment, process information, learn, memorize, and communicate. As they are natural information processors, some, such as the slime mold (multinucleate or multicellular Amoebozoa), have been used as natural computers/information processors to compute shortest paths. Even plants cognize, in spite of often being thought of as living systems without cognitive capacities [59]. Plants have been found to possess memory (in their bodily structures, which change as a result of past events), the ability to learn (plasticity, the ability to adapt through morphodynamics), and the capacity to anticipate and direct their behavior accordingly. Plants are also argued to possess rudimentary forms of knowledge, according to [60] (p. 121), [61] (p. 7), and [34] (p. 61).
Consequently, in this article we take basic cognition to be the totality of processes of self-generation/self-organization, self-regulation, and self-maintenance that enables organisms to survive by processing information from the environment. The understanding of cognition as it appears in degrees of complexity in living nature can help us better understand the step between inanimate and animate matter, from the first autocatalytic chemical reactions to the first autopoietic proto-cells, as well as the evolution of life and learning.

Learning in the Evolutionary Perspective

A recent trend in design is learning from nature: biomimetics. Deep learning is one of the technologies developed within the biomimetic paradigm. In the case of intelligence, we still have a lot to learn from nature about how our own brains, intelligence, and learning function. One of the strategies is to start with learning in the simplest organisms, in order to uncover the basic mechanisms of the process. Evolution can be seen as a process of problem-solving [34]. “From the amoeba to Einstein, the growth of knowledge is always the same: We try to solve our problems, and to obtain, by a process of elimination something approaching adequacy in our tentative solutions” [62] (p. 261). All acquired knowledge, whether acquired in the process of genetic evolution or in the process of individual learning, consists (this is Popper’s central claim) in the modification “of some form of knowledge, or disposition, which was there previously, and in the last instance of inborn expectations” [62] (p. 71).
Popper’s theory of the growth of knowledge through trial-and-error, conjecture-based problem-solving shares its basic approach with evolutionary epistemology. According to Campbell [63], all knowledge processes can be seen as the “variation and selective retention process of evolutionary adaptation” [64]. Thagard [65] criticizes Popper, Campbell, Toulmin, and others who proposed Darwinian models of the growth of (scientific) knowledge. Evolutionary epistemology emphasizes the analogy between the development of biological species and scientific knowledge, based on variation, selection, and transmission. Thagard, on the other hand, holds that the differences are more important than the similarities, and that scientific knowledge is guided by “intentional, abductive theory construction in scientific discovery, the selection of theories according to general criteria, the achievement of progress by sustained application of criteria, and the transmission of selected theories in highly organized scientific communities.” Even though scientific knowledge is a specific, formal kind of knowledge, it is nevertheless knowledge.
This criticism of evolutionary epistemology addresses a specific understanding of evolution: Darwinism in the narrow sense. However, the contemporary extended evolutionary synthesis provides mechanisms beyond the blind variation of narrow Darwinism, and can accommodate learning, anticipation, and intentionality [66,67,68,69]. In a similar, broader evolutionary approach, Watson and Szathmáry ask “Can evolution learn?” in [70], and suggest that “evolution can learn in more sophisticated ways than previously realized”. Here, “A system exhibits learning if its performance at some task improves with experience.” They propose new theoretical approaches to the evolution of evolvability and the evolution of ecological organizations, among others. They refer to Turing, who made an algorithmic model of computation (the Turing machine) and established the connection between learning and intelligence through an algorithmic approach [71]. The relationship between learning and evolution is established through the notion of reinforcement learning, as “reusing behaviors that have been successful in the past (reinforcement learning) is intuitively similar to the way selection increases the proportion of fit phenotypes in a population”. Watson and Szathmáry’s paper lists a number of different types of learning, including diverse machine learning approaches, ending with the claim that there is a clear analogy between evolution and the process of learning, and that we can better understand evolution if we see it as learning.
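The analogy Watson and Szathmáry draw can be made concrete with a toy simulation (an illustration of the general idea, not their model): fitness-proportional reproduction increases the share of fit phenotypes in a population in much the same way that reinforcement increases the probability of rewarded behaviors. The phenotypes and fitness values below are invented for the example.

```python
import random

# Toy sketch: selection as "reinforcement" of fit phenotypes.
random.seed(1)
fitness = {"A": 0.9, "B": 0.5, "C": 0.1}   # hypothetical phenotype fitness

population = ["A", "B", "C"] * 100          # equal proportions to start
for _ in range(30):                         # generations of selection
    # Reproduce in proportion to fitness (the "reward"); population
    # size is held constant, mirroring a fixed behavioral repertoire.
    population = random.choices(
        population,
        weights=[fitness[p] for p in population],
        k=len(population))

share_A = population.count("A") / len(population)
print(round(share_A, 2))   # the fittest phenotype comes to dominate
```

As in reinforcement learning, past success (here, reproductive success) raises the frequency with which a variant is “reused” in the next round; the population-level statistics improve with experience even though no individual computes anything.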
In spite of mentioning Turing’s pioneering work on the topic of algorithmic learning, Watson and Szathmáry assume “incremental adaptation (e.g., from positive and/or negative reinforcement)”.
Critics of the evolutionary approach argue that such an incremental process cannot produce highly complex structures such as intelligent living organisms. Monkeys typing Shakespeare are often used as an illustration. As a counterargument, Chaitin [72] pointed out that the typing-monkeys argument does not take into account the physical laws of the universe, which dramatically limit what can be typed. Moreover, the universe is not a typewriter but a computer, so a monkey types random input into a computer. The computer interprets the strings as programs. Or, in the words of Gershenfeld: “Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers” [73].
Sloman argued that “many of the developments in biological evolution that are so far not understood, and in some cases have gone unnoticed, were concerned with changes in information processing. The same is true of changes in individual development and learning: They often produce new forms of information processing.” He addressed this phenomenon through computational ideas about morphogenesis and meta-morphogenesis [74]. His approach offers a new insight: that variation is algorithmic. To Sloman’s computational approach, I would add that the steps in variation are morphological computation, that is, physical computation, capable of randomly modifying genes and executing morphological programs which do not present smooth incremental changes, but considerable jumps in the properties of structures and processes. Morphological computation also acts through gene regulation, one more process that was unknown both to Darwin and to the proponents of evolution as the Modern Synthesis. Originally, genes were considered to code for specific proteins, and it was believed that all genes were active. Gene regulation involves mechanisms that can repress or induce the expression of a gene. According to Nature [75], “These include structural and chemical changes to the genetic material, binding of proteins to specific DNA elements to regulate transcription, or mechanisms that modulate translation of mRNA”.

4. Learning as Computation in Networks of Agents

In what follows, we focus on the info-computational framework of learning. The informational structures constituting the fabric of physical nature are networks of networks, which represent semantic relations between data for an agent [18]. Information is organized in levels or layers, from the quantum level to the atomic, molecular, cellular, organismic, social, and so on. Computation/information processing involves exchanges of data structures within informational networks, which are instructively represented by Carl Hewitt’s actor model [76]. Different types of computation emerge at different levels of organization in nature as exchanges of informational structures between the nodes (computational agents) in the network [17].
The research in computing nature/natural computing is characterized by bi-directional knowledge exchange through the interactions between computing and the natural sciences [54]. While the natural sciences are adopting the tools, methodologies, and ideas of information processing, computing is broadening the notion of computation, taking information processing found in nature as computation [2,77]. Based on that, Denning argues that computing today is a natural science, the fourth great domain of science [78,79]. Computation found in nature is a physical process, where nature computes with physical bodies as objects. Physical laws govern the processes of computation, which appear on many different levels of organization in nature.
With its layered computational architecture, natural computation provides a basis for a unified understanding of phenomena of embodied cognition, intelligence, and learning (knowledge generation), including meta-learning (learning to learn) [47,80]. Natural computation can be modelled as a process of exchange of information in a network of informational agents [76], i.e., entities capable of acting on their own behalf, which is Hewitt’s actor model applied to natural agents.
One sort of computation is found on the quantum-mechanical level, where the agents are elementary particles and the messages (information carriers) are exchanged by force carriers, while different types of computation can be found at other levels of organization in nature. In biology, information processing goes on in cells, tissues, organs, organisms, and eco-systems, with corresponding agents and message types. In biological computing, the message carriers are chunks of information such as molecules, and the computational nodes (agents) are molecules, cells, and organisms; in social computing, the messages are sentences and the agents are groups/societies [19].
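The picture of computation as message exchange between networked agents can be sketched minimally in code (an illustration in the spirit of Hewitt’s actor model, not his full formalism; the network and message below are invented). Each agent holds local informational state and changes it only when a message arrives, propagating the “difference that makes a difference” to its neighbours.

```python
from collections import deque

# Minimal sketch of agents as networked nodes exchanging messages.
class Agent:
    def __init__(self, name):
        self.name = name
        self.state = set()          # the agent's informational structure

    def receive(self, message, network):
        if message not in self.state:
            self.state.add(message)             # local state update
            for neighbour in network[self.name]:
                yield (neighbour, message)      # propagate to neighbours

# A tiny network: nodes and the channels along which messages can travel.
network = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
agents = {name: Agent(name) for name in network}

queue = deque([("a", "difference-1")])          # one initial piece of data
while queue:
    target, msg = queue.popleft()
    queue.extend(agents[target].receive(msg, network))

print({name: sorted(ag.state) for name, ag in agents.items()})
```

Computation here is nothing over and above the dynamics of message exchange: the global state of the network evolves purely through local interactions, as in the natural networks the text describes.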

5. Info-Computational Learning by Morphological Computation

The notion of computation in this framework refers to the most general concept of intrinsic computation, that is, spontaneous computation processes in nature [2,77], which is used as a basis of the designed computation found in computing machinery [81]. Intrinsic natural computation includes quantum computation [81,82], processes of self-organization, self-assembly, developmental processes, gene regulation networks, gene assembly, protein–protein interaction networks, biological transport networks, and similar. It is both analog (as found in dynamic systems) and digital. The majority of info-computational processes are sub-symbolic, and some of them are symbolic (like reasoning and languages).
Within the info-computational framework, or computing nature [18], computation at a given level of organization of information presents a realization/actualization of the laws that govern interactions between its constituent parts. At the basic level, computation is a manifestation of causation in the physical substrate [83]. At every next layer of organization, the set of rules governing the system switches to a new emergent regime. It remains to be established how exactly this process goes on in nature, and how emergent properties occur [84]. Research on natural computing is expected to uncover those mechanisms. In the words of Rozenberg and Kari: “(O)ur task is nothing less than to discover a new, broader, notion of computation, and to understand the world around us in terms of information processing” [2]. From research in complex dynamical systems, biology, neuroscience, cognitive science, networks, concurrency, etc., new insights essential for the info-computational view of nature are steadily coming. It should be mentioned here that computing nature with “bold” physical computation [85] is the maximal physicalist approach to computing. There are less radical approaches, such as that taken by Horsman, Stepney, and co-authors [86,87,88], known as Abstraction/Representation theory (AR theory), where “physical computing is the use of a physical system to predict the outcome of an abstract evolution”, and computation defines the relationship between physical systems and abstract concepts/representations. Unlike AR theory, info-computationalism also embraces computation without representation, in the sense of Brooks [89] or Pfeifer [90]. Even though it is already established that the original Turing model of computation is specific and represents a human performing calculation, as pointed out by Copeland [91], even Turing himself started exploring computation beyond the Turing Machine model.
Turing’s 1952 paper [92] may be considered a predecessor of natural computing. It addressed the process of morphogenesis by proposing a chemical model as the explanation of the development of biological patterns such as the spots and stripes on animal skin. Turing did not claim that a physical system producing patterns actually performed computation. From the perspective of computing nature, we can now argue that morphogenesis is a process of morphological computation. The informational structure (as a representation of the embodied physical structure) presents a program that governs the computational process [93], which in its turn changes that original informational structure following/implementing/realizing physical laws.
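The kind of chemical model Turing proposed can be sketched numerically (a minimal 1-D reaction-diffusion simulation; the parameter values are illustrative Gray-Scott-style constants, not Turing’s own): two substances that diffuse at different rates and react locally can turn a near-uniform field into spatial structure, which is morphogenesis viewed as computation on physical states.

```python
import numpy as np

# Minimal 1-D reaction-diffusion sketch in the spirit of Turing (1952).
n, steps = 200, 2000
Du, Dv, F, k, dt = 0.4, 0.2, 0.055, 0.062, 1.0   # illustrative constants

u = np.ones(n)                 # substrate concentration
v = np.zeros(n)                # activator concentration
u[90:110], v[90:110] = 0.5, 0.25   # local perturbation seeds the pattern

def lap(a):
    # Discrete Laplacian on a periodic 1-D domain (diffusion term).
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v                               # local reaction
    u = u + dt * (Du * lap(u) - uvv + F * (1 - u))
    v = v + dt * (Dv * lap(v) + uvv - (F + k) * v)

# The concentration field is no longer uniform: spatial "morphology"
# has emerged from purely local physical/chemical interactions.
print(float(v.min()), float(v.max()))
```

Nothing in the update rule mentions patterns; the structure is an emergent product of local dynamics, which is the sense in which morphogenesis can be read as morphological computation.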
Morphology is the central idea in our understanding of the connection between computation and information. Morphological/morphogenetic computing on an informational structure leads to new informational structures via processes of self-organization of information. Evolution itself is a process of morphological computation on a long-term scale. It is also important to take into account the second-order process of morphogenesis of morphogenesis (meta-morphogenesis), as done by Sloman [74].
An idea closely related to natural computing is Valiant’s [94] view of evolution by “ecorithms”: learning algorithms that perform “probably approximately correct” (PAC) computation. Unlike the classical model of the Turing machine, “ecorithmic” computation does not give perfect results, but results good enough (for an agent). That is the case for natural computing in biological agents, who always act under resource constraints, especially limitations of time, energy, and material, unlike the Turing machine model of computation, which by definition operates with unlimited resources. An older term for this idea, due to Simon, is “satisficing” [95] (p. 129): “Evidently, organisms adapt well enough to ‘satisfice’; they do not, in general, ‘optimize’”.
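Simon’s contrast between satisficing and optimizing can be sketched as a toy search problem (an illustration of the idea, not Valiant’s PAC formalism; the options, threshold, and cost measure are invented): a resource-bounded agent stops at the first option that is good enough, instead of paying the full cost of finding the optimum.

```python
import random

# Toy sketch: satisficing vs. optimizing under resource constraints.
random.seed(0)
options = [random.random() for _ in range(10_000)]   # qualities in [0, 1)

def satisfice(options, threshold=0.95):
    # Stop at the first "good enough" option; cost = options examined.
    for examined, quality in enumerate(options, start=1):
        if quality >= threshold:
            return quality, examined
    return max(options), len(options)   # fall back to exhaustive search

def optimize(options):
    # Guarantee the best option; cost = every option examined.
    return max(options), len(options)

best, cost_opt = optimize(options)
good, cost_sat = satisfice(options)
print(cost_sat, "<<", cost_opt)   # satisficing examines far fewer options
```

The satisficer accepts a slightly worse result in exchange for a drastically smaller search cost, which is exactly the trade-off biological agents face under time, energy, and material limitations.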

6. Learning to Learn from Raw Data and up—Agency from System 1 to System 2

Cognition is a result of processes of morphological computation on the informational structures of a cognitive agent in interaction with the physical world, with processes going on at both sub-symbolic and symbolic levels [4]. This morphological computation establishes connections between an agent’s body, its nervous system (control), and its environment [49]. Through embodied interaction with the informational structures of the environment, via sensory-motor coordination, information structures are induced (stimulated, produced) in the sensory data of a cognitive agent, thus establishing perception, categorization, and learning. Those processes result in constant updates of memory and other structures that support behavior, particularly anticipation. Embodied and correspondingly induced informational structures (in Sloman’s sense of a virtual machine) [96] are the basis of all cognitive activities, including consciousness and language as a means of maintaining “reality”, the representation of the world in the agent.
From the simplest cognizing agents, such as bacteria, to complex biological organisms with nervous systems and brains, the basic informational structures undergo transformations through morphological computation as developmental and evolutionary form-generation: morphogenesis. Living organisms, as complex agents, inherit bodily structures resulting from the long evolutionary development of their species. Those structures are the embodied memory of the evolutionary past [54]. They present the means for agents to interact with the world, acquire new information that induces embodied memories, learn new patterns of behavior, and learn/construct new knowledge. By Hebbian learning in the brain (where neurons that fire together wire together, and habits increase the probability of firing), the world shapes a human’s (or an animal’s) informational structures. Neural networks that “self-organize stable pattern recognition code in real-time in response to arbitrary sequences of input patterns” are an illustrative example [97].
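The Hebbian rule mentioned above can be written down in a few lines (a deliberately minimal sketch, not a model of any specific brain circuit; the two-input setup and learning rate are invented for the example): a connection weight grows whenever pre- and postsynaptic activity coincide, so correlations in the world literally carve themselves into the agent’s informational structure.

```python
import numpy as np

# Minimal Hebbian sketch: "neurons that fire together wire together".
rng = np.random.default_rng(0)
eta = 0.1                 # learning rate
w = np.zeros(2)           # weights from two input neurons

# Input neuron 0 drives the postsynaptic cell; neuron 1 is uncorrelated noise.
for _ in range(200):
    x0 = rng.integers(0, 2)            # presynaptic activity, neuron 0
    x1 = rng.integers(0, 2)            # presynaptic activity, neuron 1
    y = x0                             # postsynaptic firing follows neuron 0
    w += eta * np.array([x0, x1]) * y  # Hebbian update: dw = eta * x * y

print(w)   # the correlated connection outgrows the uncorrelated one
```

No teacher or error signal is involved; the statistics of co-activation alone strengthen the correlated pathway, which is the sense in which the world shapes an agent’s embodied memory.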
On the fundamental level of the quantum mechanical substrate, information processes represent actions of the laws of physics. Physicists are already working on reformulating physics in terms of information [98,99,100,101,102,103]. This development can be related to Wheeler’s idea of “it from bit” [104] and von Weizsäcker’s ur-alternatives [105].
In the computing nature approach, nature consists of physical structures that form levels of organization, on which computational processes develop. It has been argued that at the lower levels of organization, finite automata or Turing machines might be an adequate model of computation, while in the case of human cognition at the level of the whole brain, non-Turing computation is necessary; see Ehresmann [106] and Ghosh et al. [107]. Symbols at the higher levels of abstraction (System 2) are related to several possible sub-symbolic realizations, which they generalize, as Ehresmann’s models show. Zenil et al.’s work on causality via algorithmic generative models, which “decompose an observation into its most likely algorithmic generative models” [108], presents one of the recent attempts to approach causality computationally/algorithmically. Algorithmic computation, based on symbol manipulation, is a very important part of the computational models defined by Turing. The connection to the sub-symbolic level is made through algorithmic information theory.
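To make the lower-level model of computation mentioned above concrete, a finite automaton is fully specified by a set of states, an alphabet, a transition function, a start state, and accepting states. The following is a minimal sketch (the state names and the example language are my own assumptions, chosen only for illustration) that recognizes binary strings containing an even number of 1s.

```python
# Toy deterministic finite automaton: accepts binary strings with an
# even number of 1s. A DFA is just (states, alphabet, transitions,
# start state, accepting states) -- no memory beyond the current state.
EVEN = "even"
ODD = "odd"
TRANSITIONS = {
    (EVEN, "0"): EVEN, (EVEN, "1"): ODD,
    (ODD, "0"): ODD,   (ODD, "1"): EVEN,
}

def accepts(word):
    """Run the automaton over `word`; accept iff it ends in the EVEN state."""
    state = EVEN
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == EVEN

print(accepts("1001"))  # True: two 1s
print(accepts("10"))    # False: one 1
```

The contrast drawn in the text is that such fixed-table, state-bounded devices may suffice at lower levels of organization, whereas whole-brain cognition is argued to exceed this model.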
Apart from the Handbook of Natural Computing [77], which presents concrete models of natural computation, interesting work on computational modelling of biochemistry and reaction networks has been done by Cardelli [109,110,111,112], including the study of morphisms of reaction networks that link structure to function. On the side of cognitive computing, Fresco addresses physical computation and its role in cognition [113].
Principles of morphological computing and data self-organization from biology have been applied in robotics as well. In recent years, morphological computing has emerged as a new idea in robotics; see [3,4] and references therein. Initially, robotics treated the body and its control separately: the body as a machine, and its control as a program. Meanwhile, it has become apparent that embodiment itself is fundamental for cognition, generation of behavior, intelligence, and learning. Embodiment is central because cognition arises from the interaction of brain, body, and environment [90]. Agents’ behavior develops through embodied interaction with the environment, in particular through sensory-motor coordination, when information structure is induced in the sensory data, thus leading to perception, learning, and categorization [48]. Morphological computing has also been applied in soft robotics, self-assembling systems, molecular robotics, embodied robotics, and more. Even though the use of morphological computing in robotics differs slightly from that in computing nature, there are common grounds and possibilities to learn from each other at the multidisciplinary level. The same goes for the research being done in the fields of cognitive informatics and cognitive computing. There are also important connections to computational mechanics, algorithmic information dynamics (a probabilistic framework used for causal analysis), and neuro-symbolic computation, which combines symbolic and neural processing, all of which are in different ways relevant to the topic. Those connections remain to be explored in future work.

7. Conclusions and Future Work

The info-computational approach, developed by the author, with natural morphological computation as its basis, is used to approach learning and learning to learn in humans, other living organisms, and intelligent machines. This paper is a contribution to the epistemology of the philosophy of nature, proposing a new perspective on the learning process, both in artificial information processing systems such as robots and AI systems, and in natural information processing systems such as living organisms.
Morphological computation is proposed as a mechanism of learning and meta-learning, necessary for connecting the pre-symbolic (pre-conscious) with the symbolic (conscious) information processing. In the framework of info-computational nature, morphological computation is information (re)structuring through computational processes which follow (implement) physical laws. It is grounded in the notion of agency, with causality represented by morphological computation.
Morphology is the central idea in understanding the connection between computation (morphological/morphogenetic) and information. Morphology refers to form, shape, and structure. Materials represent morphology at the underlying level of organization: for arrangements of molecular and atomic structures, the “material” at the level below consists of protons, neutrons, and electrons.
Morphological computation can be represented as information communication between agents/actors of the Hewitt actor model, distributed in space, where computational devices communicate asynchronously and the entire computation is generally not in any well-defined state [3]. Unlike Turing computation, which is a mathematical–logical model, Hewitt computation is a physical model. For morphological computing as information (re)structuring through computational processes which follow (implement) physical laws, Hewitt computation provides a consistent formalization. At the basic level, morphological computation is natural computation in which physical objects perform computation. Symbol manipulation in this case is physical object manipulation, in the sense of Brooks’ dictum that “the world is its own best model”. This becomes relevant in robotics and deep learning systems that manage the direct behavior of an agent in the physical world.
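The defining features of the actor model named above can be sketched in a few lines: each actor owns a private mailbox and interacts with others only by asynchronous message passing, so no global state of the computation is ever well-defined at any instant. This is a minimal illustrative sketch in Python (the class and message names are my own assumptions; it is not Hewitt's formalism, nor an industrial actor framework such as Erlang or Akka).

```python
# Minimal actor-model sketch: an actor = private mailbox + processing thread.
# Senders never wait, and messages "in flight" mean the overall computation
# has no single well-defined global state.
import queue
import threading

class CounterActor:
    """Toy actor that sums incoming numbers; the message 'stop' ends its loop."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def send(self, message):
        """Asynchronous send: enqueue and return immediately."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()  # process one message at a time
            if message == "stop":
                return
            self.total += message

actor = CounterActor()
for n in (1, 2, 3):
    actor.send(n)        # sender proceeds without waiting for delivery
actor.send("stop")
actor.thread.join()      # wait for the actor to drain its mailbox
print(actor.total)       # 6
```

Note that, true to the model, the only synchronization point is the final `join`; between sends, the actor's state depends on how far it has processed its mailbox.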
In morphological computation, cognition is the restructuring of an agent through its interaction with the world, so all living organisms possess some degree of cognition. As a result of evolution, increasingly complex living organisms arise from simpler ones that are able to survive and adapt to their environment. This means they are able to register inputs (data) from the environment, structure those into information, and, in more developed organisms, into knowledge. The evolutionary advantage of structured, component-based approaches is improved response time and efficiency of an organism’s cognitive processes, which drives the development from organisms learning at the System 1 level to those that acquire System 2 capabilities on top of it. In more complex cognitive agents, knowledge is built not only upon reaction to input information, but also on internal information processing with intentional choices, dependent on value systems stored and organized in the agents’ memory.
Knowledge generation places information and computation (communication) in focus, as information and its processing are the essential structural and dynamic elements that characterize the structuring of input data (data → information → knowledge → metaknowledge) by an interactive computational process going on in the agent during its adaptive interplay with the environment.
In nature, through the processes of evolution and development, living systems learn to survive and thrive in their environment. Interactions present forms of reinforcement learning or Hebbian learning that make previously successful strategies preferred in the future [70]. This happens at a variety of levels of organization. On the meta-level, meta-morphological computing (as a virtual machine in Sloman’s sense) [96] governs learning to learn.
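The idea that interactions make previously successful strategies preferred can be rendered as a toy reinforcement rule (my own illustrative assumption, not a model from [70]): strategies are chosen with probability proportional to a weight, and a rewarded strategy has its weight increased, so success shapes future choice.

```python
# Toy reinforcement sketch: strategies that succeed become more likely
# to be chosen again, so past interactions bias future behavior.
import random

weights = {"strategy_a": 1.0, "strategy_b": 1.0}

def choose(rng):
    """Pick a strategy with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng.random() * total
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name  # guard against floating-point leftovers

def reinforce(name, reward=0.5):
    """A successful outcome strengthens the chosen strategy."""
    weights[name] += reward

rng = random.Random(0)  # seeded for reproducibility
for _ in range(100):
    picked = choose(rng)
    if picked == "strategy_a":   # pretend only strategy_a is ever rewarded
        reinforce(picked)

# After repeated interactions, strategy_a is strongly preferred.
print(weights["strategy_a"] > weights["strategy_b"])  # True
```

The proportional-selection rule and the fixed reward are simplifying assumptions; the point is only the feedback loop in which selection probability and past success reinforce each other.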
In the case of human learning, the brain as a network of computational agents processes information obtained through embodied communication with the environment, as well as internal information from the body. Consciousness is a process of integration of information in the brain [35]. The brain receives a huge amount of data/information that would be overwhelming to handle in real time, so it uses the mechanism of attention to focus on a specific subset of information, typically regarding agent-based processes in the world. There, changes in the scene are the consequence of the agent’s interactions, and they are the unfolding of physical processes of morphological computations. Causality, or rather the stable correlations between structures and processes in the world (from an agent’s perspective), follows from what humans learn/memorize, as these get organized internally through the Hebbian principle that neurons that fire together, wire together.
Sloman, who developed the theory of Meta-Morphogenesis [74], started with the idea that changes in the individual development and learning of an agent produce new forms of information processing. His approach offers a new insight: that variation is algorithmic. The interplay between structure and process is essential for learning, as past experiences stored in structures affect the possibility of future processes and strategies of learning and learning to learn. To Sloman’s morphogenetic approach, I would propose to add that the steps in variation are results of morphological computation, that is, physical computation, capable of, e.g., modifying genes and executing morphological programs which do not present smooth incremental changes but jumps in the properties of structures and processes. Morphological computation also acts through gene regulation, one more process that was unknown both to Darwin and to the proponents of evolution as the Modern Synthesis.
Since contemporary deep-learning-centered AI (dealing with human-level cognition and above) is gradually developing from its present System 1 (connectionist, sub-symbolic) coverage towards System 2 (symbolic), with agency, causality, consciousness, and attention as mechanisms of learning and meta-learning [5,114], it is searching for mechanisms of transition between the two systems. As an inspiration for technology development, the human brain is of interest as the center of learning in humans: self-organized, resilient, fault-tolerant, plastic, computationally powerful, and energetically efficient. In its development, as in the past, deep learning is inspired by nature, assimilating ideas from neuroscience, cognitive science, biology, and more. The AI approach to understanding, via decomposition and construction, is close to other computational models of nature in that it seeks testable and applicable models based on data and information processing. Bengio’s proposal of an agent-based perspective [5], necessary to proceed from System 1 to System 2 learning, can be related to the model of learning based on morphological computing.
For the future, more interdisciplinary/crossdisciplinary/transdisciplinary work remains to be done as a way to increase understanding of the connections between low-level and high-level cognitive processes, learning, and meta-learning. It will also be instructive to find relations between (levels or degrees of) cognition and consciousness as mechanisms helping to reduce the number of variables that are manipulated by an agent for the purposes of perception, reasoning, decision-making, planning, acting/agency, and learning.
The goals of artificial intelligence, as well as robotics, differ from those of computing nature and morphological computing. AI builds solutions for practical problems, and in doing so it typically focuses on the highest possible level of intelligence, even though, among the AI fields inspired by computing nature, there is developmental robotics, which has a more explorative character.
The priority of info-computational naturalism is understanding and connecting knowledge about nature, while much current technology searches for inspiration in nature in pursuit of new technological solutions. The paths of the two are meeting, and a mutual exchange of ideas is beneficial for both sides. Specialist sciences and philosophies also need close communication and exchange of ideas. Learning and meta-learning within computing nature is a topic of such central importance that it calls for more knowledge from a variety of fields. This paper is not only a presentation of what is already known, but also an attempt to indicate how much more remains to be done.

Funding

This research is funded by the Swedish Research Council, VR grant MORCOM@COG.

Acknowledgments

The author would like to thank anonymous reviewers for very helpful, constructive, and instructive review comments.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study, in the writing of the manuscript, or in the decision to publish the results.

References

  1. Lecun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  2. Rozenberg, G.; Kari, L. The Many Facets of Natural Computing. Commun. ACM 2008, 51, 72–83. [Google Scholar]
  3. Dodig-Crnkovic, G. Nature as a network of morphological infocomputational processes for cognitive agents. Eur. Phys. J. Spec. Top. 2017. [Google Scholar] [CrossRef] [Green Version]
  4. Dodig-Crnkovic, G. Cognition as Embodied Morphological Computation. In Philosophy and Theory of Artificial Intelligence; Studies in Applied Philosophy, Epistemology and Rational Ethics; Springer: Cham, Switzerland, 2018; Volume 44, pp. 19–23. [Google Scholar]
  5. Bengio, Y. From System 1 Deep Learning to System 2 Deep Learning (NeurIPS 2019). Available online: https://www.youtube.com/watch?v=T3sxeTgT4qc (accessed on 24 June 2020).
  6. Bengio, Y. Scaling up deep learning. In Proceedings of the KDD ’14: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; ACM: New York, NY, USA, 2014; p. 1966. [Google Scholar]
  7. Bengio, Y. The Consciousness Prior. arXiv 2019, arXiv:1709.08568v2. [Google Scholar]
  8. Kahneman, D. Thinking, Fast and Slow; Farrar, Straus and Giroux: New York, NY, USA, 2011; ISBN 9780374275631. [Google Scholar]
  9. Clark, A. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing; MIT Press: Cambridge, MA, USA, 1989; ISBN 978-0262530958. [Google Scholar]
  10. Scellier, B.; Bengio, Y. Towards a Biologically Plausible Backprop. arXiv 2016, arXiv:1602.05179, 1–17. [Google Scholar]
  11. Minsky, M. Logical vs. Analogical or Symbolic vs. Connectionist or Neat vs. Scruffy. In Artificial Intelligence at MIT, Expanding Frontiers; Winston, P.H., Ed.; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
  12. Dinsmore, J. The Symbolic and Connectionist Paradigms; Psychology Press: New York, NY, USA; London, UK, 2014; ISBN 978-0805810806. [Google Scholar]
  13. Wang, J. Symbolism vs. Connectionism: A Closing Gap in Artificial Intelligence. Available online: http://wangjieshu.com/2017/12/23/symbol-vs-connectionism-a-closing-gap-in-artificial-intelligence/ (accessed on 28 June 2020).
  14. Garcez, A.D.A.; Besold, T.R.; De Raedt, L.; Foldiak, P.; Hitzler, P.; Icard, T.; Kiihnberger, K.U.; Lamb, L.C.; Miikkulainen, R.; Silver, D.L. Neural-symbolic learning and reasoning: Contributions and challenges. In Proceedings of the AAAI Spring Symposium-Technical Report, Stanford, CA, USA, 23–25 March 2015; Dagstuhl Seminar 14381. Dagstuhl Publishing: Dagstuhl, Germany, 2015. [Google Scholar]
  15. Floridi, L. Informational realism. In Proceedings of the Selected Papers from Conference on Computers and Philosophy-Volume 37 (CRPIT ’03); Weckert, J., Al-Saggaf, Y., Eds.; Australian Computer Society, Inc.: Darlinghurst, Australia, 2003; pp. 7–12. [Google Scholar]
  16. Dodig-Crnkovic, G. Dynamics of Information as Natural Computation. Information 2011, 2, 460–477. [Google Scholar] [CrossRef] [Green Version]
  17. Dodig-Crnkovic, G. Information, Computation, Cognition. Agency-Based Hierarchies of Levels. In Fundamental Issues of Artificial Intelligence. Synthese Library, (Studies in Epistemology, Logic, Methodology, and Philosophy of Science); Müller, V., Ed.; Springer International Publishing: Cham, Switzerland, 2016; pp. 141–159. ISBN 9783319264851. Volume 376. [Google Scholar]
  18. Dodig-Crnkovic, G.; Giovagnoli, R. COMPUTING NATURE. Turing Centenary Perspective; Dodig-Crnkovic, G., Giovagnoli, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7, ISBN 978-3-642-37224-7. [Google Scholar]
  19. Dodig-Crnkovic, G. Physical computation as dynamics of form that glues everything together. Information 2012, 3, 204–218. [Google Scholar] [CrossRef] [Green Version]
  20. Froese, T.; Ziemke, T. Enactive Artificial Intelligence: Investigating the systemic organization of life and mind. Artif. Intell. 2009, 173, 466–500. [Google Scholar] [CrossRef] [Green Version]
  21. Kauffman, S. The Origins of Order: Self-Organization and Selection in Evolution; Oxford University Press: Oxford, UK, 1993; ISBN 978-0195079517. [Google Scholar]
  22. Deacon, T. Incomplete Nature. How Mind Emerged from Matter; W. W. Norton & Company: New York, NY, USA, 2011; ISBN 978-0-393-04991-6. [Google Scholar]
  23. Bateson, G. Steps to an Ecology of Mind; University of Chicago Press: Chicago, IL, USA, 1973; ISBN 9780226039053. [Google Scholar]
  24. Floridi, L. A defense of informational structural realism. Synthese 2008, 161, 219–253. [Google Scholar] [CrossRef] [Green Version]
  25. Chaitin, G. Building the World Out of Information and Computation: Is God a Programmer, Not a Mathematician? In Exploring the Foundations of Science, Thought and Reality; Wuppuluri, S., Doria, F.A., Eds.; Springer International Publishing: Cham, Switzerland, 2018. [Google Scholar]
  26. Dodig-Crnkovic, G. Shifting the Paradigm of Philosophy of Science: Philosophy of Information and a New Renaissance. Minds Mach. 2003. [Google Scholar] [CrossRef]
  27. Maturana, H. Biology of Cognition; Defense Technical Information Center: Urbana, IL, USA, 1970. [Google Scholar]
  28. Maturana, H.; Varela, F. Autopoiesis and cognition: The Realization of the Living; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1980; ISBN 9789027710154. [Google Scholar]
  29. Stewart, J. Cognition = life: Implications for higher-level cognition. Behav. Process. 1996, 35, 311–326. [Google Scholar] [CrossRef]
  30. Ben-Jacob, E. Bacterial Complexity: More Is Different on All Levels. In Systems Biology-The Challenge of Complexity; Nakanishi, S., Kageyama, R., Watanabe, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 25–35. [Google Scholar]
  31. Ben-Jacob, E. Learning from Bacteria about Natural Information Processing. Ann. N. Y. Acad. Sci. 2009, 1178, 78–90. [Google Scholar] [CrossRef] [PubMed]
  32. Lyon, P. The cognitive cell: Bacterial behavior reconsidered. Front. Microbiol. 2015, 6, 264. [Google Scholar] [CrossRef] [PubMed]
  33. Marijuán, P.C.; Navarro, J.; del Moral, R. On prokaryotic intelligence: Strategies for sensing the environment. BioSystems 2010. [Google Scholar] [CrossRef]
  34. Popper, K. All Life Is Problem Solving; Routledge: London, UK, 1999; ISBN 978-0415249928. [Google Scholar]
  35. Baars, B.J. Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog. Brain Res. 2005, 150, 45–53. [Google Scholar] [CrossRef]
  36. Van Duijn, M.; Keijzer, F.; Franken, D. Principles of Minimal Cognition: Casting Cognition as Sensorimotor Coordination. Adapt. Behav. 2006, 14, 157–170. [Google Scholar] [CrossRef] [Green Version]
  37. Århem, P.; Liljenström, H. On the coevolution of cognition and consciousness. J. Theor. Biol. 1997. [Google Scholar] [CrossRef]
  38. Liljenström, H.; Århem, P. Consciousness Transitions: Phylogenetic, Ontogenetic and Physiological Aspects; Elsevier: Amsterdam, The Netherlands, 2011; ISBN 978-0-444-52977-0. [Google Scholar]
  39. Dodig-Crnkovic, G.; von Haugwitz, R. Reality Construction in Cognitive Agents through Processes of Info-Computation. In Representation and Reality in Humans, Other Living Organisms and Intelligent Machines; Dodig-Crnkovic, G., Giovagnoli, R., Eds.; Springer International Publishing: Basel, Switzerland, 2017; pp. 211–235. ISBN 978-3-319-43782-8. [Google Scholar]
  40. Putnam, H. Mathematics, Matter and Method; Cambridge University Press: Cambridge, MA, USA, 1975. [Google Scholar]
  41. Dodig-Crnkovic, G.; Schroeder, M. Contemporary Natural Philosophy and Philosophies. Philosophies 2018, 3, 42. [Google Scholar] [CrossRef] [Green Version]
  42. Dodig-Crnkovic, G.; Schroeder, M. Contemporary Natural Philosophy and Philosophies-Part 1; MDPI AG: Basel, Switzerland, 2019; ISBN 978-3-03897-822-0. [Google Scholar]
  43. Edmonds, B.; Gershenson, C. Learning, Social Intelligence and the Turing Test. In How the World Computes. CiE 2012. Lecture Notes in Computer Science; Cooper, S.B., Dawar, A., Löwe, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7318. [Google Scholar]
  44. Minsky, M. The Society of Mind; Simon and Schuster: New York, NY, USA, 1986; ISBN 0-671-60740-5. [Google Scholar]
  45. Dennett, D. From Bacteria to Bach and Back: The Evolution of Minds; W. W. Norton & Company: New York City, NY, USA, 2017; ISBN 978-0-393-24207-2. [Google Scholar]
  46. Dodig-Crnkovic, G. Investigations into Information Semantics and Ethics of Computing; Mälardalen University Press: Västerås, Sweden, 2006; ISBN 91-85485-23-3. [Google Scholar]
  47. Dodig-Crnkovic, G.; Müller, V. A Dialogue Concerning Two World Systems: Info-Computational vs. Mechanistic. In Information and Computation; Dodig Crnkovic, G., Burgin, M., Eds.; World Scientific Pub Co Inc: Singapore, 2009; pp. 149–184. ISBN 978-981-4295-47-5. [Google Scholar]
  48. Dodig-Crnkovic, G. The info-computational nature of morphological computing. In Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics; Müller, V., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 5, ISBN 00935301. [Google Scholar]
  49. Pfeifer, R.; Bongard, J. How the Body Shapes the Way We Think–A New View of Intelligence; MIT Press: Cambridge, MA, USA, 2006; ISBN 9780262162395. [Google Scholar]
  50. Witzany, G. Memory and Learning as Key Competences of Living Organisms. In Memory and Learning in Plants. Signaling and Communication in Plants; Baluska, F., Gagliano, M., Witzany, G., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2018; pp. 1–16. [Google Scholar]
  51. Witzany, G. Introduction: Key Levels of Biocommunication of Bacteria. In Biocommunication in Soil Microorganisms; Witzany, G., Ed.; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  52. Witzany, G. Viruses: Essential Agents of Life; Springer Netherlands: Dordrecht, The Netherlands, 2012; ISBN 9789400748996. [Google Scholar]
  53. Villarreal, L.P.; Witzany, G. Viruses are essential agents within the roots and stem of the tree of life. J. Theor. Biol. 2010. [Google Scholar] [CrossRef] [Green Version]
  54. Leyton, M. Shape as Memory Storage. In Ambient Intelligence for Scientific Discovery. Lecture Notes in Computer Science; Yang, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3345. [Google Scholar]
  55. Kandel, E.R.; Dudai, Y.; Mayford, M.R. The molecular and systems biology of memory. Cell 2014. [Google Scholar] [CrossRef] [Green Version]
  56. Fields, C.; Levin, M. Multiscale memory and bioelectric error correction in the cytoplasm–cytoskeleton-membrane system. Wiley Interdiscip. Rev. Syst. Biol. Med. 2018. [Google Scholar] [CrossRef] [PubMed]
  57. Kukushkin, N.V.; Carew, T.J. Memory Takes Time. Neuron 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Witzany, G. Biocommunication of Archaea; Springer International Publishing: Cham, Switzerland, 2017; ISBN 9783319655369. [Google Scholar]
  59. Witzany, G. Bio-communication of Plants. Nat. Preced. 2007. [Google Scholar] [CrossRef]
  60. Pombo, O.; Torres, J.M.; Rahman, S. Special Sciences and the Unity of Science; Logic, Epistemology, and the Unity of Science; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 978-94-007-9213-5. [Google Scholar]
  61. Rosen, R. Anticipatory Systems; Springer: New York, NY, USA, 1985; ISBN 978-1-4614-1268-7. [Google Scholar]
  62. Popper, K. Objective Knowledge: An Evolutionary Approach; Oxford University Press: Oxford, UK, 1972. [Google Scholar]
  63. Campbell, D.T. Evolutionary epistemology. In The Philosophy of Karl Popper; Schilpp, P.A., Ed.; Open Court Publ.: La Salle, IL, USA, 1974; Volume 1, pp. 413–463. [Google Scholar]
  64. Vanberg, V. Cultural Evolution, Collective Learning, and Constitutional Design. In Economic Thought and Political Theory; Reisman, D., Ed.; Springer: Dordrecht, The Netherlands, 1994. [Google Scholar] [CrossRef]
  65. Thagard, P. Against Evolutionary Epistemology. PSA Proc. Bienn. Meet. Philos. Sci. Assoc. 1980. [Google Scholar] [CrossRef]
  66. Kronfeldner, M.E. Darwinian “blind” hypothesis formation revisited. Synthese 2010. [Google Scholar] [CrossRef] [Green Version]
  67. Jablonka, E.; Lamb, M.J.; Anna, Z. Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life; MIT Press: Cambridge, MA, USA, 2014; ISBN 9780262322676. [Google Scholar]
  68. Laland, K.N.; Uller, T.; Feldman, M.W.; Sterelny, K.; Müller, G.B.; Moczek, A.; Jablonka, E.; Odling-Smee, J. The extended evolutionary synthesis: Its structure, assumptions and predictions. Proc. R. Soc. B Biol. Sci. 2015. [Google Scholar] [CrossRef] [PubMed]
  69. Noble, D. The Music of Life: Biology Beyond the Genome. Lavoisierfr 2006. [Google Scholar] [CrossRef]
  70. Watson, R.A.; Szathmáry, E. How Can Evolution Learn? Trends Ecol Evol. 2016, 31, 147–157. [Google Scholar] [CrossRef] [Green Version]
  71. Turing, A.M. Computing machinery and intelligence. In Machine Intelligence: Perspectives on the Computational Model; Routledge: New York, NY, USA, 2012; ISBN 0815327684. [Google Scholar]
  72. Chaitin, G. Epistemology as Information Theory: From Leibniz to Ω. In Computation, Information, Cognition–The Nexus and The Liminal; Dodig Crnkovic, G., Ed.; Cambridge Scholars Pub.: Newcastle, UK, 2007; pp. 2–17. ISBN 978-1-4438-0040-2. [Google Scholar]
  73. Gershenfeld, N. Morphogenesis for the Design of Design: A Talk by Neil Gershenfeld. Available online: https://www.edge.org/conversation/neil_gershenfeld-morphogenesis-for-the-design-of-design (accessed on 28 June 2020).
  74. Sloman, A. Meta-Morphogenesis: Evolution and Development of Information-Processing Machinery. In Alan Turing: His Work and Impact; Cooper, S.B., van Leeuwen, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2013; p. 849. ISBN 978-0-12-386980-7. [Google Scholar]
  75. Nature Gene Regulation. Available online: https://www.nature.com/subjects/gene-regulation (accessed on 28 June 2020).
  76. Hewitt, C. What is computation? Actor Model versus Turing’s Model. In A Computable Universe, Understanding Computation & Exploring Nature As Computation; Zenil, H., Ed.; World Scientific Publishing Company: Singapore, 2012. [Google Scholar]
  77. Rozenberg, G.; Bäck, T.; Kok, J.N. (Eds.) Handbook of Natural Computing; Springer: Berlin, Germany, 2012; ISBN 978-3-540-92911-6. [Google Scholar]
  78. Denning, P. Computing is a natural science. Commun. ACM 2007, 50, 13–18. [Google Scholar] [CrossRef] [Green Version]
  79. Denning, P.; Rosenbloom, P. The fourth great domain of science. ACM Commun. 2009, 52, 27–29. [Google Scholar] [CrossRef] [Green Version]
  80. Wang, Y. On Abstract Intelligence: Toward a Unifying Theory of Natural, Artificial, Machinable, and Computational Intelligence. Int. J. Softw. Sci. Comput. Intell. 2009, 1, 1–17. [Google Scholar] [CrossRef] [Green Version]
  81. Crutchfield, J.P.; Ditto, W.L.; Sinha, S. Introduction to Focus Issue: Intrinsic and Designed Computation: Information Processing in Dynamical Systems-Beyond the Digital Hegemony. Chaos 2010, 20, 037101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  82. Crutchfield, J.P.; Wiesner, K. Intrinsic Quantum Computation. Phys. Lett. A 2008, 374, 375–380. [Google Scholar] [CrossRef] [Green Version]
  83. Collier, J. Information, Causation and Computation. In Information and Computation; Dodig-Crnkovic, G., Burgin, M., Eds.; World Scientific: Singapore, 2011; pp. 89–105. [Google Scholar]
  84. Zenil, H. A Computable Universe. Understanding Computation & Exploring Nature as Computation; World Scientific: Singapore, 2012; ISBN 978-9814374293. [Google Scholar]
  85. Piccinini, G. Computation in Physical Systems. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; 2017; Available online: https://plato.stanford.edu/archives/sum2017/entries/computation-physicalsystems/ (accessed on 28 June 2020).
  86. Horsman, C.; Stepney, S.; Wagner, R.C.; Kendon, V. When does a physical system compute? Proc. R. Soc. A Math. Phys. Eng. Sci. 2014, 470, 20140182. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Horsman, D.; Kendon, V.; Stepney, S. The natural science of computing. Commun. ACM 2017, 60. [Google Scholar] [CrossRef]
  88. Horsman, D.; Kendon, V.; Stepney, S.; Young, J.P.W. Abstraction and representation in living organisms: When does a biological system compute? In Representation and Reality in Humans, Animals, and Machines. Studies in Applied Philosophy, Epistemology and Rational Ethics; Springer: Cham, Switzerland, 2017; Volume 28, pp. 91–116. [Google Scholar]
  89. Brooks, R.A. Intelligence without representation. Artif. Intell. 1991. [Google Scholar] [CrossRef]
  90. Hauser, H.; Füchslin, R.M.; Pfeifer, R. Opinions and Outlooks on Morphological Computation; e-book; 2014; ISBN 978-3-033-04515-6. Available online: https://www.morphologicalcomputation.org/e-book (accessed on 28 June 2020).
  91. Copeland, J.; Dresner, E.; Proudfoot, D.; Shagrir, O. Time to reinspect the foundations? Commun. ACM 2016, 59, 34–36. [Google Scholar] [CrossRef]
  92. Turing, A.M. The Chemical Basis of Morphogenesis. Philos. Trans. R. Soc. London 1952, 237, 37–72. [Google Scholar]
  93. Kampis, G. Self-Modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information, and Complexity; Pergamon Press: Amsterdam, The Netherlands, 1991; ISBN 9780080369792. [Google Scholar]
  94. Valiant, L. Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World; Basic Books: New York, NY, USA, 2013; ISBN 978-0465032716. [Google Scholar]
  95. Simon, H.A. Rational choice and the structure of the environment. Psychol. Rev. 1956, 63, 129–138. [Google Scholar] [CrossRef] [Green Version]
  96. Sloman, A.; Chrisley, R. Virtual machines and consciousness. J. Conscious. Stud. 2003, 10, 113–172. [Google Scholar]
  97. Carpenter, G.A.; Grossberg, S. ART 2: Self-organization of stable category recognition codes for analog input patterns. Appl. Opt. 1987, 26, 4919–4930. [Google Scholar]
  98. Mermin, N.D. Making better sense of quantum mechanics. Reports Prog. Phys. 2019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Müller, M.P. Law without law: From observer states to physics via algorithmic information theory. Quantum 2020. [Google Scholar] [CrossRef]
  100. Vedral, V. Information and physics. Information 2012, 3, 219–223. [Google Scholar] [CrossRef] [Green Version]
  101. Goyal, P. Information physics-towards a new conception of physical reality. Information 2012, 3, 567–594. [Google Scholar] [CrossRef]
  102. Dodig-Crnkovic, G. Information and energy/matter. Information 2012, 4, 751. [Google Scholar] [CrossRef]
  103. Fields, C. If physics is an information science, what is an observer? Information 2012, 3, 92–123. [Google Scholar] [CrossRef] [Green Version]
  104. Wheeler, J.A. Information, physics, quantum: The search for links. In Complexity, Entropy, and the Physics of Information; Zurek, W., Ed.; Addison-Wesley: Redwood City, CA, USA, 1990. [Google Scholar]
  105. Weizsäcker, C.F. The Unity of Nature. In Physical Sciences and History of Physics; Boston Studies in the Philosophy of Science; Cohen, R.S., Wartofsky, M.W., Eds.; Springer: Dordrecht, The Netherlands, 1984; Volume 82. [Google Scholar]
  106. Ehresmann, A.C. MENS, an Info-Computational Model for (Neuro-)cognitive Systems Capable of Creativity. Entropy 2012, 14, 1703–1716. [Google Scholar] [CrossRef]
  107. Ghosh, S.; Aswani, K.; Singh, S.; Sahu, S.; Fujita, D.; Bandyopadhyay, A. Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System. Information 2014, 5, 28–100. [Google Scholar] [CrossRef] [Green Version]
  108. Zenil, H.; Kiani, N.A.; Zea, A.A.; Tegnér, J. Causal deconvolution by algorithmic generative models. Nat. Mach. Intell. 2019. [Google Scholar] [CrossRef]
  109. Cardelli, L. Artificial Biochemistry. In Algorithmic Bioprocesses; Condon, A., Harel, D., Kok, J.N., Salomaa, A., Winfree, E., Eds.; Springer: Heidelberg, Germany, 2009; pp. 429–462. [Google Scholar]
  110. Cardelli, L.; Zavattaro, G. On the computational power of biochemistry. In Algebraic Biology; Horimoto, K., Regensburger, G., Rosenkranz, M., Yoshida, H., Eds.; LNCS; Springer: Heidelberg, Germany, 2008; Volume 5147, pp. 65–80. [Google Scholar]
  111. Cardelli, L. Morphisms of reaction networks that couple structure to function. BMC Syst. Biol. 2014. [Google Scholar] [CrossRef] [PubMed]
  112. Cardelli, L.; Tribastone, M.; Tschaikowski, M. From electric circuits to chemical networks. Nat. Comput. 2020. [Google Scholar] [CrossRef] [Green Version]
  113. Fresco, N. Physical Computation and Cognitive Science; Springer: Berlin, Germany, 2014; ISBN 978-3-642-41374-2. [Google Scholar]
  114. Devon Hjelm, R.; Grewal, K.; Bachman, P.; Fedorov, A.; Trischler, A.; Lavoie-Marchildon, S.; Bengio, Y. Learning deep representations by mutual information estimation and maximization. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
