Article

The Architecture of Mind as a Network of Networks of Natural Computational Processes

by
Gordana Dodig-Crnkovic
Chalmers University of Technology and University of Gothenburg, Gothenburg 41296, Sweden
Philosophies 2016, 1(1), 111-125; https://doi.org/10.3390/philosophies1010111
Submission received: 28 September 2015 / Revised: 4 December 2015 / Accepted: 4 December 2015 / Published: 14 December 2015
(This article belongs to the Special Issue Computing and Philosophy: Papers from IACAP 2014)

Abstract
In discussions regarding models of cognition, the very mention of “computationalism” often incites reactions against the insufficiency of the Turing machine model, its abstractness, determinism, the lack of naturalist foundations, triviality and the absence of clarity. None of those objections, however, concerns models based on natural computation or computing nature, where the model of computation is broader than symbol manipulation or conventional models of computation. Computing nature consists of physical structures that form a layered computational architecture, with computation processes ranging from quantum to chemical, biological/cognitive and social-level computation. It is argued that, on the lower levels of information processing in the brain, finite automata or Turing machines may still be adequate models, while, on the higher levels of whole-brain information processing, natural computing models are necessary. A layered computational architecture of the mind based on the intrinsic computing of physical systems avoids objections against early versions of computationalism in the form of abstract symbol manipulation.

1. Critique of Classical Computationalism and a New Understanding of Computation

Historically, computationalism has been accused of many sins [1,2,3]. In what follows I would like to answer the following three concerns about computationalism put forward by Mark Sprevak:
(R1)
A lack of clarity: “Ultimately, the foundations of our sciences should be clear”. Computationalism is suspected of lacking clarity.
(R2)
Triviality: “[O]ur conventional understanding of the notion of computational implementation is threatened by trivial arguments”. Computationalism is accused of triviality.
(R3)
A lack of naturalistic foundations: “The ultimate aim of cognitive science is to offer, not just any explanation of mental phenomena, but a naturalistic explanation of the mind”. Computationalism is questioned for being formal and unnatural [2] (p. 108).
Sprevak concludes that meeting all three above expectations of computational implementation is difficult. As an illustration of the problems with computationalist approaches to mind, he presents David Chalmers’ computational formalism of combinatorial state automata and concludes that “Chalmers’ account provides the best attempt to do so (i.e., to meet the above criticism against computationalism), but even his proposal falls short”. In order to be fully appreciated, Chalmers’ account, I will argue, should be seen from the perspective of intrinsic, natural computation rather than that of a conventional, designed computer. Chalmers contends:
“Computational descriptions of physical systems need not be vacuous. We have seen that there is a well-motivated formalism, that of combinatorial state automata, and an associated account of implementation, such that the automata in question are implemented approximately when we would expect them to be: when the causal organization of a physical system mirrors the formal organization of an automaton. In this way, we establish a bridge between the formal automata of computation theory and the physical systems of everyday life. We also open the way to a computational foundation for the theory of mind”.
[4]
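Chalmers’ implementation criterion can be made concrete with a small sketch. The following toy check is my own illustration, using a plain finite-state automaton rather than Chalmers’ full combinatorial state automata; all state labels and the candidate mapping are hypothetical. It asks whether the causal organization of a recorded physical state trace mirrors the formal organization of an automaton:
```python
# Minimal sketch of the implementation criterion: a physical system implements
# an automaton (here a plain FSA, a simplification of Chalmers' combinatorial
# state automata) if a grouping of its physical states mirrors the automaton's
# formal transitions. All labels below are hypothetical illustrations.

# Formal automaton: states and a deterministic transition function.
formal_transitions = {"A": "B", "B": "C", "C": "A"}

# Hypothetical record of successive physical (micro)states of some system.
physical_trace = ["p1", "p4", "p2", "p5", "p3", "p6", "p1"]

# Candidate mapping from physical states to formal states.
mapping = {"p1": "A", "p4": "B", "p2": "C", "p5": "A", "p3": "B", "p6": "C"}


def mirrors(trace, mapping, transitions):
    """Check that every observed physical transition, seen through the
    mapping, agrees with the automaton's formal transition function."""
    for current, nxt in zip(trace, trace[1:]):
        if transitions[mapping[current]] != mapping[nxt]:
            return False
    return True


print(mirrors(physical_trace, mapping, formal_transitions))  # True for this toy trace
```
The point of the sketch is only that implementation is a non-trivial relation: most arbitrary groupings of physical states will fail this check.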
In the above, it is important to notice the distinction between intrinsic/natural computation, which describes spontaneous physical processes at different levels of organization, and designed/conventional computation, implemented in our technological devices, which uses this intrinsic computation as its basis [5]. The designed computation in conventional computational machinery does not appear spontaneously in nature; it is made possible by specially designed architecture and a constant supply of energy. Intrinsic computation, on the other hand, emerges inherently at different levels of organization in nature, from quantum to molecular/chemical computation, biological computation, information processing in neural networks, social computing, etc. [6].
Already in 2002, Matthias Scheutz [1], answering the critique of classical computationalism based on the Turing model of computation, proposed that a new computationalism is capable of accounting for the embodiment and embeddedness of mind. In this article, we will present recent developments and show what this new computationalism currently looks like and in what directions it is developing.
In the Epilogue, Scheutz makes the following apt diagnosis:
“Today it seems clear, for example, that classical notions of computation alone cannot serve as foundations for a viable theory of the mind, especially in light of the real-world, real-time, embedded, embodied, situated, and interactive nature of minds, although they may well be adequate for a limited subset of mental processes (e.g., processes that participate in solving mathematical problems). Reservations about the classical conception of computation, however, do not automatically transfer and apply to real-world computing systems. This fact is often ignored by opponents of computationalism, who construe the underlying notion of computation as that of Turing-machine computation”.
[1] (p. 176) (emphasis added)
Thus, according to Scheutz, the way to avoid the criticisms of computational models of mind is via computation performed by real-world computing systems, with “real-time; embodied; real-world constraints with which cognitive systems intrinsically cope”. The necessity of resource-awareness is therefore central, as the Turing machine model of computation is not constructed with time, space, energy and other physical resources in mind.

2. Natural/Intrinsic Computation as Information Processing in Nature: Why Natural Computationalism is Not Trivial

The idea of computing nature [7,8] builds on the notion that the universe as a whole can be seen as a computational system that intrinsically computes its own next state. This approach is called pancomputationalism or natural computationalism and dates back to Konrad Zuse with his Calculating Space—Rechnender Raum [9]. Some prominent representatives of natural computationalism are Edward Fredkin [10], Stephen Wolfram [11] and Greg Chaitin [12]. For more details, see [13].
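As a toy illustration of a system that “intrinsically computes its own next state”, in the cellular-automaton spirit of Zuse’s Rechnender Raum, consider the following sketch; the choice of a one-dimensional lattice and of Wolfram’s Rule 110 is mine, purely for illustration, and is not a claim from the article:
```python
# A one-dimensional cellular automaton updating its own next state locally,
# a toy picture of Zuse's "calculating space". Rule 110 is an illustrative
# choice; any local update rule makes the same point.

RULE = 110  # Wolfram rule number (illustrative choice)


def step(cells):
    """Compute the next global state from purely local neighbourhoods."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right
        nxt.append((RULE >> index) & 1)
    return nxt


state = [0] * 20 + [1] + [0] * 20   # a single "seed" cell
for _ in range(10):                  # each step: the lattice computes its own next state
    print("".join("#" if c else "." for c in state))
    state = step(state)
```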
However, a clear distinction should be made between pancomputationalism of this type and unlimited pancomputationalism, described by Gualtiero Piccinini in [14] as the strongest version of pancomputationalism, with the claim “that every physical system performs every computation—or, at least, every sufficiently complex system implements a large number of non-equivalent computations” [15,16]. The claim of natural computationalism in general is that every physical system performs some computation, namely one that is equivalent to the system’s dynamics at different levels of organization.
Computation as found in nature is physical computation, described in [14] and by Nir Fresco in [17]; it is called “computation in materio” by Susan Stepney [18,19] and “natural computation” in [20].
It should be noted that varieties of natural computationalism/pancomputationalism differ among themselves: Some of them insist on the discreteness of computation and on the idea that, at the deepest levels of description, nature can be seen as discrete. Others find the origin of the continuous/discrete distinction in the human cognitive apparatus, which relies on both continuous and discrete information processing (computation), asserting thus that both discrete and continuous models are necessary [21]. For example, Seth Lloyd argues that the dual wave/particle nature of quantum mechanical objects implies the necessity of both kinds of models [22]. Regardless of the positions on the discreteness/continuity of computation, all pancomputationalists share the basic idea of computing nature performing intrinsic computation.
If the universe computes at different levels of organization, what is the most general characterization of the substrate that executes this computation? In his Open Problems piece in the Philosophy of Information, Luciano Floridi lists eighteen fundamental questions for the nascent field of the philosophy of information (and computation), among them the following hypothesis, formulated as a question regarding “It from bit”, an idea that can be traced back to John Archibald Wheeler [23]:
“17. The “It from Bit” hypothesis: Is the universe essentially made of informational stuff, with natural processes, including causation, as special cases of information dynamics?”
[24]
Floridi and Kenneth Sayre answer the above question in the affirmative and argue for informational structural realism [25,26]; they suggest that the fabric of the universe is information and that natural processes are information dynamics. In Floridi’s approach, the concept of information includes both discrete and continuous aspects [27]. Floridi emphasizes that his model of the informational universe is more general than digital ontology, as noumenal reality (Kant’s reality in itself) is always dependent on the level of abstraction that the epistemic agent takes. Nir Fresco and Phillip Staines [28] question the validity of Floridi’s argument, focusing on a more limited assertion based on classical physics and showing that “a deterministic computational view of the universe faces problems (e.g., a reversible computational universe cannot be strictly deterministic)”. However, their analysis of Floridi’s argument overlooks the fact that Floridi talks about the interaction of an epistemic agent with reality, which typically happens on one level of abstraction at a time. For an epistemic agent, reality is an informational structure, and its dynamics can be digital or analog, discrete or continuous, and, on the quantum mechanical level, it could be both, as Seth Lloyd argues in [22].
The informational fabric of the universe is always relative to an agent, as information is relational. This implies that “the universe” for a bacterium is vastly different from “the universe” for a human or for some artifactual cognitive system. This is a constructivist view of knowledge, in which the process of knowledge generation in a cognizing agent starts with interactions with its environment, which, for the agent, presents potential information [29]. The potential- or proto-information of an agent’s Umwelt [30], that is, its accessible universe, actualizes as embodied and embedded information via interactions with the agent’s cognitive/computational architecture.
Ideas about informational reality come from several sources aside from philosophers of information. Currently, physicists are contributing to the understanding of the physical foundations of nature in terms of information, among them Seth Lloyd, Vlatko Vedral, Giulio Chiribella, and Philip Goyal [22,31,32,33].
The synthesis of the frameworks of natural computationalism and informational structural realism results in the framework of info-computationalism [21,34], a variety of natural computationalism developed by the author, in which information constitutes the fabric of the universe (for an agent), while the dynamics of information can be understood as computation. Physical nature thus spontaneously performs different kinds of computation, that is, information processing at different levels of organization [35]. Again, each system performs some definite (natural) computation, and not any possible computation.
If every process in nature is some kind of natural/intrinsic computing, then even complex information processing in living beings is computation, including self-reflective processes such as self-reproduction, self-healing and self-modification [36]. This means that computation is capable of acting on the very system that performs it. The Turing machine model does not accommodate the possibility that, during the execution of an algorithm, the control mechanism might change as well. Interestingly, in the Turing machine model, the control mechanism (the operating system, which is what makes the machine execute) controls the execution of the algorithm, in spite of the understanding that the Turing machine is equivalent to the algorithm itself. This separation of the algorithm (the program) from the operating system helps to avoid self-reference. The problem of self-reference (which is absent from and prohibited in designed computing) is solved in natural computing. George Kampis describes the problem in the following way:
“If now somebody writes a tricky language that goes beyond the capabilities of LISP and changes its own interpreter as well, and then perhaps it changes the operating system, and so on, finally we find ourselves at the level of the processor chip of the computer that carries out the machine code instructions. Now, unlike the earlier levels, this level belongs to a piece of physical hardware, where things will be done the way the machine was once built, and this can no more be a matter of negotiations. Ultimately, this is what serves as that Archimedean starting point (similar to the initial translation that opens up self-reference) that defines a constant framework for the programs. The importance of this is that we understand: self-modification, and self-reference, are not really just issues of programming (that is, of using the right software), but of designing a whole machine in some sense. Therefore, the impossibility of achieving complete self-modification depends, ultimately, on the separability of machine from program (and the way around): the separability of software from hardware.”
[37] (p. 95) (emphasis added)
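Kampis’ point can be illustrated with a toy construction of my own (not taken from [36,37]): an interpreter whose rule table can be rewritten by the very instructions it is executing, so that “program” and “machine” are no longer cleanly separable at that level, while the underlying processor of course remains fixed.
```python
# Toy self-modifying interpreter: some instructions update data, others
# rewrite the interpreter's own rule table while it is running.
# This is an illustrative construction, not a model from the cited works.

def make_interpreter():
    rules = {}

    def inc(state, arg):           # ordinary instruction: change the data
        state["x"] += arg

    def redefine_inc(state, arg):  # self-modifying instruction: change the rules
        rules["INC"] = lambda st, a: st.update(x=st["x"] + 2 * a)  # INC now doubles

    rules["INC"] = inc
    rules["REDEFINE_INC"] = redefine_inc

    def run(program):
        state = {"x": 0}
        for op, arg in program:
            rules[op](state, arg)   # dispatch through the *current* rule table
        return state["x"]

    return run


run = make_interpreter()
# INC behaves differently before and after the program rewrites its own rule.
print(run([("INC", 1), ("REDEFINE_INC", 0), ("INC", 1)]))  # 1 + 2 = 3
```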
This differs from the ideas of Marcin Miłkowski, who suggests that “the physical implementation of a computational system—and its interaction with the environment—lies outside the scope of computational explanation”. From this I infer that the model of computation Miłkowski assumes is top-down, Turing-machine-based designed computation. Even though he rightly argues that neural networks as well as dynamical systems (that is, mathematical models in which a fixed rule, typically differential equations, describes the time dependence of a point in a geometric space) can generally be understood as computational, Miłkowski does not think of intrinsic computation as grounded in physical processes driven by causal mechanisms, which is a characteristic of computing nature. He writes:
“For a pancomputationalist, this means that there must be a distinction between lower-level, or basic, computations and the higher level ones. Should pancomputationalism be unable to mark this distinction, it will be explanatorily vacuous”.
[38]
The above problem of the grounding of the concept of computation finds its natural solution in physical computation. If we continue and ask where physical processes and structures come from, we could equally well demand that physicists explain where matter/energy and space/time come from.
Consequently, we can answer the question of why natural computationalism is not trivial/vacuous in spite of the underlying assumption that the whole of the universe is computational. Intrinsic computation found in nature forms structures on different levels that parallel physical structures from subatomic quanta to macroscopic organizations. The hierarchy of levels of computation is grounded in physical/intrinsic/natural computation. This construction is not vacuous for the same reason that physics is not vacuous, even though physics claims that the entire physical universe consists of matter-energy and builds on the same elementary building blocks, the elementary particles. The principle of universal validity of physical laws does not make them vacuous. Thinking of computation as the implementation of physical laws on the fundamental level clearly indicates that computation can be seen as the basis of all dynamics in nature. This answers the triviality objections of [3] as well as [2]. Namely, if we use Carl Hewitt’s actor model of computation [39], which mimics physical interaction as an exchange of messages (force carriers), we can better understand the nature of distributed concurrent computation occurring in physical matter as natural computation. Later on, we will return to Hewitt’s actor model.
Further evidence against the purported triviality of computationalism is provided by Nir Fresco [17], who examines arguments put forward by John Searle and Hilary Putnam. Searle and Putnam criticize the computational view of cognition based on a narrow notion of computation, which leads to the conclusion that every physical object implements any program:
“For any sufficiently complex physical object O (i.e., an object with a sufficiently large number of distinguishable parts) and for any arbitrary program P, there exists an isomorphic mapping M from some subset S of the physical states of O to the formal structure of P”.
[40] (p. 27)
Similarly, Putnam argues that “every ordinary open system simultaneously realizes every abstract inputless FSA” [15] (p. 121).
Apart from Fresco, the Searle and Putnam triviality accusations against computationalism have been met by [1,41,42], among others. These articles provide further ideas on how computing can be construed in a more general way so as to avoid those objections. The solution is provided by the notion of a hierarchically organized architecture of computation grounded in the intrinsic computation of nature, in which living organisms, among other systems, reach a form of symbol manipulation as a special level of computational dynamics. If computation is conceived as intrinsic/natural computation, then every physical system spontaneously computes only its own next state. To make it compute anything else, we have to construct a system around it that can use the intrinsic computation as part of a universal machine.
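The contrast between what a physical system intrinsically computes and what we make it compute can be sketched as follows. In this toy example (entirely my own illustrative choice, not a model from the cited literature), the intrinsic dynamics is relaxation in a double-well potential, which only ever produces the system’s own next state; a designed encode/decode layer around it turns that relaxation into a computation of the sign of an input.
```python
# Intrinsic computation: a toy physical system relaxing in a double-well
# potential V(x) = (x^2 - 1)^2 only ever "computes" its own next state.
# Designed computation: an encode/decode wrapper around that dynamics makes
# it answer a question we care about (here: the sign of the input).
# The whole construction is an illustrative sketch, not a model from the text.

def intrinsic_step(x, dt=0.01):
    """One step of the system's own dynamics: gradient descent on V."""
    dV = 4 * x * (x * x - 1)      # dV/dx
    return x - dt * dV


def designed_sign(value):
    """Harness the intrinsic dynamics: encode the input, let the system relax,
    decode which well it settled into."""
    x = 0.1 if value >= 0 else -0.1       # encode the input as an initial condition
    for _ in range(5000):                 # the system just follows its own dynamics
        x = intrinsic_step(x)
    return +1 if x > 0 else -1            # decode: which attractor (well) was reached


print(designed_sign(3.7), designed_sign(-0.2))   # 1 -1
```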
The field of intrinsic natural computation is presented in depth in the Handbook of Natural Computing [20]. Intrinsic computation at the fundamental level of natural processes is analyzed in [18]. For a specific study of intrinsic computation performed by neurons in the brain, see [5,43]. A good systematic review of biological (hyper)computation is given in [44]. When it comes to human-level epistemological computation, the info-computational framework for naturalizing epistemology can be found in [42].

3. Levels of Organization, Dynamics, Causal Relations and Deacon’s Framework

It has been argued by the author [6] that natural computation in biological systems with processes of self-organization and autopoiesis is best described by agent-based models, such as Hewitt’s actor model of computation [39]. A characteristic of living organisms is that they are structured in layers, from molecular networks to cells, tissues, organs, organisms and ecologies. Bio-chemical processes in living organisms function in parallel, but the existing models of parallel computation (including Boolean networks, Petri nets, Interacting state machines, Process calculi, etc.) need adjustments in order to be able to adequately model biological systems [45].
Generation of levels of organization in living organisms can be understood using the framework proposed by Terrence Deacon [46] for information processing in living systems. He distinguishes between the following three levels of natural information (for an agent), where each subsequent level subsumes the prior level:
  • Syntactic information: Shannon theory; describes data/patterns/signals as used in data communication;
  • Semantic information: Shannon + Boltzmann theories; describes intentionality, aboutness, reference, representation, used to define the relation to object or referent;
  • Pragmatic information (behavior): Shannon + Boltzmann + Darwin theories; describes function, interpretation, used to define pragmatics of agency.
Deacon’s three levels of information organization can be seen as parallel to his three hierarchically organized formative mechanisms/processes, to the three levels of emergent dynamics, and even to three of the Aristotelian causes (see Table 1). However, it is important to keep in mind that these relationships are merely suggestive parallels between levels and not any sort of direct mappings.
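To make the first (syntactic) level concrete: the relevant quantity there is Shannon’s statistical measure of information, computed from symbol frequencies alone, with no reference yet to meaning or use. The following minimal sketch uses the standard textbook formula and an arbitrary example string; it is not specific to Deacon’s framework.
```python
# Syntactic (Shannon) level: information as statistics of a signal, with no
# reference yet to meaning or use. Standard entropy formula, shown only to
# make the first level of the hierarchy concrete; the example strings are arbitrary.

from collections import Counter
from math import log2


def shannon_entropy(signal):
    """H = -sum p_i * log2(p_i) over symbol frequencies in the signal."""
    counts = Counter(signal)
    total = len(signal)
    return -sum((c / total) * log2(c / total) for c in counts.values())


print(round(shannon_entropy("ABABABAB"), 3))   # 1.0 bit per symbol
print(round(shannon_entropy("AAAAAAAB"), 3))   # lower: the signal is more predictable
```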
Deacon explains the origin of dynamical levels in the relationships between material entities/structures and their corresponding processes. The hierarchy of levels of emergent dynamics constitutes the dynamical depth of a system, described as follows:
“A system with greater dynamical depth than another consists of a greater number of such nested dynamical levels. Thus, a mechanical or linear thermodynamic system has less dynamical depth than an inorganic self-organized system, which has less dynamical depth than a living system. Including an assessment of dynamical depth can provide a more precise and systematic account of the fundamental difference between inorganic systems (low dynamical depth) and living systems (high dynamical depth), irrespective of the number of their parts and the causal relations between them”.
[47] (p. 404)
Dynamics generate intrinsic constraints and transitions from homeodynamics, to morphodynamics and to teleodynamics. Each transition has increasing autonomy from extrinsically imposed constraints. “Since constraints are a prerequisite for producing physical work, the increasing autonomy of constraint generation with dynamical depth also corresponds to an increasing diversity of the capacity to do work. Thus the flexibility with which a dynamical system can interact with its environment also increases with dynamical depth” [47] (p. 418) (emphasis added).
Deacon and Koutroufinis furthermore propose that, starting with patterns of organization of homeodynamics (h), morphodynamics (m), and teleodynamics (t), higher-order dynamical depth is formed with teleodynamic unit systems (e.g., organisms) that are themselves recursively involved in homeodynamic (ht), morphodynamic (mt), or teleodynamic (tt) patterns of interaction. (The concept of dynamical depth can be connected to Friston’s hierarchical dynamic models (HDMs) of the brain, where hierarchical dynamical mechanisms are used to explain how brain neural networks could be configured to find sensory causes from given sensory inputs [48]. Starting with the idea “that the brain may use empirical Bayes for inference about its sensory input”, Friston generalizes to hierarchical dynamical systems.)

4. Hewitt’s Model of Computation of Actors/Agents

Hewitt’s model of computation is based on Actors as the universal primitives of concurrent distributed computation [6]. An Actor can, among other things, make local decisions, create more Actors and send messages in response to a message that it receives. Hewitt writes:
“In the Actor Model [49,50], computation is conceived as distributed in space, where computational devices communicate asynchronously and the entire computation is not in any well-defined state. (An Actor can have information about other Actors that it has received in a message about what it was like when the message was sent.) Turing’s Model is a special case of the Actor Model”.
[39]
The above “computational devices” are conceived as computational agents—informational structures capable of acting on their own behalf. Depending on how communication between agents is defined, the computation can be discrete or continuous.
For Hewitt, Actors become Agents only when they are able to process expressions for commitments such as contracts, announcements, beliefs, goals, intentions, plans, policies, procedures, requests and queries. In other words, Hewitt’s Agents are human-like or, if we broadly interpret the above capacities, cognitive Actors.
However, we can take all of Hewitt’s Actors to be agents (in the sense of agent-based models) with different levels of competence, as we are interested in a common framework encompassing all physical, chemical, biological, social as well as artifactual agents.
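A minimal sketch of the Actor idea as used here, with asyncio queues standing in for asynchronous mailboxes (an implementation choice of mine, not Hewitt’s formalism): each actor holds local state, reacts to messages, can send messages without waiting, and can create new actors.
```python
# Toy actor sketch: each actor has a mailbox, makes local decisions,
# can send messages to other actors and can spawn new actors.
# asyncio queues stand in for asynchronous message passing; this is an
# illustrative toy, not Hewitt's Actor model in full.

import asyncio


class Actor:
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()

    def send(self, message):
        # Asynchronous send: the sender does not wait for the receiver.
        self.mailbox.put_nowait(message)

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg == "stop":
                break
            if msg == "spawn":
                # Local decision: create a new actor and communicate with it.
                child = Actor(self.name + ".child")
                child_task = asyncio.create_task(child.run())
                child.send("hello from " + self.name)
                child.send("stop")
                await child_task
            else:
                print(self.name, "received:", msg)


async def main():
    a, b = Actor("a"), Actor("b")
    a.send("ping")
    b.send("spawn")    # b will create and message a child actor
    a.send("stop")
    b.send("stop")
    await asyncio.gather(a.run(), b.run())   # a and b run concurrently


asyncio.run(main())
```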
The advantage of Hewitt’s model is that, unlike other models of computation based on mathematical logic, set theory, algebra, etc., the Actor model is based on physics, especially quantum physics and relativistic physics [39]. Its relational nature makes it especially suitable for modeling informational structures and their dynamics.
The applicability of Hewitt’s model at different levels of organization of matter/energy is clear, as at every level we have interactions between informational structures. On the fundamental physical level of elementary particles, interactions are exchange forces. Force carriers (messenger particles) interact with matter particles. For example, the strong force between hadrons is mediated by an exchange of pions, while the quarks that make up the hadrons exchange gluons. In that way we see the parallel between the actor model of natural computation and physical processes. At every higher level, the “actors” as well as the messages they exchange are more complex structures. On the higher levels, we find networks of molecules exchanging particles or molecules, and successively higher-order structures such as cells exchanging molecules, or organisms exchanging different sorts of complex messages and objects.
Neurons in the brain are a specialized type of cell that can also be seen as agents performing distributed concurrent computation. Daniel Dennett comments on the proposal of [51], which unifies cognitive neuroscience with comparative cognition and ascribes self-modifying agency to neurons. The Hewitt model of computation is suitable in this case as well. Dennett writes:
“I have already endorsed the importance of recognizing neurons as ‘complex self-modifying’ agents, but the (ultra-)plasticity of such units can and should be seen as the human brain’s way of having something like the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles, ‘implementing’ the long-division virtual machine, the French-speaking virtual machine, the flying-a-plane virtual machine, the sightreading-Mozart virtual machine and many more. These talents get ‘installed’ by various learning processes that have to deal with the neurons’ semi-autonomous native talents, but once installed, they can structure the dispositions of the whole brain so strongly that they create higher levels of explanation that are both predictive and explanatory”.
[52]
This universal computing on the virtual-machine level depends on a nested hierarchy of computation on the levels below, as well as on the inputs from distributed networks of social computation on the level above. For more details on the taxonomy of computation, see [53].

5. Mind as a Process and Computational Architecture of Mind

Of all computational approaches, the most controversial are the computational models of mind. Historically, there is a huge variety of models of mind, some of them taking mind to be a kind of immaterial substance distinct from the material body, the most famous being the Platonic and Cartesian dualist models. In contrast to dualists, and unlike reductive materialists, who identify body and mind, Aristotle proposed the unified approach of hylomorphism (a matter-form framework), in which the soul (life) represents the form of a material body [54].
In the work On the Soul (De Anima) [55] (413a20–21), Aristotle defines a soul as “that which makes a living thing alive.” For Aristotle, soul as life is a property of every living thing, and a soul is to the body as form is to matter. Thus, the hylomorphic framework captures the unity of the body and soul (=life) of an organism. This is close to the modern views of Maturana and Varela [56], where life is cognition. Aristotle’s soul would thus correspond to the modern idea of cognition. According to Aristotle, just as matter always possesses some form, the physical body of a living organism always possesses some kind of soul (life). They are inseparable.
And how does mind relate to soul/life in Aristotle’s scheme? Shields [54] explains:
“Aristotle describes mind (nous, often also rendered as ‘intellect’ or ‘reason’) as ‘the part of the soul by which it knows and understands’ (De Anima iii 4, 429a9–10; cf. iii 3, 428a5; iii 9, 432b26; iii 12, 434b3), thus characterizing it in broadly functional terms”.
[54]
According to Aristotle, there is a nested hierarchy of soul activities (413a23): growth, nutrition, reproduction (vegetative soul), locomotion, perception, memory, anticipation (animal soul) and mind/intellect/thought (human soul).
In the current view of natural info-computation, we would say that the form (of a living being) constrains the processes characterizing the soul (the way of being alive) [57]. We are focused on the dynamics and processes (the other side of the phenomenon of structure/form). It is natural for computational approaches to consider mind as a process of changing form, a complex process of computation on the highest levels of organization of a cognizing agent. In his Incomplete Nature, Deacon argues:
“Because there are no material entities that are not also processes, and because processes are defined by their organization, we must acknowledge the possibility that organization itself is a fundamental determinant of physical causality. At different levels of scale and compositionality, different organizational possibilities exist. And although there are material properties that are directly inherited from lower-order component properties, it is clear that the production of some forms of process organization is only expressed by dynamical regularities at that level. So the emergence of such level-specific forms of dynamical regularity creates the foundation for level-specific forms of physical influence”.
[46] (p. 177)
Here, causality relates to first-order organization, that is, to relationships between physical objects, while second-order organization (such as the representation of organization) does not directly interact causally, but is connected by logical laws. This indirect relationship can be seen as a virtual machine running on the physical substrate, in the sense of Aaron Sloman [58].
There are still many connections to be made in order to better explicate the parallels between Aristotle’s ideas of soul as life and cognition, and of mind as a “part of the soul that knows”, and a contemporary understanding of those terms. In the contemporary understanding, mind is a special kind of process, characterized by Brand Blanshard as follows:
“(M)ind is a set of processes distinguished from others through their control by an immanent end. (…) At one extreme it dwindles into mere life, which is incipient mind. At the other extreme it vanishes in the clouds; it does not yet appear what we shall be. Mind as it exists in ourselves is on an intermediate level”.
[59]
Aristotle’s understanding of soul as the process of life is compatible with the contemporary understanding of cognition as life by Maturana and Varela [56], which is much broader than the current dictionary definition of cognition as “The mental action or process of acquiring knowledge and understanding through thought, experience, and the senses” (Oxford dictionary).
Interestingly enough, parallels can be drawn between the info-computational layered structure of cognition and the basic dynamical depth levels of homeodynamics, morphodynamics and teleodynamics of [47], as well as Aristotle’s three nested levels of soul (vegetative, animal and human) in his hylomorphic (matter-form based) view of life.
Aristotle’s soul is life itself, that is, cognition as an autopoietic process. Contemporary biology describes life as based on single cells, from which increasingly complex organisms successively develop and evolve, with more and more layers of cognitive information-processing architectures. The evolution of life and mind is driven by the capacity of living organisms to act on their own behalf and interact with their environments (agency). (Of interest for the study of the relationships between body and mind is the evolution of increasingly complex structures that lead to the emergence of new information-processing levels, from the basic mass-energetic level via self-organization to autopoietic/semiotic levels. Particularly noteworthy example organisms are rotifers, which have around a thousand cells, a quarter of which constitute their nervous system with the brain, and the tiny Megaphragma mymaripenne wasps, which are smaller than single-celled amoebas and yet have nervous systems and brains.)
In the context of the development and evolution of complex living structures, it is important to understand the mechanism providing the coupling between an organism and its environment. Research shows that the intrinsic dynamics of a living organism create a sensitive state that reacts to changes in the environment:
“Any system, cognitive or biological, which is able to relate internally, self-organized, stable structures (eigenvalues) to constant aspects of its own interaction with an environment can be said to observe eigenbehavior. Such systems are defined as organizationally closed because their stable internal states can only be defined in terms of the overall dynamic structure that supports them”.
[60] (p. 342)
A system in a metastable equilibrium is able to make distinctions [61], which hence constitute information as the difference that makes a difference [62] for the system. Computation as information processing is the dynamics of this network of differences [29]. For this basic metastable state of an organism capable of reacting to the relevant changes in the environment, two aspects are particularly interesting (apart from the metastability itself): time-dependence (dynamics/frequency) and fractality. They exist on all three levels of Table 1, but are especially interesting on the highest level, which subsumes the previous two. They are addressed in a radically new approach to the modeling of an entire brain in [63]. The central point in this model is the collective time behavior and (fractal) frequencies at a number of different levels of organization. Ghosh et al. suggest a possible connection between levels, from QFT (quantum field theory) up to macroscopic levels, based on the time-dependence of the coherent physical oscillators involved. It may not be a definitive model of the brain, but it contains some interesting insights.
Table 1. Parallels between levels of organization of information, mechanism, dynamics and causes.

Information | Mechanism                       | Dynamics       | Aristotelian cause
Syntactic   | Mass-energetic                  | Thermodynamics | Efficient
Semantic    | Self-organization               | Morphodynamics | Formal
Pragmatic   | Self-preservation (autopoiesis) | Teleodynamics  | Final
Independently, Andrée Ehresmann [64] proposed the bio-inspired Memory Evolutive Systems (MES), a constructive model for a self-organized multi-scale cognitive system able to interact with its environment through information processing. The MES brain model implements an info-computational approach where lower levels are modeled as Turing machines, while the global dynamics of the system are beyond-Turing due to the fact that the same symbol has several possible interpretations [65].
Even though we have a long way to go in working out detailed and robust naturalist models of life/cognition, brain and mind, the approaches of Ehresmann and Ghosh et al. demonstrate the utility of implementing natural computation in the modeling of the brain, thus repudiating objections that computationalism lacks naturalist foundations and clarity.

6. Conclusions

This article meets the following three concerns by Mark Sprevak about computationalism [2] (p. 108):
(R1) Lack of clarity: “Ultimately, the foundations of our sciences should be clear”. Computationalism is suspected of lacking clarity.
Naturalizing computationalism through the ideas of computing nature brings clarity as well as plausibility to computationalism. It aids in understanding how embodied and situated systems can be modeled computationally, as well as how self-reflective systems in biology, with different levels of cognition and mind, can be understood as computational networks organized on different levels of computation.
(R2) Triviality: “(O)ur conventional understanding of the notion of computational implementation is threatened by triviality arguments”. Computationalism is accused of triviality.
This objection is answered through parallels with physics. The grounding of computation is found in intrinsic physical processes, as modeled by Hewitt’s actor model of computation.
(R3) Lack of naturalistic foundations: “The ultimate aim of cognitive science is to offer, not just any explanation of mental phenomena, but a naturalistic explanation of the mind”. Computationalism is questioned for being abstract, formal and unnatural.
The above objection is rejected by the very construction of the framework of computing nature.
In sum, the presented proposal for meeting the criticisms against computationalism consists in a generalized understanding of computation as an intrinsic natural process, of which the symbol manipulation of the Turing machine model is a proper subset. Natural computation is described within the info-computational framework as information dynamics found in nature at different levels of the dynamics of living systems.
It has recently been suggested that even quantum-level processes play an important role in the dynamics of living beings through their coherent behavior. Thus, mind can be seen as a layered network of computational processes, reaching all the way down to the quantum level and back up to social computation, in a recursive process of production of new dynamical constraints through the interactions of system parts with their environments.

Acknowledgments

The author wants to thank the reviewers for their constructive and helpful reviews.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Scheutz, M. Computationalism New Directions; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  2. Sprevak, M. Three challenges to Chalmers on computational implementation. J. Cogn. Sci. (Seoul) 2012, 13, 107–143. [Google Scholar] [CrossRef]
  3. Miłkowski, M. Explaining the Computational Mind; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  4. Chalmers, D.J. Does a Rock Implement Every Finite-State Automaton? Synthese 1996, 108, 309–333. [Google Scholar] [CrossRef]
  5. Crutchfield, J.; Ditto, W.; Sinha, S. Introduction to Focus Issue: Intrinsic and Designed Computation: Information Processing in Dynamical Systems—Beyond the Digital Hegemony. Chaos 2010, 20. [Google Scholar] [CrossRef] [PubMed]
  6. Dodig-Crnkovic, G. Information, Computation, Cognition. Agency-based Hierarchies of Levels. arXiv:1311.0413. 2013. Available online: http://arxiv.org/abs/1311.0413 (accessed on 8 December 2015).
  7. Dodig-Crnkovic, G.; Giovagnoli, R. Computing Nature; Springer: Berlin, Germany; Heidelberg, Germany, 2013. [Google Scholar]
  8. Zenil, H. A Computable Universe. Understanding Computation & Exploring Nature as Computation; Zenil, H., Ed.; World Scientific Publishing Company/Imperial College Press: Singapore, Singapore, 2012. [Google Scholar]
  9. Zuse, K. Rechnender Raum; Friedrich Vieweg & Sohn: Braunschweig, Germany, 1969. [Google Scholar]
  10. Fredkin, E. Finite Nature. In Proceedings of the XXVIIth Rencotre de Moriond, Les Arcs, Savoie, France, 15–22 March 1992.
  11. Wolfram, S. A New Kind of Science; Wolfram Media: Champaign, IL, USA, 2002. [Google Scholar]
  12. Chaitin, G. Epistemology as Information Theory: From Leibniz to Ω. In Computation, Information, Cognition—The Nexus and The Liminal; Dodig Crnkovic, G., Stuart, S., Eds.; Cambridge Scholars Pub.: Newcastle, UK, 2007; pp. 2–17. [Google Scholar]
  13. Dodig-Crnkovic, G. Significance of Models of Computation from Turing Model to Natural Computation. Minds Mach. 2011, 21, 301–322. [Google Scholar] [CrossRef]
  14. Piccinini, G. Computation in Physical Systems. In Stanford Encyclopedia of Philosophy; Stanford University: Stanford, CA, USA, 2012. [Google Scholar]
  15. Putnam, H. Representation and Reality; The MIT press: Cambridge, MA, USA, 1988. [Google Scholar]
  16. Searle, J.R. The Rediscovery of the Mind; The MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  17. Fresco, N. Physical Computation and Cognitive Science; Springer Berlin Heidelberg: Berlin, Germany; Heidelberg, Germany, 2014. [Google Scholar]
  18. Stepney, S. The neglected pillar of material computation. Phys. D Nonlinear Phenom. 2008, 237, 1157–1164. [Google Scholar] [CrossRef]
  19. Stepney, S. Programming Unconventional Computers: Dynamics, Development, Self-Reference. Entropy 2012, 14, 1939–1952. [Google Scholar] [CrossRef]
  20. Rozenberg, G.; Bäck, T.; Kok, J.N. (Eds.) Handbook of Natural Computing; Springer: Berlin, Germany; Heidelberg, Germany, 2012.
  21. Dodig-Crnkovic, G.; Müller, V. A Dialogue Concerning Two World Systems: Info-Computational vs. Mechanistic. In Information and Computation; Dodig Crnkovic, G., Burgin, M., Eds.; World Scientific Publishing Company/Imperial College Press: Singapore, Singapore, 2011; pp. 149–184. [Google Scholar]
  22. Lloyd, S. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos; Knopf: New York, NY, USA, 2006. [Google Scholar]
  23. Wheeler, J.A. Information, physics, quantum: The search for links. In Complexity, Entropy, and the Physics of Information; Zurek, W., Ed.; Addison-Wesley: Redwood City, CA, USA, 1990. [Google Scholar]
  24. Floridi, L. Open Problems in the Philosophy of Information. Metaphilosophy 2004, 35, 554–582. [Google Scholar] [CrossRef]
  25. Floridi, L. Informational realism. In Selected Papers from Conference on Computers and Philosophy—Volume 37 (CRPIT’03); Weckert, J., Al-Saggaf, Y., Eds.; Australian Computer Society, Inc.: Darlinghurst, Australia, 2003; pp. 7–12. [Google Scholar]
  26. Sayre, K.M. Cybernetics and the Philosophy of Mind; Routledge & Kegan Paul: London, UK, 1976. [Google Scholar]
  27. Floridi, L. Against digital ontology. Synthese 2009, 168, 151–178. [Google Scholar] [CrossRef] [Green Version]
  28. Fresco, N.; Staines, P.J. A revised attack on computational ontology. Minds Mach. 2014, 24, 101–122. [Google Scholar] [CrossRef]
  29. Dodig-Crnkovic, G. Info-computational Constructivism and Cognition. Constr. Found. 2014, 9, 223–231. [Google Scholar]
  30. Kull, K. Umwelt. In The Routledge Companion to Semiotics; Cobley, P., Ed.; Routledge: London, UK, 2010; pp. 348–349. [Google Scholar]
  31. Vedral, V. Decoding Reality: The Universe as Quantum Information; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  32. Chiribella, G.; D’Ariano, G.M.; Perinotti, P. Quantum Theory, Namely the Pure and Reversible Theory of Information. Entropy 2012, 14, 1877–1893. [Google Scholar] [CrossRef]
  33. Goyal, P. Information Physics—Towards a New Conception of Physical Reality. Information 2012, 3, 567–594. [Google Scholar] [CrossRef]
  34. Dodig-Crnkovic, G. Knowledge Generation as Natural Computation. J. Syst. Cybern. Inform. 2008, 6, 12–16. [Google Scholar]
  35. Dodig-Crnkovic, G. Physical Computation as Dynamics of Form that Glues Everything Together. Information 2012, 3, 204–218. [Google Scholar] [CrossRef] [Green Version]
  36. Kampis, G. Self-Modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information, and Complexity; Pergamon Press: Amsterdam, The Netherlands, 1991. [Google Scholar]
  37. Kampis, G. Computability, Self-Reference, and Self-Amendment. Commun. Cogn. Artif. Intell. 1995, 12, 91–109. [Google Scholar]
  38. Miłkowski, M. Is computationalism trivial? In Computation, Information, Cognition—The Nexus and the Liminal; Dodig-Crnkovic, G., Stuart, S., Eds.; Cambridge Scholars Press: Newcastle, UK, 2007; pp. 236–246. [Google Scholar]
  39. Hewitt, C. What is computation? Actor Model versus Turing’s Model. In A Computable Universe, Understanding Computation & Exploring Nature as Computation; Zenil, H., Ed.; World Scientific Publishing Company/Imperial College Press: Singapore, Singapore, 2012. [Google Scholar]
  40. Searle, J.R. Is the brain a digital computer? Proc. Addresses Am. Philos. Assoc. 1990, 64, 21–37. [Google Scholar] [CrossRef]
  41. Chrisley, R. Why everything doesn’t realize every computation. Minds Mach. 1994, 4, 403–420. [Google Scholar] [CrossRef]
  42. Dodig-Crnkovic, G. Epistemology Naturalized: The Info-Computationalist Approach. APA Newsl. Philos. Comput. 2007, 6, 9–13. [Google Scholar]
  43. Crutchfield, J.; Wiesner, K. Intrinsic Quantum Computation. Phys. Lett. A 2008, 374, 375–380. [Google Scholar] [CrossRef]
  44. Maldonado, C.E.; Gómez Cruz, A.N. Biological hypercomputation: A new research problem in complexity theory. Complexity 2014, 20, 8–18. [Google Scholar] [CrossRef]
  45. Fisher, J.; Henzinger, T.A. Executable cell biology. Nat. Biotechnol. 2007, 25, 1239–1249. [Google Scholar] [CrossRef] [PubMed]
  46. Deacon, T. Incomplete Nature. How Mind Emerged from Matter; W.W. Norton & Company: New York, NY, USA; London, UK, 2011. [Google Scholar]
  47. Deacon, T.; Koutroufinis, S. Complexity and Dynamical Depth. Information 2014, 5, 404–423. [Google Scholar] [CrossRef]
  48. Friston, K. Hierarchical models in the brain. PLoS Comput. Biol. 2008, 4, e1000211. [Google Scholar] [CrossRef] [PubMed]
  49. Hewitt, C.; Bishop, P.; Steiger, P. A Universal Modular ACTOR Formalism for Artificial Intelligence. In Proceedings of the 3rd International Joint Conference on Artificial Intelligence (IJCAI), Stanford, CA, USA, August 1973; Nilsson, N.J., Ed.; William Kaufmann: Stanford, CA, USA; pp. 235–245.
  50. Hewitt, C. Actor Model for Discretionary, Adaptive Concurrency. CoRR. 2010. abs/1008.1. Available online: http://arxiv.org/abs/1008.1459 (accessed on 8 December 2015).
  51. Fitch, W.T. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition. Phys. Life Rev. 2014, 11, 329–364. [Google Scholar] [CrossRef] [PubMed]
  52. Dennett, D. The Software/Wetware Distinction: Comment on “Unifying approaches from cognitive neuroscience and comparative cognition” by W Tecumseh Fitch. Phys. Life Rev. 2014, 11, 367–368. [Google Scholar] [CrossRef] [PubMed]
  53. Burgin, M.; Dodig-Crnkovic, G. A Taxonomy of Computation and Information Architecture. In Proceedings of the 2015 European Conference on Software Architecture Workshops (ECSAW’15), Dubrovnik/Cavtat, Croatia, 7–11 September 2015; Galster, N., Ed.; ACM Press: New York, NY, USA, 2015. [Google Scholar]
  54. Shields, C. Aristotle’s Psychology. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; 2015; Available online: http://plato.stanford.edu/archives/spr2015/entries/aristotle-psychology/ (accessed on 8 December 2015).
  55. Aristotle. On the Soul. (De Anima.). Available online: http://classics.mit.edu/Aristotle/soul.html (accessed on 8 December 2015).
  56. Maturana, H.; Varela, F. Autopoiesis and Cognition: The Realization of the Living; D. Reidel Pub. Co.: Dordrecht, The Netherlands, 1980. [Google Scholar]
  57. Dodig-Crnkovic, G. Modeling Life as Cognitive Info-Computation. In Computability in Europe 2014; LNCS; Beckmann, A., Csuhaj-Varjú, E., Meer, K., Eds.; Springer: Berlin, Germany; Heidelberg, Germany, 2014; pp. 153–162. [Google Scholar]
  58. Sloman, A.; Chrisley, R. Virtual machines and consciousness. J. Conscious. Stud. 2003, 10, 113–172. [Google Scholar]
  59. Blanshard, B. The Nature of Mind. J. Philos. 1941, 38, 207–216. [Google Scholar] [CrossRef]
  60. Rocha, L.M. Selected Self-Organization and the Semiotics of Evolutionary Systems. In Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-Organization; Salthe, S., van de Vijver, G., Delpos, M., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998; pp. 341–358. [Google Scholar]
  61. Basti, G.; Perrone, A. On the cognitive function of deterministic chaos in neural networks. In Proceedings of the IEEE International Conference on Neural Networks, Washington, DC, USA, 18–22 June 1989; Volume I, pp. 657–663.
  62. Bateson, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology; University of Chicago Press: Chicago, IL, USA, 1972. [Google Scholar]
  63. Ghosh, S.; Aswani, K.; Singh, S.; Sahu, S.; Fujita, D.; Bandyopadhyay, A. Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System. Information 2014, 5, 28–100. [Google Scholar] [CrossRef]
  64. Ehresmann, A.C. MENS, an Info-Computational Model for (Neuro-)cognitive Systems Capable of Creativity. Entropy 2012, 14, 1703–1716. [Google Scholar] [CrossRef]
  65. Ehresmann, A.C. A Mathematical Model for Info-computationalism. Constr. Found. 2014, 9, 235–237. [Google Scholar]
