Article

Concept of Information as a Bridge between Mind and Brain

Akita International University, 193-2 Okutsubakidai, Yuwa, Akita-shi, 010-1211 Akita, Japan
Information 2011, 2(3), 478-509; https://doi.org/10.3390/info2030478
Submission received: 16 May 2011 / Revised: 26 July 2011 / Accepted: 3 August 2011 / Published: 16 August 2011
(This article belongs to the Special Issue Selected Papers from "FIS 2010 Beijing")

Abstract

The article is focused on the special role of the concept of information, understood in terms of the one-many categorical opposition, in building a bridge between mind and brain. This particular choice of the definition of information allows unification of the two main manifestations of information implicitly present in the literature, the selective and the structural. It is shown that the concept of information formulated this way, together with the concept of information integration, can be used to explain the unity of conscious experience, and furthermore to resolve several fundamental problems such as understanding the experiential aspect of consciousness without getting into the homunculus fallacy, defending free will from mechanistic determinism, and explaining symbolic representation and aesthetic experience. The dual character of selective and structural manifestations opens the way between the orthodox information scientific description of the brain in terms of the former, and the description of mind in terms of the latter.

1. Introduction

The question “What is information?” is misleading. It suggests that the word “information” has some unique meaning waiting to be discovered, and that our task is to find it. There is no single tradition in the use of this term. Moreover, it is not even clear whether a single concept of information can be used in all contexts in which this word appears. Thus, the issue is not which particular definition is right or wrong, but how to construct a concept which is logically sound (not many of the variety found in the literature satisfy this necessary condition) and of sufficient value for our comprehension of the world to be worth attention. Certainly, definitions which unify some earlier concepts into one more general without trivialization or over-extension are of special interest, if we can find evidence that this unification is a reflection of the actual interdependence of denotations. Of course, it is possible that there exists some entity or phenomenon which is manifested in the multiple ways we associate with information in the variety of contexts, but its discovery requires the use of analytical tools and clearly defined concepts within more specific domains.

Information is not the only concept whose definitions have been competing for a long time. For instance, the concept of culture has been discussed much longer. In 1952, Alfred L. Kroeber and Clyde K.M. Kluckhohn had already summarized in a critical review 164 earlier definitions, of course, adding their own [1]. Arthur Lovejoy in 1927 studied 66 ways in which the word “nature” has been understood in the context of aesthetics [2]. In comparison to those concepts, meaning may seem quite uniform in its understanding, with only 16 of its different meanings studied in the classic book of Charles Kay Ogden and Ivor Armstrong Richards, “The Meaning of Meaning”, published in 1923 [3]. Different definitions do not always compete in establishing the meaning of a term. Sometimes, they simply clarify the existing distinctions between concepts which for some historical reasons adopted the same word.

Criteria for evaluation of the concepts of information may depend on the philosophical position regarding the meaning of the “world” and on the expectations regarding their functions. However, some criteria are quite obvious, without any need to go beyond common sense, or at least beyond common intellectual practice.

For instance, it is natural to expect that the concept of information should allow us to build connections with a broad range of concepts and ideas present in philosophical tradition and with scientific theories of relevant phenomena. Also, the value of the concept can be confirmed by the development of a formalism allowing its theoretical study.

Here is a point of special importance for the study of information due to the historical circumstances surrounding its early development. The interest in information as the central concept of a new discipline was stimulated by Claude Shannon's study of communication, in which he introduced entropy as a measure of some magnitude characterizing the process of generation, transmission, and retrieval of messages [4]. The association of messages with information led to entropy being considered a measure of information. Originally Shannon was not so clear about what was actually being measured, but information was one of his answers, and he even considered at one point using the term “information” instead of “entropy” for the quantitative characteristic of transmission, choosing the latter on advice from John von Neumann.

The conviction that entropy measures something, and that therefore this something must exist, has been very common since then. For many practically oriented users of, or contributors to, Shannon's information theory, the existence of what was declared a measure of information served as a substitute for a definition. Information is simply that which is measured by entropy. But how can we be sure that entropy measures anything?

Entropy as defined by Shannon (as well as almost all generalizations which followed this definition) is a functional which assigns some non-negative real number to probability distributions. In the case of infinite probability spaces, some distributions have undefined (“infinite”) entropy. From the mathematical point of view, entropy is not a measure at all. It gives some quantitative characteristic of probability distributions. Moreover, it is clearly different from its sister concept in physics, as in the absence of Boltzmann's constant it does not have a physical dimension. So it does not measure any physical magnitude either.
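These points can be made concrete with a minimal Python sketch (an editorial illustration, not part of the original analysis). Entropy is implemented as a functional mapping a finite distribution to a dimensionless non-negative number (the base of the logarithm fixes only the unit, bits for base 2); the truncated sums at the end use a standard textbook example of a distribution on a countably infinite space, with p_n proportional to 1/(n·(log n)²), whose entropy series diverges, so the truncated values keep growing (very slowly) without bound.

```python
import math

def entropy_bits(p):
    """Shannon entropy of a finite probability distribution, in bits.
    A functional: it maps a distribution (list of probabilities) to a
    dimensionless non-negative real number."""
    assert abs(sum(p) - 1.0) < 1e-9
    return -sum(x * math.log2(x) for x in p if x > 0)

print(entropy_bits([0.5, 0.5]))      # 1.0 bit
print(entropy_bits([0.25] * 4))      # 2.0 bits

# On a countably infinite space entropy need not be finite.  For
# p_n proportional to 1/(n * (log n)^2), n >= 2, the normalizing
# constant exists, but the entropy series diverges; the truncated,
# renormalized distributions below have ever-growing entropy.
for N in (10**3, 10**4, 10**5, 10**6):
    w = [1.0 / (n * math.log(n) ** 2) for n in range(2, N)]
    Z = sum(w)
    print(N, entropy_bits([x / Z for x in w]))
```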

Thus, there is a long way from Shannon's entropy to measuring something which would have a clear ontological status. Even if we accept the view that entropy actually measures something which we choose to call information, we have to accept that consequently information is described by a probability distribution. Then, there is a legitimate question whether every probability distribution is associated with information (as for every probability distribution we can find its entropy, possibly infinite), or whether we have to identify some domain of application for probability theory such that only within this domain do probability distributions describe instances of some entity or relationship which can be identified as information.

We have to remember that in Shannon's original study the concept of information does not have any formal representation. It appears only in the interpretation of the formalism, but is completely absent in its formal conceptual framework focusing on probability distributions describing selection of characters for messages. Nothing would happen if we removed the term “information” from Shannon's paper.

Without diminishing the immense importance of Shannon's work on the mathematical theory of communication for the study of information, the arguments of Yehoshua Bar-Hillel that it is a “theory of signal transmission” rather than an “information theory” seem convincing, if a theory of some concept is supposed to provide a description and explanation of its properties [5]. Very likely, concepts such as entropy, its generalizations or alternatives may serve as an important measure of information, provided we can formulate the concept of information clearly enough that it can be associated with a probability distribution. But thus far we have an extensive body of literature on the “mathematical theory of entropy” rather than on “information theory”.

There are many important and urgent tasks for a formal theory of information of this type. For instance, significantly different probability distributions may have the same entropy. This suggests that information may be characterized quantitatively by entropy only to some degree. Are there any other, complementary quantitative characteristics of information? What exactly is the relationship between entropy as a measure of information and physical entropy? Or, alternatively, what is the role of Boltzmann's constant, more exactly of its physical dimension, in the transition from one to the other?
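The first of these questions can be illustrated with a small worked example (again an editorial sketch, not taken from the paper): a uniform distribution over four outcomes and a visibly different, skewed distribution over five outcomes both have an entropy of exactly 2 bits, so entropy alone characterizes a distribution only partially.

```python
import math

def H(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

uniform = [1/4, 1/4, 1/4, 1/4]         # four equally likely outcomes
skewed  = [1/2, 1/8, 1/8, 1/8, 1/8]    # five outcomes, one dominant

print(H(uniform), H(skewed))           # both equal exactly 2.0 bits
```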

The list of such questions is long, but they do not belong to the subject of this paper. Here, we will assume that the need for a connection between the studies of information and physics is clear enough. More problematic is the connection with what constitutes our experience of being a human subject. As long as we are concerned with the physical appearance of the human individual expressed in terms of biology or physiology, there is no need to go beyond the methodology of information science as developed in the interaction with physics, although it may not be sufficient to understand complex systems such as an organism or a cell. The situation becomes hopeless when we try to stay within the limits of the old paradigm of research in the study of what constitutes mind and how mind is related to its bodily expression—the brain.

The main objective of the present paper is to show that the concept of information formulated by the author in earlier articles can serve as a bridge between mind and brain. In the opinion of the author, it is an argument validating this particular choice of the definition. However, there will be no explicit use here of the formalism developed by the author for the purpose of theoretical studies of the concept, therefore most of the article can apply to any approach to information which considers its structural manifestation and includes its level of integration as a fundamental characteristic.

The main motivation for this article is in the author's belief that, although the concept of information is a key to understanding and studying the mind-brain relationship, the earlier directions of research on the mechanisms in the brain responsible for consciousness, cognition, and the sense of self did not adequately identify the phenomenal elements of our human experience which should be explained in terms of information. Without such identification, the attempts to model functions of the brain have been diverted from the question “What type of mechanisms can explain our phenomenal experience of consciousness?” to one of the type “What mechanisms in the brain can be implemented or modeled using computers with one of several existing equivalent types of architecture?” After more than half a century of research, the progress in understanding the relationship between mind and brain is disappointing. What is the cause?

Although the cognitive functions of the brain seem to fit the orthodox analytic account of thinking in linguistic form very well, even in this domain there is a faulty assumption that they can be modeled exclusively using current computer architecture (basically of the Turing type) and that the best material for the analysis of thinking is in linguistic logic or in the computation based on it.

It can be easily observed that formal logic has developed in only one civilization (the Indian logic of Nyaya is only superficially similar, being actually focused on causal relationships between events, not on formal relationships between elements of the symbolic representation of reality [6]). It would be surprising if such a fundamental function of the brain as logical thinking had developed in only one cultural formation.

Even in the discipline closest to logic, mathematics, the role of logic is not as obvious as introductory texts maintain. In mathematical practice, logic does not direct or regulate the way of thinking, but provides criteria for valid reasoning in articulated form. Mathematicians rarely use logical rules as a method of invention, but rather as a tool for articulating and proving already identified results; frequently their heuristic methods are quite irrational (i.e., the logical process of inference does not describe the actual cognitive process). And this identification of the results, sometimes of a great level of complication, may occur in a fraction of a second, as documented in the (then) controversial, but now classic, book by Jacques Hadamard on the psychology of mathematical invention [7].

Computation, even when understood as a process performed by a human computer, as in the famous paper of Alan Turing [8], is not much more natural for the human brain than logic. Robert Recorde had to write an extensive defense of “the art and use of arithmetick with pen” in the mid 16th century to popularize the then-exotic method of calculating with a pen, when everyone else was using counters [9]. How much can we learn about the mind through studies of processes so foreign to Europeans less than five hundred years ago?

In this paper, the focus is on the aspects of human mind which are different from those usually considered in attempts to develop its model. Thus, not logic or computation, but the ability to integrate information is here of primary interest.

In psychological studies of the way people perceive the mind, two main dimensions have been distinguished: the dimension of experience and that of agency [10]. Although experience and agency may be interpreted in various ways, the former can be related to the capacity to experience the existence of some entities (including the self) and their states (of being red, or being hungry), the latter to the capacity to act on the content of experience (here too, including the experience of self), associated with the ability to establish some goals to be achieved through the action. It seems that thus far the search for the mechanisms responsible for the mind's functions has been biased towards the latter dimension. This is not a surprise, considering the decades of domination of the behavioral approach. However, without balancing the study by considering both dimensions we cannot expect any progress in this domain.

For our present task, it is important to recognize that both dimensions require some form of information integration. There is no possibility of experiencing the existence of anything without being aware of its integrity, i.e., without integrating the information constituting the object of our experience. But agency requires some form of integration, too. First, we have to act on something, and we have to assume that the object of our action will retain its integrity, although it may, or should, change its state as a result of our action. An additional integrating factor necessary in this dimension is the interaction relating the subject and object of the action.

2. Concepts of Information and Information Integration

Although the formalisms of information and information integration will not be used directly in the presentation of the main theses of this article, it is necessary to provide an outline of these two concepts. For both concepts there are possible mathematical models which allow their theoretical analysis. But here we are concerned not with the study of information, but with justification of the choice of its particular conceptualization. Once this choice is outlined, the following sections of the paper will focus on explaining how the use of these concepts may help in answering questions regarding the connections between mind and brain.

2.1. Information and Its Dual Manifestations

Since a more extensive presentation of the definition of information used in this paper can be found elsewhere [11], a brief outline of the concept will be sufficient here. Thus, information is understood in this paper as an identification of the variety, i.e., that which makes one out of the many. It presupposes some variety (many) which can be identified as a carrier of information, and some form of unity (one) which is predicated of this variety. Since the relationship (opposition) of one to many is relative, so is the concept of information understood this way.

There are two most basic ways the many can be made one: by a selection of one out of the many (the selective manifestation of information), or by a structure introduced into the many which unites it into a whole (the structural manifestation of information). These are two complementary manifestations of information for different varieties or information carriers. They are not separate types of information, as each of them requires the presence of the other. If the elements of the variety are devoid of any structure, it is difficult to expect any information in the selection of one of them. On the other hand, every particular structure imposed on the elements of the variety can be considered an outcome of the selection of one of a variety of possible structures.

As a consequence of this understanding of information, it has two main characteristics. One is quantitative, referring to the selective manifestation: the measure of information reflecting the size of the variety and the level of determination of the selection, for instance using entropy or, to be consistent with the definition, rather the alternative but closely related measure introduced by the author [12]. The other is qualitative (but possibly admitting a quantitative form), referring to the structural manifestation: a level of information integration which reflects the mutual interdependence of the elements of a variety [13].

Since the orthodox study of information initiated by Claude Shannon can be easily interpreted as a study of the selective manifestation of information, there is no risk that the specific understanding of the concept of information in the present paper can cause any confusion, or that it can make the views presented here difficult to comprehend. On the other hand, the fact that information is understood by the author in terms of the one-many opposition helps to understand why for the author the unity of conscious experience qualifies consciousness as an information phenomenon.

2.2. Information Integration

The approach to consciousness and the cognitive functions of the brain presented here is heavily dependent on the concept of information integration, which seems to have no antecedents in the literature of information studies preceding the author's publications; the author believes that the neglect of this aspect of information should be blamed for the disappointing effects of the domination of the research paradigm guided by the computer metaphor of the brain.

The author of the present paper has proposed a few models (with an increasing level of generality) of the process of information integration [13–15]. The earlier version of the integrating unit had a formalism similar to the structure used in the quantum mechanical formalism, based on the concept of an orthomodular lattice, called quantum logic.

The motivation for this choice came from the fact that quantum mechanical systems have been considered potential candidates for the description of consciousness for a long time. The difficulty in finding mechanisms known to classical physics exhibiting such a form of unity as in the phenomenal experience of consciousness has directed the attention of many researchers to quantum mechanics. In the formalism of quantum mechanics the unity of complex systems is expressed by the superposition principle combining their component states into a whole. A purely quantum mechanical system allows unrestricted superposition.

However, actual physical systems typically involve superselection rules restricting superposition. This is reflected in the lattice-theoretic formalism in the distinction between the center of the quantum logic (i.e., of the lattice of closed subspaces of the Hilbert space on which states are defined), which retains classical characteristics formally expressed as the lattice-theoretic distributivity property, and the purely quantum sectors of coherence with unrestricted superposition, each corresponding to one (minimal non-zero) element of the center.

From the algebraic point of view, the quantum logic of a physical system exhibiting both classical characteristics (such as mass), which introduce superselection rules, and quantum ones is a direct product of its purely quantum coherent sectors, in which the elements of the center play the role of indices. A purely quantum system has a trivial two-element center and is simply direct-product irreducible. A purely classical system can be reduced completely to a direct product of trivial two-element coherent sectors. In spite of the name suggesting a physical interpretation, purely quantum (i.e., irreducible or indecomposable) structures can be found in many mathematical theories unrelated to physics, such as projective geometry [16]; therefore the association with physics or quantum mechanics is only heuristic and does not limit the domain in which the concept of information applies.

The approach proposed by the present author differs from other approaches searching for an explanation of the unity of consciousness in quantum mechanics in the absence of the assumption that the brain or its regions responsible for consciousness are actually quantum-mechanical objects. Instead, it is postulated that the structure of possible states of the system implementing information integration consists of a lattice which has nontrivial coherent components. Thus our sense of unity in conscious experience is a reflection of the irreducibility of this structure into a product of independent components.

The consecutive models of information integration proposed by the author went beyond the analogy to quantum logic [13]. A more adequate and more general formalism can be based on the concept of a closure space, i.e., a set equipped with a general closure operator generalizing the familiar concept of topological closure. The connection with the earlier approach is through the complete lattice of closed subsets, which in general does not have to have an orthocomplementation defined, but can still be understood as a logic of a given information system. The use of the term “logic” in this context is different, or, more exactly, more general than in the context of language analysis, but is definitely closer to the original meaning. At this level of generality, the absence of orthocomplementation (more generally, of an involution on this lattice), which can be identified with the negation of traditional or quantum logic, is significant, as its presence is a highly restricting condition. The level of irreducibility of this logic of information (i.e., of the lattice of closed subsets) is a reflection of the level of information integration.
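The kind of structure involved can be illustrated with a small Python sketch (an editorial toy example under simplifying assumptions, not the author's formalism). A closure operator on a four-element set is generated by implication-style rules; the resulting family of closed subsets is the “logic” of the toy system. With two independent components the lattice of closed subsets factors as a direct product of the component lattices; an added cross-dependence destroys this product structure, and it is this kind of irreducibility that is taken here as a reflection of information integration.

```python
from itertools import chain, combinations

def closure(X, rules):
    """Smallest superset of X closed under implication rules (a -> b)."""
    X = set(X)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in X and b not in X:
                X.add(b)
                changed = True
    return frozenset(X)

def closed_sets(universe, rules):
    """All closed subsets, i.e., the images (fixed points) of the closure."""
    subsets = chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))
    return {closure(s, rules) for s in subsets}

U = [0, 1, 2, 3]

# Decomposable case: components {0,1} and {2,3} are mutually independent.
independent = closed_sets(U, rules=[(0, 1), (2, 3)])
# Integrated case: the cross rule 0 -> 3 ties the two components together.
integrated = closed_sets(U, rules=[(0, 1), (2, 3), (0, 3)])

left = closed_sets([0, 1], rules=[(0, 1)])    # 3 closed sets
right = closed_sets([2, 3], rules=[(2, 3)])   # 3 closed sets
product = {a | b for a in left for b in right}

# Independent case: 9 closed sets, exactly the direct product (prints True).
# Integrated case: 8 closed sets, no longer a product (prints False).
print(len(independent), independent == product)
print(len(integrated), integrated == product)
```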

We can expect that the mechanisms of information integration in the brain allow for a changing level of integration, i.e., that the underlying structure is variable. This is reflected in the mind as an ability to focus attention on parts of the field of perception and to distinguish particular objects, each corresponding to a sector of coherence in the overall structure. This corresponds in the mathematical formalism to the variation of the structure describing information integration between different levels of irreducibility.

The main objective of this paper is to provide a new perspective on the fundamental aspects of human consciousness relevant to the question about the mechanisms of human thinking. This new perspective is based on a model of consciousness utilizing information integration as a fundamental process. The task of comprehensively discussing all three constituents of human consciousness (cognition, conation, and affection) is too extensive to be carried out in this paper, especially because the approach utilizing the concept of information integration applies to each of them to a different degree.

For instance, the primary functions of the lower-level affective mechanisms are the discrimination of stimuli and the preparation, selection and execution of responses. The higher functions include the control and motivation of actions, which also involves discrimination. On the other hand, it is very likely that some form of information integration is involved in the transition from emotions to feelings, which affective mechanisms use to shortcut cognitive processes in selection.

We now have at our disposal the two fundamental concepts of information and information integration, with theoretical tools for their analysis. Now we can turn to the main objective of this article: to provide arguments that they give us tools for the discussion of the relationship between mind and brain. In particular, it is important to justify the claim that information integration is critical for this purpose. After all, information integration by a cognitive system did not even make it to the famous list of hard problems of consciousness compiled by David Chalmers, who placed it as the second item on the list (updated for the 2007 Blackwell compendium on consciousness) of easy problems, apparently not worth his attention, as it is difficult to find its solution in his writings [17].

3. The Unity of Conscious Experience

3.1. Self and Universe

The unity of conscious experience can be considered from at least two perspectives. One is related to the question of the content of consciousness, the other to that of its form, or phenomenal appearance. In the first, the point of departure is the observation that what we experience has a necessarily dual character expressed in the opposition between one and many. We experience a large variety of stimuli, so at first we may conclude that the content of our consciousness should rather be characterized as a variety. However, no matter how large the variety of stimuli is, their conscious experience consists in a process of integration into a whole called “consciousness”.

This process goes beyond what in common discourse would be called recognition of existing patterns. For instance, we see in the clouds “nonexistent” objects (already recognized by Aristotle in his Meteorology or by Pliny in Natural History), pictures of gods on the surfaces of split rocks (as recorded by Pliny in the story of the agate of Pyrrhus), etc. These and other “chance images” have not only been described and studied for a long time in some cultures, but a special instance of them has become a tool in modern psychological practice as the diagnostic test known as the Rorschach ink blots. Thus, the process of structuring the variety of perceptions leading to their integration is clearly carried out by the conscious subject. Sometimes it can be interpreted as extracting or drawing from the content of the subconscious. It is a matter of philosophical reflection to what extent the process corresponds to external divisions of reality into unified units independent of our consciousness, but this will not be discussed here.

We integrate our perceptions into objects, but integration does not stop at this point. The objects of our experience form another variety which is integrated into the one reality, the universe. The integration has a dual character expressed in spatio-temporal relations. There is one universe which changes without losing its unity and identity.

Our experience resulting from communication (supported by mirror neurons in our brain, according to recent neuro-psychological research) suggests that there are other conscious subjects. This leads us to the expectation that the common part of the content of consciousness is in some correspondence to an “objective reality”, which by its very definition is unique. At this point, philosophical reflection may diverge from our individual conscious experience, postulating that the common part of the conscious experience of all conscious subjects corresponds to the part of an independent larger whole which is the actual unique reality.

The division between the common and private components of consciousness leads to another duality, if the common part is considered a reflection of an independent reality: the duality of the internal or subjective self/mind and the corresponding part of external, objective reality. We have an intuitive association of the location of the self within that which we experience as our body, and although historically and culturally its more exact location has not been unique, our modern view is that it is our brain that is responsible for our conscious experience, at least as a mediator between self and body.

Psychological studies of the so-called “out-of-body” experience suggest that the intuitive identification of the self with the body is not necessary, and that about 5% of people experience a temporary subjective displacement of the self. Recent research identifies the temporoparietal junction (TPJ) in the brain as responsible for generating the subjective location of the self in the body, based on reported out-of-body experiences induced by stimulation of this region [18]. Also, the degree to which the self is identified with the individual body is a matter of cultural variation, which led Hazel Rose Markus and Shinobu Kitayama [19] to the distinction of two different construals of the self, called “independent” and “interdependent”, dominating in different cultures. Since theirs is a study in cross-cultural psychology, the approach of these authors is naturally reversed, as they assume the priority of the individual existence of human subjects, who construe their self at a higher or lower level of personal independence.

In the perspective of the present article, within the experience of consciousness the division is made into the self and the external reality which includes possible other selves. However, this does not change the fact that the identification of the self with the body and the distinction of other selves are subject to cultural variation. The point is that the distinction of the self and its opposition to the “external” world may be considered an argument against the unity of conscious experience. However, the relationship between the self and external reality is of a uniting character. The self cannot exist without external reality, although philosophical reflection makes the reverse conceivable.

We can go further in asking about the origin of the self. At first we could think of the self as an expression of the unity or organization of the human organism. It would be a control system of the organism. However, most of the functions of the organism are unconscious, and loss of control of the organism does not mean extinction of the self, as in the case of severely paralyzed individuals. When we go beyond that which can be confirmed by our experience into the sphere of potentiality, many people believe in the possibility of the existence of the self after the destruction of the body. This does not prove anything, but only shows that we can imagine survival of the self without the body. It is more likely that the self is not just an outcome of the integrity of the organism, and that the experience of self is a result of the experience of the unity of conscious experience. How can we explain the unity of what we perceive as reality, if there is nothing which could make the variety of perceptions (or variety of real objects) united?

Thus, the self is a necessary condition of our perception of the experienced variety in a unified way. In other words, we can say that the self is just an external, objectified view of our consciousness understood as a process integrating our perceptions. It is necessary, as without the self we cannot integrate our conscious experience of being conscious into a united consciousness. Thus, the self is a self-reference to the experience of being conscious, integrating it into the unity of our consciousness. Furthermore, without the self, there is no compelling reason for experiencing the unity of reality.

This external view of consciousness expressed as the self brings us to the second perspective on the unity of consciousness, from the point of view of the form of conscious experience. This unity is just the unity of the self. Here it can be pointed out that in psychology there are known cases of individuals who experience differences in their personalities. However, this does not mean that such a person has the experience of two different selves at the same time. It is probable that everyone experiences changes in mood, attitude, and motivation. It is possible to experience difficulty in identifying the present state of mind with that of yesterday. We can experience tension between conflicting elements of conation. Finally, we can experience delusions which substitute in our consciousness for the parts which normally belong to the common (“objective”) part of conscious experience. But this does not mean that consciousness is no longer an integrative process, although this process can be quite different from the typical one.

3.2. The Unity of Consciousness in Philosophical Tradition

Thus, the unity of the content and the form of conscious experience are just two ways of looking at the same unity of consciousness, which has already been recognized in the earliest attempts to understand our human experience, carried out in religious and philosophical studies. Such a recognition has been an outcome of the reflection on the most striking opposition in our experience of reality. The opposition of one-and-many has preoccupied religious and philosophical thinkers for millennia. In Western civilization we can find reflection on one-and-many and an expression of the view of the integrating character of thought as early as in the surviving fragments of the works of the pre-Socratic philosophers.

Sextus Empiricus in Against the Mathematicians (IX, 144) quotes Xenophanes saying “If the divine exists, it is a living thing; if it is a living thing, it sees—for he sees as a whole, he thinks as a whole, he hears as a whole. If it sees, it sees both white things and black” [20]. More generally, pre-Socratic philosophers were seeking wisdom in identifying the unity in the world. For instance, Aristotle in On the World (396b20–25) refers to what “…was said by Heraclitus the Obscure: Combinations—wholes and not wholes, concurring differing, concordant discordant, from all things one and from one all things. In this way the structure of the universe—I mean, of the heavens and the earth and the whole world—was arranged by one harmony through the blending of the most contrary principles” [21]. The word cosmos in English comes from its Greek predecessor meaning the subject of our experience organized into a harmonious whole.

More extreme was the view of Parmenides that the necessary condition of being is unity, expressed explicitly by his followers Melissus and Zeno of Elea in the reasoning that, since the variety is an illusion and what actually exists can only be one, the recognition of the unity is the only form of actual comprehension.

In the philosophy of Aristotle, the issue of the integration of information can be identified in his analysis in “De Anima” of “common sensibles” (in modern psychological terminology, “binding”). He excluded the existence of a sixth sense, but considered the common sense a faculty by which common sensibles are perceived together as a single object.

Outside of the European intellectual tradition, it is easy to find an analogy of the reasoning presented above about the relationship between the unity of the self, the unity of consciousness and of what we understand as reality, in the identification of atman (the essence of self) and brahman (the essence of reality) in the Vedantic tradition of advaita (non-duality), introduced as a way to resolve the opposition of one-and-many inherent in our comprehension of the world.

This view has not always been expressed as an open declaration of the unity of our conscious experience or of an integrative power of the intellect. Sometimes thinkers stopped short of it, expressing instead the view that comprehension of reality is impossible without such an integrative power, since unity is the most fundamental characteristic of what actually exists. This culturally universal view might have been maintained in spite of the objection that the recognition of such unity is impossible to articulate in language and requires some form of direct experience transcending the limitations of linguistic expression. Thus, the Tao, the way in Taoism (as well as in several other philosophical and religious systems of the Far East), is a metaphorical presentation of the unity of true reality in spatiotemporal terms; the Tao expressed in words is not the true Tao. It is easy to understand this objection considering that every use of language leads to analysis, which contradicts the synthesis intended as a way of understanding. The importance of unity in Taoist philosophy is not just a result of our modern interpretation of the metaphor of the way, as can be seen in the explicit references to the One by Lao Tzu in the Tao-te Ching (e.g., Chapter 39) [22].

Yet another example of the identification of wisdom with the ability to synthesize or integrate can be found in the beginning of the Epic of Gilgamesh: “[Of him who] found out all things, I shall tell the land, [of him who] experienced everything, [I shall teach] the whole. He searched […] lands […] everywhere. He, who experienced the whole gained complete wisdom” [23].

In mediaeval Europe, the search for wisdom in the integration of a variety into a unity entered Christianity in the works of St. Augustine, who for instance in “De Ordine” wrote: “If a man […] reduces to a simple, true and certain unity all the things that are scattered far and wide throughout so many branches of study, then he is most deserving of the attribute learned. […] Therefore, both in analyzing and in synthesizing, it is oneness that I seek, it is oneness that I love. But when I analyze, I seek a homogeneous unit; and when I synthesize, I look for an integral unit. […] In order that stone be a stone, all its parts and its entire nature have been consolidated into one” [24].

The task of pursuing in religion and philosophy all types of views on the integrative character of consciousness or comprehension is too extensive to be carried out here beyond a short selection.

Regarding more recent philosophical reflection, at the time when the scientific approach to the study of consciousness was developing, it will be enough to recall the special term “apperception”, introduced into German philosophy by Gottfried Wilhelm Leibniz, expressing the fundamental feature of the mind consisting in the fact that mental experience forms a unity which involves a constructive activity, and is not, as in Locke's tradition, a passive reflection of external events. In this sense the term was used by Immanuel Kant, for whom the act of transcendental apperception was a condition for the empirically observed unity of experience. Johann Friedrich Herbart made it a central concept of pedagogy, describing the process in which new ideas are assimilated by an existing united complex of ideas, while Wilhelm Max Wundt based on it his concept of experimental psychology focused on the observable characteristics of apperception.

The issue of the integrative character of cognitive functions has had another reflection in the formation of the concept of human intelligence. For instance, J. Peterson, in the famous discussion from the 1920s initiating its modern study, defined it as follows: “[Intelligence is] a biological mechanism by which the effects of a complexity of stimuli are brought together and given a somewhat unified effect in behavior” [25].

In the writings of William James we can find both a quite extensive historical account of the one-and-many relation in the intellectual history of humanity [26], and the assertion that the unity of the stream of consciousness is its most fundamental characteristic [27].

In the following decades the unity of consciousness has had periods of waxing and waning interest in psychological research. One of the high tides came with the development of Gestalt psychology. The study of neural networks and the model of distributed functions of the brain in cognitive processes brought low tides. Then again, the split-brain experiments suggesting hemispheric lateralization of analytic and synthetic thinking, and the controversies regarding the duality and precedence of global/local cognition, revitalized the issue of the unity. Of course, nobody objected to the fact of the unity of consciousness which James considered so fundamental. The question was just whether this unity can help us in understanding the process of thinking or of consciousness.

3.3. Computational Mind

If we wanted to identify the events that obscured the issue of the unity of consciousness the most and that inhibited understanding of the role of information integration in the brain, there would definitely and paradoxically (due to their great importance for the development of the study of information) be two main candidates: the inventions of electronic communication and of the computer. Although the former was without doubt the main reason for the interest in the concept of information, it was a source of the so-called “conduit metaphor of information” which perpetuated the understanding of information in terms of an input, a channel of transmission and an output. The latter produced the “computer metaphor of the brain” with similar components, but with the channel of transmission replaced by the processing unit or units. In the study of information, it caused a shift of attention from the information itself to the vehicle carrying it, inhibiting progress in the studies of the meaning of information or its integration.

In the study of the brain, the computer metaphor caused its work to be considered exclusively in terms of passive transformation programmed from the outside and intended for some external “client”. No essential change, for instance a change in the level of integration, has been considered, as the output of the computer is not qualitatively different from the input, leaving aside the effects created by interfaces in peripheral devices, impressive for the layman but marginal.

The question of whether it is possible to construct a machine equipped with artificial intelligence allowing it to think, asked sixty years ago by Alan Turing [28] in the times of early advances in computational technology, seemed easier to answer then than now. Turing was aware of the difficulty in understanding his initial question “Can machines think?” but considered it “to be too meaningless to deserve discussion” and, in order to avoid entering into explanations of what thinking is, he developed his “imitation game”, known now as the Turing test. However, he expressed his optimistic opinion that “at the end of the century […] one will be able to speak of machines thinking without expecting to be contradicted” and supplied multiple arguments supporting his belief in a positive answer to the main question.

Turing's anticipation of the storage capacity of computers exceeding a gigabyte by the turn of the century turned out to be correct, but machines are still not able to think, no matter what criteria are used for testing. His imitation game has met with many objections that it cannot tell us much about the cognitive abilities of machines, stimulating arguments of the type of John Searle's Chinese Room thought experiment. Turing and many others based their hope for the implementation of artificial intelligence on the development of computer technology with its architecture in which the processing of information is carried out by the central processing unit operating on the content of computer memory.

An alternative, equally old, was a system designed specifically as a model of the brain without any central processing unit, but in the form of a network of simplified neurons. Warren McCulloch and Walter Pitts showed that such a system can carry out all logical operations, which seemed to be a necessary, if not sufficient, condition for being an adequate model of the brain [29]. But the research program of neural networks, which ignited the greatest hopes for the implementation of cognitive abilities in the machine, has faded away too. Neural networks could simulate many forms of brain activity, they could perform all operations carried out by a computer, they could learn, sometimes faster than the brain, but they could neither be used to construct a thinking machine, nor could they help to understand human consciousness.
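As a reminder of what the McCulloch-Pitts result amounts to (an illustrative sketch, not code from the original 1943 paper), a single binary threshold unit with fixed weights already realizes the basic logical operations, and a two-layer arrangement of such units realizes operations, like XOR, that a single unit cannot:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: fires (1) iff the weighted sum
    of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(x, y): return mp_neuron((x, y), (1, 1), 2)
def OR(x, y):  return mp_neuron((x, y), (1, 1), 1)
def NOT(x):    return mp_neuron((x,), (-1,), 0)

def XOR(x, y):
    # Needs two layers: x XOR y = (x OR y) AND NOT(x AND y).
    return AND(OR(x, y), NOT(AND(x, y)))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), XOR(x, y))
```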

The view of the author is that the responsibility for the disappointing outcomes of the attempts to equip machines with the ability to think can be found in the faulty assumption expressed by Turing in his answer regarding the machine's ability to think, “But I do not think these mysteries [of consciousness] necessarily need to be solved before we can answer the question [Can machines think?] with which we are concerned in this paper”, and perpetuated in several large-scale research programs through the decades [28]. It seems that the sixty years of experience show clearly that, without a more comprehensive specification of the functions or mechanisms of the brain which we want to reproduce in the machine, every attempt to build a thinking machine is doomed.

The fact that computers can perform many intellectual functions of the human brain better or faster, and that their computational, and therefore algorithmic, abilities are at least theoretically not inferior to human abilities, has brought up several weaker or stronger versions of the Artificial Intelligence Hypothesis.

In the context of our interest in the unity of conscious experience the hypothesis could be formulated as follows. If the high-level cognitive functions can be performed by a computer in which there is nothing which requires internal unity except the simple central processing unit (or, even worse, an array of such units), and if the highly efficient learning by neural networks can have an emergent character (i.e., can be an outcome of a totally distributed process without any central processing unit), then maybe the introspective experience of the unity of the human mind is just a byproduct of cognitive processes which does not reflect any actual characteristics of consciousness.

It is interesting that Turing expressed his view that the question whether machines can think is meaningless together with his belief that by the year 2000 there would be computers with a memory storage capacity of about a gigabyte (true), capable of playing the imitation game so well that an average human interrogator would have no more than a 70% chance of correctly identifying them as machines after five minutes of questioning (false), in the same article in which he proposed what is now called the Turing test, the criterion of a machine's capability to think based on the assessment of its responses given to a human interrogator as being given by a human. The assessment is supposed to be an outcome of an intuitive judgment involving, without doubt, the integration of information in the brain of the human interrogator in the process of assessment. Thus, even in the case where thinking is identified with computing and detached from any kind of psychological unity, the integration of information enters through the criterion qualifying for the thinking process.

Those who, like John Searle with his Chinese Room thought experiment or John Lucas with arguments referring to Gödel's Theorem, opposed the Turing test as a criterion for consciousness have been brought onto the battleground established by the development of computers and computer science [30,31]. More and more often the models of the human brain have been shaped by the architecture of computer hardware or the functions of software. The computer paradigm has been present not only at the level of a metaphor.

The studies of the computing abilities of neural networks initiated by McCulloch and Pitts in 1943 were originally based on a simplified model of neural cells (“point neurons”) and their connections [32,33]. Better knowledge of the morphology and physiology of neuron cells enabled more accurate models of neural circuits within the same spirit of analogy to computer hardware. In a recent report, Koch and Segev analyze the functioning of single neurons as units implementing multiplication, in the context of a more general statement about this operation: “Multiplication is both the simplest and one of the most widespread of all nonlinear operations in the nervous system. Along with the closely related operations of ‘squaring’ and ‘correlation’, multiplication lies at the heart of models for the optomotor response of insects and motion perception in primates” [34].
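The algebraic reason why multiplication and “squaring” are so closely related can be stated in one line: xy = ((x + y)² − (x − y)²)/4, so units that can only sum, subtract and square suffice for an exact product. The fragment below is only an editorial illustration of this identity, not a model taken from the cited report.

```python
def square(u):
    """A 'squaring' nonlinearity of the kind mentioned in the quotation."""
    return u * u

def multiply_via_squaring(x, y):
    # x * y = ((x + y)^2 - (x - y)^2) / 4
    return (square(x + y) - square(x - y)) / 4.0

print(multiply_via_squaring(3.0, 7.0))   # 21.0
print(multiply_via_squaring(-1.5, 4.0))  # -6.0
```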

Thus, we have a continuous convergence of the disciplines involved in the study of consciousness. This convergence is expressed on one hand in the paradigm of brain studies based on the analogy with computer architecture, and on the other hand in the analysis of information processing by computers simulating or implementing cognitive functions.

The speed of technological progress marked by the exponential growth of the memory storage capacity and of the number of operations performed by the central processing unit in one second has not been matched by the progress in artificial intelligence. If Turing could witness the progress of computer technology at the turn of the millennium, he would have been impressed by the work of engineers and programmers, but disappointed by “thinking machines”.

Recognition of such disproportional progress induced the quite popular view that the solution could be brought about by the advantages of parallel computing. After all, there is no clearly defined central processing unit in the brain. Observations of the work of the brain suggest a high level of distribution of information processing. The well-known ability of the injured brain to shift specialized functions from damaged centers to different areas in the process of recovery apparently provides evidence for such a distributed architecture. More convincing is the recently discovered fact that even highly specialized regions of the brain such as Wernicke's area (semantics and language comprehension) and Broca's area (syntax and language production) participate in nonlinguistic cognitive functions, and that there are other areas in the brain which are involved in the functions originally ascribed to them [35].

Thus, it seems that not only is there no central processing unit in the brain, but there are not even more specialized “regional” processing units. It is easy to understand the increase of interest in distributed processing and the “agent” approach to computing under the influence of such discoveries, but does it help us to understand how the distribution of brain functions can be compatible with the very clear sense of the unity of consciousness?

3.4. Quantum Unity of Consciousness

Although the computer metaphor influenced the mainstream of research on consciousness, it did not inhibit attempts to explain the unity of conscious experience through phenomena which exhibited apparently similar characteristics, but which were observed at a different scale. Subatomic particles could be observed in states in which they seemed to be inseparable, and each such particle can be characterized simultaneously by several different values of the same characteristic, ascribed only up to some probability distribution. In quantum physics these have been described as entanglement and superposition of states, respectively. Quantum physical states have the additional curious property of sudden change (collapse) from a state with a range of potential values of observed magnitudes to one with definite values when an actual observation is made. Eugene Wigner proposed an interpretation of this collapse as an influence of the consciousness of the observer. In this way a bridge has been built between the two disciplines of inquiry, physics and psychology, or at least between the interests of researchers from these disciplines.

In the early 1970s, several attempts were made to use quantum phenomena in the explanation of consciousness [36–38]. The problem was that quantum mechanical description usually applies to physical systems of a size much smaller than that of any potential functional units of the brain. A quantum description of one neuron does not make much sense due to its relatively large size, so considering that the cognitive functions of the brain involve the activation of hundreds, thousands or millions of neurons, the situation seemed hopeless. The hope was revived by the studies of the so-called Bose-Einstein condensates, which may exhibit quantum characteristics in volumes exceeding the size of the whole human brain, but in conditions very different from those in the human organism [39].

Recognition of the special role of synapses (relatively small spaces at the point of contact between the axon playing the role of the output of one neuron and the dendrite or soma of the next neuron) in the functioning of the nervous system, and in particular in cognitive processes, rekindled temporarily the hope for a quantum-mechanical description of consciousness [40–42].

The only attempt since the 1970s which has enjoyed a longer-lasting resurrection was that proposed by Stuart Hameroff, in which the prominent role was played not by neurons or synapses, but by the components of neurons (and other cells) called microtubules. It acquired quite a high level of prominence in the 1990s through the very influential writings of Roger Penrose [43]. Penrose not only popularized the idea of Hameroff, but added to it more solid physical foundations, although based on the speculative concept of quantum gravity, developing what is now known as the Hameroff-Penrose hypothesis [44]. This approach basically goes back to the paradigm of the calculation-based explanation of consciousness consistent with the computer metaphor of the brain, although, in distinction from the usual computer architecture, the calculation is carried out by functional units obeying the rules of quantum physics, and therefore exhibiting some form of unity.

The candidate for the location of the unit of computation is in the form of the two conformations of the tubulin dimers (protein complexes composed of two subunits) from which microtubules are built, and which can switch into each other. Also, microtubules, having many important functions within cells, some clearly related to information processes, can be viewed as ideal places where the computation could be carried out. But what exactly is happening within microtubules is not clear.

The importance of shifting computational functions from the neuron to its internal component, the microtubule, lies in the scale of the latter, which at least potentially admits the possibility of functioning at the quantum mechanical level. If the state of the microtubule can be described in terms of quantum mechanics, we have a chance to get an explanation of the unity of consciousness as a consequence of quantum effects. But having the chance of getting an explanation is not yet having an explanation, or a research program to find it. Even such a great enthusiast of microtubules as Penrose has to admit that the quantum coherence of one microtubule is not enough to explain the unity of consciousness, so he postulates: “Let us then accept the possibility that the totality of microtubules in the cytoskeletons of a large family of the neurons in our brains may well take part in global quantum coherence—or at least that there is sufficient quantum entanglement between the states of different microtubules across the brain—so that an overall classical description of the collective actions of these microtubules is not appropriate” [43].

The Hameroff-Penrose hypothetical mechanism has been criticized by several authors, physicists and neuro-psychologists, for several reasons; typically, because of the very short period of time in which a large enough portion of the brain could remain in a coherent (i.e., superposition-maintaining) state. One of the estimates predicts that decoherence is inevitable after only 10⁻¹³ of a second [45]. Hameroff and his collaborators have been defending a much larger value of 10⁻⁴ of a second [46]. In either case, such short-lasting coherence cannot be responsible for the phenomenal unity of consciousness, as we are not able to perceive events whose duration is even one hundred times longer than the most optimistic estimate.

Criticism of the Hameroff-Penrose hypothesis, along with other attempts to use a quantum-mechanical description of the brain to explain consciousness, has cooled down the enthusiasm of researchers working on the brain for looking for help in modern physics [47,48]. On the other hand, those on the side of physics have maintained the conviction expressed by Henry Stapp: “There is, therefore, no place within the framework provided by classical physics for the idea that certain patterns of neuronal activity that cover large parts of the brain, and that have important functional properties, have any special or added quality of beingness that goes beyond their beingness as a simple aggregate of local entities. Yet an experienced thought is experienced as a whole thing. […] The human brain is, in effect, treated as a quantum measuring device” [49,50].

The continuing popularity among physicists of the idea of a quantum-mechanical explanation of consciousness can be explained by the great expectations surrounding a new technological concept, the quantum computer, i.e., a computer whose processing unit employs quantum superposition of states. The main obstruction to building it is the same as that encountered in the attempts to create a theoretical model of quantum-mechanical mechanisms in the brain: conditions for maintaining quantum coherence of the processing unit for a long enough time are difficult to implement, not only in the environment of the brain, but also in a machine. Yet, if a quantum computer is finally constructed, perhaps it can help to build a thinking machine. Of course, this belief becomes more convincing if consciousness can be explained by, or at least related to, quantum-mechanical phenomena.

3.5. Consciousness as Integrated Information

Difficulties in utilizing quantum mechanics for a theoretical explanation of the unity of consciousness did not affect studies carried out at the phenomenal level. Here the point of departure was the subjective experience of unity, and its explanation was sought in correlations between the performance of cognitive functions by the brain and the observed activation of its neurons, in spite of the warnings of physicists who questioned the possibility that systems obeying the rules of classical physics, such as neurons, could exhibit forms of unified existence. Gerald Edelman, whose popular writings on consciousness studies contributed to the continuing interest in the classical theme of its unity, took as his point of departure the fact that “One extraordinary phenomenal feature of conscious experience is that normally it is all of a piece—it is unitary” [51]. In his approach the primary role is given to the quantitative measurement of neuronal integration understood as “the relating, correlation, or connection of signals to yield a unitary output”.

In their earlier works, Giulio Tononi and Gerald Edelman directly identified the integration of conscious experience with the integration of information, understood by them as correlations between the firings of the neurons responsible for conscious experience, analyzed in terms of information theory [52,53]. They introduced a magnitude similar to mutual information (which they called a measure of integration) whose high values identified in a group of neurons a functional cluster, “[…] a single, integrated neural process that cannot be decomposed into independent or nearly independent components” [52].
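As a rough illustration of the kind of quantity involved (a minimal sketch only: it uses the multi-information of binary firing patterns as a stand-in, not the full neural complexity measure of [52], and all names and data below are invented for the example):

    import numpy as np
    from collections import Counter

    def entropy(samples):
        # Shannon entropy (in bits) of a list of discrete outcomes.
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * np.log2(c / n) for c in counts.values())

    def integration(firings):
        # Sum of single-neuron entropies minus the joint entropy of the group.
        # Values near zero: the neurons behave as independent units; large
        # values: their firing is strongly correlated, i.e., "integrated".
        joint = entropy([tuple(row) for row in firings])
        marginals = sum(entropy(list(firings[:, j])) for j in range(firings.shape[1]))
        return marginals - joint

    rng = np.random.default_rng(0)
    together = np.repeat(rng.integers(0, 2, size=(1000, 1)), 2, axis=1)   # fire in unison
    independent = rng.integers(0, 2, size=(1000, 2))                      # fire independently
    print(integration(together), integration(independent))               # ~1 bit vs. ~0 bits

A high value of such a quantity flags a group of neurons whose joint activity cannot be decomposed into independent parts, which is what Tononi and Edelman called a functional cluster.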

Although this approach should be praised and welcomed as the first quantitative analysis of information processing at the collective level of correlated activities of clusters of neurons, it is difficult to expect that it can help in developing a model of information integration. We are one step beyond the earlier studies in which the distribution of the activity of neurons during some cognitive processes was established: now we know that the distributed action of neuronal networks is correlated. But we still do not know what kind of structural characteristics of the neuronal networks are responsible for integration. Even more problematic is the issue of whether the quantitative description of such correlations can be used as the sole means in the search for such structural characteristics of the brain.

More recently, Tononi wrote explicitly about the Information Integration Theory of Consciousness: “It takes its start from phenomenology and, by making a critical use of thought experiments, argues that subjective experience is integrated information” [54]. However, it is not clear what he means by information and its integration. If information is simply identified with the firing of neurons and its integration consists in their observed time correlations without an explanation of its mechanism, then the whole theory is just an expression of the belief that the experienced unity of consciousness consists in the simultaneity of neural activity. However, we do not even have evidence that what is observed as simultaneous in the brain is simultaneous in conscious experience. And of course, simultaneity of events does not tell us anything about their mutual relationship. For instance, the picture appearing on the screen of a TV set consists of the correlated, simultaneous “firing” of thousands of pixels, but its conscious experience takes place in the brain, not on the screen.

Explanation of consciousness and its unity requires the formulation of some theoretical framework and its confrontation with empirical evidence. The evidence regarding some forms of information integration in the brain has accumulated relatively slowly, in a recurring pattern: an initial discovery based on weak evidence, a retreat forced by contrary fragmentary evidence, and finally more comprehensive evidence confirming the original claim, occasionally to the level of common acceptance.

This pattern can be seen, for instance, in the study of face recognition, in which there has been a constant swing between the view that face recognition is a completely distributed process without any significant contribution from the integration of local cues, and the opposite view that face recognition is a purely global process which cannot be understood through reduction to the recognition of face components.

More generally, the discussion of the global vs. local precedence in pattern recognition is related to the problem of information integration, since the global view of recognition requires some mechanisms of integration.

Another topic of neuro-psychological research is even more directly related to our theme: the case of “grandmother cells.” In the early 1970s, Horace Barlow published a paper in which he claimed that the experimental work allowed the estimate that as few as about one thousand active neurons, called by him cardinal cells, can represent the whole of a visual scene [55].

The claim should not be so surprising, considering that the earlier studies of the cat's brain by D.H. Hubel and T.N. Wiesel, in which they reported that individual neurons fire only when lines of a certain orientation are displayed in a specific location of the visual field, had already been commonly accepted [56]. Nevertheless, Barlow's claim has been challenged many times, including a report of contrary evidence more than a decade later [57]. With passing time, however, increasingly surprising reports about single neurons firing in response to known faces or places, but to nothing else, introduced the term “grandmother cells” [58]. Your grandmother cell is firing if and only if you see the face of your grandmother [59]. Today, local representation (as the “grandmother cell” concept is formally called) is commonly accepted, although not by all neuroscientists [60].

Understanding information in terms of the one-many opposition and the concept of information integration presented in the preceding section provide the framework in which consciousness can be explained as integrated information. Although the theoretical model with information integrating gates proposed by the author requires further study of how they could be implemented in the brain, at least it shows what we should be looking for.

4. Homunculus Fallacy

Thus far only one aspect of being human has been considered: the possession of consciousness understood as the outcome of the integration of information, or as integrated information itself. Since in this paper information is understood as that which makes many one, these two formulations are equivalent. In the following, some consequences of such an understanding will be discussed. It turns out that several old philosophical issues can be resolved.

The first of these consequences is the elimination of the difficulties in understanding how consciousness is possible without falling into the so-called “homunculus fallacy”. Whenever a model of human conscious awareness has been proposed, it has usually been based on the hidden assumption that there is an ultimate creature (homunculus—little man) seeing what has entered our brain or soul, and what has been transformed, processed, computed.

In the past, homunculi were introduced into the explanation of consciousness by the assumption that consciousness is a representation of external reality in the human body (brain) or soul (mind). The fact that a representation is formed does not explain why it is experienced in a conscious way. More recently, homunculi have been perpetuated by the computer metaphor of consciousness. The computer is a device which transforms input data into output, and this requires some “end user” or peripheral device. There is nothing in the output which was not in the input, and their forms are essentially identical. The difference is essential only for the “end user” (homunculus), for whom the output is more useful than the input.

Two and a half millennia ago Democritus tried to explain human perception by the imprints which the big atoms entering through our eyes make on the surface formed by the extremely fine spherical atoms of our soul. Yet, if the soul were just a collection of separately existing small atoms, there had to be someone who could see these imprints.

Not much progress was made later. In medieval scholastic philosophy, the Aristotelian concept of substance as consisting of matter and form allowed this process to be interpreted in terms of the information of matter. In the writings of the scholastics we can find the view that perception consists in the transmission of the form of an object to the brain to inform the matter of the brain (here the word “information” appears in its original Latin form). But the matter of the brain informed in this process had to be perceived by someone, and the homunculus appears again. After all, the observed object, let us say a vase, is itself informed matter, but it does not have a conscious experience of its existence.

This might appear not to be such a big problem for the scholastics, as they could expect that the soul watches the outcome of perception. But if the soul is an entity different from the body, the problem reappears in the question of how the soul perceives the representation of the vase in the informed brain.

René Descartes believed that he had resolved the difficulty by the distinction of res extensa and res cogitans, with their two different modes of existence. The division kept philosophers busy in the following centuries, but did not help much in exorcising the ghost of the homunculus, who simply moved into the different mode of existence. Interestingly, in Cartesian thought we can trace the concept of integration, as the res cogitans, by losing their spatial characteristics, collapsed into spatially integrated wholes.

The deconstruction of the objects of our experience into ideas of different levels and their gradual loading onto the tabula rasa in the philosophy of John Locke seemed to eliminate the problem, but only superficially. Instead of Democritean atoms imprinting their shape in the fine atoms of the human soul, we have here ideas imprinting themselves on our clean slate. This does not explain how these imprints enter our consciousness. Locke says in “An Essay Concerning Human Understanding” (Book IV, Chapter iv, 3) that “it [mind] perceives nothing but its own ideas” (simple ones which entered as the result of experience and complex ones constructed from the former). This is nothing else but an invocation of a homunculus.

The German philosophical tradition of the 18th and 19th centuries emphasized the active integrating process of apprehension, but without providing any idea of how it could be performed in the brain; it was still necessary to leave it to a homunculus.

In every attempt to develop a model of consciousness based on the computer metaphor, the mechanisms responsible for conscious experience have the form of an input-output device in which there is no qualitative difference between incoming and outgoing information. This always leads to the more or less hidden presence of someone (a homunculus) who would “read” the outcome. It is not surprising that in the decades when the study of consciousness was dominated by interest in artificial intelligence, with the more or less explicit assumption that its implementation would be based on a machine with the computer architecture, there was limited interest and even more limited progress in resolving the “homunculus fallacy.”

It was probably not until the end of the 20th century that a really innovative approach was proposed. The problem of the homunculus fallacy consists in an infinite regress (as we have already seen): if we have one such creature observing the representation of objects in the brain, then its brain requires the next one. In his proposal of a solution, Daniel Dennett referred to Charles Darwin's “dangerous idea” of reversing the great chain of being [61]. It was actually not Darwin, but Jean-Baptiste de Lamarck, who in the 1809 Dentu edition of his book “Philosophie zoologique, ou Exposition des considerations relatives a l'histoire naturelle des animaux” introduced the revolutionary idea that it is not necessary that only a being of a higher level of complexity or perfection can create one of a lower level, if there is enough time for the process.

The traditional view, already held by Plato and accepted without reservation by virtually all later thinkers (its role as a dogma of Christian philosophy probably contributing to this acceptance), was implicitly based on the intuitively obvious principle articulated only recently by W. Ross Ashby as the Law of Requisite Variety: “A model system or controller can only model or control something to the extent that it has sufficient internal variety to represent it” [62,63].
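In the cybernetics literature the law is often summarized in information-theoretic form (a common rendering, not a formula taken from the cited sources), with $H$ denoting variety measured logarithmically, $D$ the disturbances, $R$ the regulator (the model or controller), and $E$ the essential outcomes to be controlled:

$$H(E) \;\geq\; H(D) - H(R).$$

Read this way, a controller can reduce the variety of outcomes only by as much variety as it itself possesses, which is the intuition behind the traditional view that whatever creates or controls must contain at least as much variety as what it creates or controls.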

However, Lamarck proposed a process in which modeling or control is replaced by evolution, with the storage of the variety (information) located outside the system, in the environment. What came to be called adaptation to the environment can simply be interpreted as downloading information from an external source. Darwin made the next step, showing that the process of evolution does not require the transmission of acquired characteristics, but can be carried out through natural selection acting on inherited ones. This process too can be interpreted as communication with an external source of information, but not at the level of the phenotype or individual organisms; rather at the level of the genotype, i.e., at the scale of populations or, more specifically, of species.

Dennett did not involve the time dimension in his proposed resolution, but employed the idea of an inverted hierarchy of homunculi, each simpler than the preceding one, claiming that this chain should be finite, ending with the lowest and simplest, which can resemble a machine processing unit. And in this assumption Dennett's proposal shows its weakness. It is difficult to expect that consciousness is a matter of gradual simplification of hierarchically structured layers of brain mechanisms, unless we consider some essential change in the characteristics of the transmitted information. The decrease in the volume of information processed by the homunculi (simplification of structure lowers variety and must require such a decrease) cannot by itself be considered responsible for a higher level of consciousness; otherwise we could expect the consciousness of a frog to be more developed than that of a human.

Gerald Edelman (in his independently discussed model of consciousness) postulated a difference in levels of information integration, and this is more plausible [64]. But Edelman did not provide any idea of what type of information integration should be considered. Since there is no evidence for a multilevel hierarchy of separate integration processes (although it is not impossible), at this point we can simply assume that we have some (unitary or multiple) process of information integration in conscious experience, which of course requires specification. Without presenting a theory of integrating mechanisms or an explanation of how integration is understood, the statement that information integration is responsible for consciousness is meaningless.

The danger of the reappearance of homunculi in the study of consciousness can be identified even in the attempts to eliminate them. The approach of Tononi and Edelman, in which they analyzed the integration of information in the brain by looking for correlations of neuronal firings, was based on the assumption that the time coincidence of firings reflects the integration of information constituting consciousness [65,66]. However, the timing of firings observed in such experiments may not correspond to the temporal aspect of conscious experience, as was shown in the experiments performed by Benjamin Libet [67]. It can be interpreted as a homunculus (or his watch) reporting within conscious experience temporal relations outside it.

The crucial point of the model proposed by the author is that the input and output of the process of information integration have qualitatively, or structurally, different characteristics. Here we have a clear difference from models based on the computer metaphor, in which, of course, input and output are different, but there is no essential characteristic in which they differ; output from one computer can always be used as input into another. With increased integration of information, the output of integration acquires a terminal character when integration is complete (i.e., it is completely indecomposable). Integrated information is not presented to some conscious subject, but is consciousness itself.

The problem of homunculi belongs to the explanation of the experience dimension of the mind in the two-dimensional experience vs. agency analysis of mind. The problem of free will, discussed next, belongs to the latter dimension, that of agency.

5. Free Will

The mystery of human consciousness is paralleled by that of human free will. Here too we have a lot of confusion, and the involvement of physics (mechanics), with its division into classical and quantum theories, in a discussion which has continued for centuries. The critical point came at the beginning of the 19th century, when Pierre Simon de Laplace conceived in his 1814 “Essai philosophique sur les probabilités” a demon who knows the positions and momenta of all particles in the universe and the acting forces at any arbitrarily selected moment. As a consequence of the principles of Newtonian mechanics, this knowledge and the ability to make calculations (even if only potential) would make it possible to predict today the positions and momenta of all particles tomorrow, and therefore would give the demon the power to predict all events of tomorrow based on the knowledge of what is happening today. It did not matter whether such a demon existed or not. The fact that the positions and momenta of all particles and the forces acting on them determine their state in the future is sufficient to eliminate free choice. Thus, no agent can have free will, as there is no space for a choice of actions. Whatever happens must have been predetermined.

The development of quantum mechanics a century later, with its indeterminism excluding the possession by quantum particles of definite values of position and momentum, has sometimes been considered a solution of the problem. However, once again we encounter the same difficulty: the functional units of the mechanisms of the brain seem too big to obey the rules of quantum mechanics, and the problem of the inconsistency between mechanical determinism and free will cannot be resolved so easily.

The situation became even more complicated when Benjamin Libet and his collaborators performed an experiment in which subjects were asked to flex their wrists and fingers at an arbitrary moment selected by them [68,69]. The experiment was designed in such a way that the timing of three crucial stages could be registered with high precision: the moment of the subjectively recognized intention to act (W), the initiation of the motor sequence in the brain indicated by the shift of the readiness potential (RP), and the moment of awareness of motion (M), which turned out to coincide with the actual motion. The experiment can be summarized as the sequence RP → 350 ms → W → 200 ms → M. This has been interpreted as evidence that the conscious intention to act was preceded by the initiation of the motor sequence, and therefore that what subjects recognized as their decision to act was actually a consequence of the earlier initiated act, excluding its voluntary character.
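Written out on a common time axis with the readiness potential as the origin, the reported sequence is:

$$t_{\mathrm{RP}} = 0 \;\longrightarrow\; t_{\mathrm{W}} = +350\ \mathrm{ms} \;\longrightarrow\; t_{\mathrm{M}} = +550\ \mathrm{ms},$$

so the brain's preparation of the movement preceded the reported conscious intention by about a third of a second, and the intention preceded the awareness of the movement by a further fifth of a second.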

In a more recently performed experiment, which involved a choice of action (pushing a button with the left or right hand), it was observed that, based on the shift of brain activity, it is possible to predict the arbitrary choice between the actions with relatively high probability about five to ten seconds before the awareness of the choice [70].

In both experiments, the researchers asked subjects to initiate an action, or to choose between actions, at an arbitrary moment or in an arbitrary way, and interpreted this as the subjects performing the actions voluntarily.

The outcomes of both experiments have been used as evidence for the illusory character of free will. Such an interpretation seems to result from a misunderstanding of the ideas of free will and voluntary acts. There is nothing in the reports from the experiments indicating that subjects were exercising free will understood as a process in which a decision is based on a conscious choice of time or mode. For instance, in Libet's experiment, subjects recorded the time at which they recognized that they would perform an act, but no actual conscious act was involved in choosing the time to initiate it.

Here we have instances of a confusion which was already recognized by David Hume in his 1739 “A Treatise of Human Nature”. Free will is exercised if the action is the result of some consciously made choice. An act cannot be qualified as voluntary merely on the grounds that it is executed by a conscious agent and that no external cause can be ascribed to it. After all, an act of free will can consist in refraining from any action. If experiments of Libet's type could be designed so that the decision to act involves a consciously made choice (for instance, the experimenter sets criteria for the choice of time or mode of action, or subjects themselves establish such criteria), and it were possible to predict the choice before subjects become conscious that the criteria are satisfied, then we could consider the outcomes evidence against free will.

Another source of misunderstanding is the research on the initiation and direction of human actions outside the control of consciousness. In several experiments, evidence has been collected that subliminal priming may initiate or direct subjects' actions, or may influence their effectiveness. Ruud Custers and Henk Aarts wrote in a review article on what they called in its title “unconscious will” that “Our behaviors seem to originate in our conscious decisions to pursue desired outcomes, or goals. Scientific research, though, suggests otherwise,” but concluded further: “This suggests that the reason that subliminal priming of the goal affects goal pursuit is not that people become conscious of their motivation to pursue the goal after it is primed” [71]. If that is the case, in what sense can we interpret the results of priming as “will”? The experiments simply show that relatively complex behavior can be stimulated or controlled by internal or external factors which are outside the control of consciousness. This does not contradict the reality or effectiveness of free will. Actually, experiments of this type give a good explanation of Libet's experiment: not every behavior which seems volitional is actually volitional.

It is interesting that other aspects of classical mechanics have not been used in arguments against free will. For instance, the Third Principle of Classical Mechanics (Newton's third law) postulates the duality of acting forces. From the point of view of classical physics, we cannot distinguish who (or what) is acting and who (or what) is reacting when a force is applied. There is only an interaction, in which the distinction of an agent initiating the interaction does not make sense.

There are also arguments against free will based on the fact that human subjects frequently misinterpret the actions of others as caused by themselves (i.e., interpret events independent of their actions as instances of exercising their own free will). These arguments are equally questionable, as on the same premises we should eliminate causality altogether. It is a well-known fact in psychology that people frequently identify causal relationships where there are none.

Misinterpretations of acts of free will do not, however, eliminate the original problem of its relationship to determinism. This can be reformulated as the problem of the causal relationship between consciousness and physical reality, i.e., the question of the mind-body relationship, but this time asking not how the body is “making” its consciousness, but how consciousness is influencing “its” body and physical environment. What mechanism is responsible (if any) for the influence of the states of consciousness on physical reality? More specifically, it is basically the inverted problem of the explanation of consciousness in terms of the functional units of the brain (e.g., neurons) understood as physical objects.

The demon conceived by Laplace tells us that the world of our human experience is governed by the rules of classical mechanics, so the intervention of consciousness cannot change what is happening in physical reality, unless consciousness itself is a phenomenon governed by the same rules, i.e., is itself subject to mechanical determination.

The answer based on the model of consciousness as integrated information proposed by the author is that as a consequence of the irreducibility of the structure describing integration results, we can expect that the states of consciousness have similar properties to those in quantum mechanics, although we do not have to assume that the brain itself is a quantum-mechanical system. Thus, we have space for indeterministic behavior within deterministic world.

At this point, it should be clarified in what sense we can understand this “indeterminism”. It does not mean that the behavior is random, but that it is directed by a very large volume of integrated information, which allows the simultaneous influence of information items accumulated over long periods of time and resulting from interactions over large distances. It is simply impossible to reduce the decision to separate factors. It could be compared (only as an analogy) to a superposition of states in quantum mechanics which, in the process of measurement, collapses to one particular state.

It would be a legitimate question at this point whether this does not provide an argument against the feasibility of implementing such a structure (i.e., a mechanism of information integration) in the physical world. The answer is in the negative. First, the argument is not valid because we have physical implementations of such irreducible structures; for instance, an analogous structure appears in geometry (historically, the formalism of quantum mechanics called quantum logic, to which the author refers above, was derived from projective geometry). Second, it is not valid because within classical physics we have systems which apparently violate the rules of classical mechanics.

There are classical mechanical systems exhibiting indeterministic behavior. An example can also be found in thermodynamics. The Second Law of Thermodynamics (SLT), when expressed in terms of the kinetic theory of heat, is inconsistent with classical mechanics: while classical mechanics admits symmetry with respect to the inversion of the direction of time, the SLT excludes it. Ludwig Boltzmann showed that we can interpret the SLT as a statistical rule which, due to its extremely high probability for systems consisting of a large number of components, has never been observed to fail. However, if the system is very simple (for instance, a gas consisting of only a few particles), the SLT can easily be violated.
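A standard textbook illustration of this statistical character (the example and the numbers below are illustrative and not taken from the cited sources) is the probability that all N particles of a gas are found, by chance, in the left half of their container, which would be a momentary decrease of entropy:

    import math

    # P(all N particles in the left half) = (1/2)**N; printed as a power of ten
    # to avoid floating-point underflow for macroscopic N.
    for n in (4, 10, 100, 6.0e23):
        log10_p = n * math.log10(0.5)
        print(f"N = {n:.0e}   P = 10^{log10_p:.3g}")

For a handful of particles such fluctuations occur all the time; for a macroscopic number of particles their probability is so small that the law is, in practice, never seen to fail.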

It is just a speculation, but the brain is an extremely complex, open system in which it may be possible to find processes which, although inconsistent with classical mechanics, have such high probability that they follow laws which are not deterministic.

If we want to refrain from speculation, it can only be said that the mathematical properties of the structure involved in the model of information integration allow for indeterministic behavior. And this may be reflected in what we experience as free will.

6. Intentionality and Symbolic Representation

If we look for other characteristics of humanity, or of human consciousness, another difficult problem is the use of symbols. The ability to perform “symbolic reasoning” is sometimes considered the most important characteristic distinguishing Homo sapiens from other species, in which more primitive forms of intelligence are observed. Also, the use of symbols is considered a fundamental tool in the development of human culture.

As is usual with fundamental concepts, the dictionary definition of a symbol as “something that stands for another thing” is just a translation of this word into more familiar vocabulary. It does not reflect the long discussion over the meaning of this term.

In scholastic philosophy, the word “symbol” was associated with the concept of intentionality—the capacity of the human mind to direct attention to a given object. Symbols were tools utilized in directing the mind. Franz Brentano revived this concept in 1874, adding to it the thesis (“Brentano's thesis”) that the word “intentional” means exactly the same as “mental”. More recently, “intentionality” has frequently been replaced by the word “aboutness”. And it is this aboutness which causes problems, since here we again cross the border between the mental and the physical or, more generally, between the private world of individual consciousness and objective, external reality. When we involve the concept of a symbol in this process, we have to consider its meaning, and the situation becomes complicated. As mentioned above, Ogden and Richards in their book “The Meaning of Meaning” distinguished sixteen basic approaches to its study.

This paper does not intend to revive the controversies over the concept of meaning. It is enough to say that the discussion (or battle) has never ended. Since information integration opens a possible resolution of the problem, it will be presented here briefly.

It should be mentioned at the outset that the study of information has had its own problems with the concept of meaning. Claude Shannon explicitly excluded meaning from his considerations in the famous paper introducing entropy as a measure of information. The reason for his disclaiming responsibility for matters related to meaning was the curious fact that when we use entropy as a measure of information in a message (text), its value does not differ depending on whether the text makes any sense or not. Thus, entropy does not recognize the meaning of the message, and the information theory developed by Shannon could only dissociate itself from semantics.
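The point can be illustrated with a minimal sketch (character-level frequency entropy only, which is a simplification of Shannon's treatment; the sample sentence is invented): a meaningful text and a random permutation of its characters have exactly the same entropy.

    import random
    from collections import Counter
    from math import log2

    def entropy_per_char(text):
        # Shannon entropy of the character frequency distribution, in bits/char.
        counts = Counter(text)
        n = len(text)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    sentence = "information makes many into one"
    scrambled = "".join(random.sample(sentence, len(sentence)))

    # Same characters and frequencies, hence identical entropy,
    # although only the first string means anything to a reader.
    print(entropy_per_char(sentence), entropy_per_char(scrambled))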

Soon after, a paper by Yehoshua Bar-Hillel and Rudolf Carnap appeared with an attempt to formulate a semantic theory of information with an appropriate measure of semantic information [72]. But the theory has had minimal impact on the further development of information studies, except perhaps as evidence that the problem of meaning should be avoided. There has been no other attempt to develop a comprehensive theory of semantic information at such a high level of generality and with the use of quantitative methods.

What went wrong? We can look for the reasons for the failure in the fact that Bar-Hillel and Carnap identified the content of a statement with all states of the universe excluded by this statement. This was already a risky step. Then, they used a trick (“for technical reasons”) of replacing “states” by “state descriptions” in their approach, and therefore they actually attempted to develop a semantic theory of information using syntactic methods. This in itself does not have to be an erroneous idea, but as the critical point of crossing the border between a symbol and its meaning, it requires careful explanation and justification, while in their paper its importance was totally ignored. The next step was not much more than the application of Carnap's logical probability to developing a measure of information in analogy to Shannon's entropy based on the traditional probability measure.

For the subject of the present paper, it is important to observe that, even in the case of information, the difficulty was in crossing the border between a symbol and its meaning. The ghost of “aboutness” pointing from the mental reality to the external world could not be overcome by traditional methods and the use of familiar concepts. For instance, in their paper Bar-Hillel and Carnap were using language and logic as given concepts which do not require analysis from the point of view of information in spite of well known difficulties with the understanding of meaning in the context of language.

In the approach using information integration, the concept of a symbol is introduced exclusively in terms of information. First, we have to recall the definition of an information carrier: it is any instance of the many used in the definition of information (that which is made one by information). Then we make a distinction between integrated information (which we can call “identity”) and non-integrated information (“state”). Integrated chunks of information can be identified with the identity of an object, non-integrated ones with the state of this object. In the mathematical formalism, “identity” refers to a fully irreducible (coherent) factor of the structure describing information, while “state” refers to the so-called center of this structure.

At this point, a technical remark is necessary for the reader familiar with the formalism of quantum mechanics, who should be cautious about a terminological inconsistency. In this paper, we can talk only about superposition of the identities of objects (e.g., electrons), as the states are defined by superselection rules and have purely classical properties. The reason for accepting this inconsistency is that our terminology then agrees with the common-sense concepts of identity and state.

It is easy to recognize that the concept of identity has some resemblance to the “nature” or “essence” of an object in scholastic philosophy, while “state” can be associated with the concept of an “accidens”. However, here we do not refer to any concept of substance, as the concept of information as defined here is formulated in terms of the different ontological categories of the one and the many.

Now we can proceed to the concept of symbolic representation. Symbolic representation is understood as a function between two carriers of information which maps the states of one into the states of the other. Typically, the image (the symbol) has a much smaller measure of information. Thus, symbolic representation has the function of lowering the amount of information handled by the mechanisms responsible for consciousness.
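A minimal sketch of this idea follows; the carriers, their states, and the mapping are entirely hypothetical, and Shannon entropy is used here only as a convenient stand-in for a “measure of information”, not as the measure proposed by the author.

    from collections import Counter
    from math import log2

    def entropy(values):
        # Shannon entropy (bits) of the observed distribution of states.
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    # States of the original carrier (many) and the symbols representing them (few).
    observations = ["drizzle", "downpour", "shower", "sleet", "sunny",
                    "clear", "overcast", "drizzle", "sunny", "shower"]
    symbol_of = {"drizzle": "rain", "downpour": "rain", "shower": "rain",
                 "sleet": "rain", "sunny": "fair", "clear": "fair",
                 "overcast": "fair"}
    symbols = [symbol_of[s] for s in observations]

    # The image of the mapping carries a smaller measure of information,
    # which is the sense in which symbols lighten the load on consciousness.
    print(entropy(observations), entropy(symbols))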

From the point of view of psychology, it was a necessary adaptation of the brain to overcome the limitations of the “span of prehension”, the subject of George Miller's famous paper “The Magical Number Seven, Plus or Minus Two” [73]. Integrated “chunks” of information are replaced by other, smaller integrated “chunks” of information. There is no need to cross any borders, as here information simply replaces information or, as someone may prefer, information points to information. The only limitation is that the replacement is not completely arbitrary, as the correspondence is made between pieces of integrated information.

In order to represent a complex of essential properties symbolically (as is done in the classical genus-species definition), we use the trick of lifting up to a genus (deprived of some essential, i.e., identity-related, properties) and then treating these properties as accidental, i.e., “varying”, state-related ones. For this purpose we have to develop a structural theory of identities, which will be presented elsewhere together with an appropriate form of the “syllogistic of information.”

At this point, the author believes it is possible to overcome the problem which doomed the attempt made by Bar-Hillel and Carnap, who tried to develop a logical theory of information using linguistic logic. We should start by developing a logic of information, which in the specific case of linguistic information systems can reduce to linguistic logic.

The concept of symbolic representation as a mapping of one information system to another gives us an approach which does not require switching between different modes of ontological status. Symbolic representation is here a relationship within the complex of consciousness between its sectors of integration (coherence).

7. Aesthetical Experience

The last issue to be considered here is much less controversial, but important for the understanding of the human mind and its relationship to the mechanisms in the brain responsible for consciousness. It concerns the explanation of aesthetical experience, which without doubt involves the functioning of affective mechanisms. As mentioned earlier, the primary role of affective mechanisms is in the discrimination of stimuli, the selection of responses and the motivation of actions. It is an extensive field of study in which the role of information integration is not easy to identify. However, the higher functions, in which emotions are transformed into feelings present in consciousness, even at first sight exhibit forms of information integration, although it is not very clear how much they are related to consciousness, as phenomenally they look like a “shortcutting” of conscious processes [74].

The basic socio-biological repertoire of arguments consists of examples of how emotionally based selections can be explained as the optimization of chances for successful reproduction. For instance, the attractiveness of female faces or body proportions is explained as the product of a holistic and non-rational evaluation of a candidate for a female partner in reproduction (fertility and absence of competition from earlier offspring with other male partners), which in our conscious experience appears as feminine beauty [75,76]. Although the case of the socio-biological explanation of the beauty of female faces can serve as an example of suspect scientific methodology (there was a dramatic change in the experimental evidence, from an apparent preference for average features [77] to a preference for exaggerated selective features [78], which nevertheless led to the same conclusions), it is easy to accept the view that the perception of female attractiveness is a factor in mating selection.

It is quite clear that the judgment of feminine beauty is a result of information integration in combination with the functioning of affective mechanisms, with an important influence on other affective mechanisms (expressed in eroticism), which complicates the analysis. This is why in this paper the focus is on the concepts of aesthetics understood as the study of the “disinterested judgment of beauty”, which has a very long intellectual tradition.

Actually, the concept of aesthetics in its original meaning was introduced by Alexander Gottlieb Baumgarten in his 1735 “Meditationes philosophicae de nonnullis ad poema pertinentibus” as a science of sensuous knowledge concentrated on beauty, in contrast to logic, which is occupied with truth. Even earlier, the view of beauty as an expression of integration was presented by Francis Hutcheson in his “Inquiry into the Original of Our Ideas of Beauty and Virtue”, where he identifies beauty with “Uniformity amidst Variety” [79,11]. More recently, Gregory Bateson argued that the universality of the experience of beauty, documented by cross-cultural art appreciation, can be explained by a faculty of integration in the perceptive mechanisms common to people of all cultures [80].

Baumgarten conceived aesthetics as a science of sensitive knowing (scientia cognitionis sensitivae), distinct and independent from logical knowing, “a science that touches neither on the nature of art per se nor on its social import, but on the direct sensuous apprehension of actuality” [81]. For him, the world was constituted by the relations of greater and lesser wholes, with the greatest unity and variety of its actual states. Therefore, sensuous perfection is attained by the greatest unity of the variety of perceptions within a singular image. This can be interpreted as saying that sensuous knowledge consists in the ability to integrate the variety of perceptions.

Baumgarten's science of sensuous knowledge had many predecessors who referred more directly to the concept of beauty, like Hutcheson. The latter, however, did not draw any distinction between sensuous and logical knowledge, seeing the same unifying power in objects of art and in mathematical theory. Both were obviously heavily influenced by the philosophy of John Locke.

However, the sources of the idea that beauty is an integrative power lie in a much more remote past. Samuel Taylor Coleridge, in his “Aesthetical Essays”, in which the principle of beauty is described as “multeity in unity” or “that in which the many, still seen as many, becomes one,” refers back to the Pythagorean definition, “the reduction of many to one” [82].

No philosopher of Antiquity was more preoccupied with unity than the initiator of Neoplatonism, Plotinus. In the “Enneads” there are many references to the concept of beauty in terms of diversity becoming a unity. In the first Ennead he wrote: “Only a compound can be beautiful, never anything devoid of parts; and only a whole; the several parts will have beauty, not in themselves, but only as working together to give a comely total” [83].

In the 19th century, under the influence of Romanticism, aesthetics became more of a theoretical basis for art criticism than the study of sensuous knowledge, and its conceptual framework expanded with the shift of interest towards expression. The concept of beauty was overshadowed by that of the sublime and other qualities evoking emotional states. But the presence of the issues of integration can be seen in discussions on art continuing to the present time.

The perspective developed within the original discipline of aesthetics allows for an easy association of aesthetic experience with the faculty of information integration critical for the existence of consciousness. This suggests what could be the reason for the universal presence of art in human culture as a form of exercise in conscious experience.

Steven Mithen presented the view that early cave paintings were a form of encyclopedia, introducing young members of communities to their future functions and teaching them, for instance, which animals should be hunted [84]. It seems more likely that art indeed had pedagogic functions, but more as a way of learning to construct a uniform type of consciousness. After all, it is neither obvious nor necessary that the way of integrating information is innate. Painted compositions showed what forms of information integration are necessary for participation in communal life and a common understanding of reality.

8. Conclusions

Several ideas of this paper were presented earlier in concise form in the author's contribution to the Proceedings of the 4th International Conference on the Foundations of Information Science in Beijing, but in the context of the methodology of information science [85].

This article has focused on the special role of the concept of information, understood in terms of the one-many categorical opposition, in building a bridge between mind and brain. This particular choice of the definition of information allows the unification of the two main manifestations of information considered in the literature, the selective and the structural.

The selective manifestation can be interpreted as an analogue of Shannon's orthodox approach to information, which has been successfully applied to the study of the information aspects of physical reality, including brain structures. The structural manifestation has never previously acquired a clearly defined conceptual framework, being typically loosely associated with the philosophical concept of form. However, in the context of the definition of information used here, it is possible not only to define its meaning, but also to develop its formal characterization by the level of integration. The transition to a higher level of integration, i.e., information integration, can be expressed as an algebraic property of the underlying mathematical structure.

The article shows that the concept of information with its dual selective and structural manifestations, allowing the consideration of information integration, can be used to explain the unity of conscious experience, and furthermore to resolve several fundamental problems, such as understanding the experiential aspect of consciousness while avoiding the homunculus fallacy, defending free will against the mechanistic determinism of the brain, and explaining symbolic representation and aesthetical experience.

Thus, the dual character of the selective and structural manifestations opens the way between the information-scientific description of the brain in terms of the former and the description of the mind in terms of the latter, together with the corresponding concept of information integration.

References

1. Kroeber, A.L.; Kluckhohn, C. Culture: A Critical Review of Concepts and Definitions (Papers of the Peabody Museum of American Archaeology and Ethnology, Harvard University); Peabody Museum of American Archaeology: Andover, MA, USA, 1952; Volume 42.
2. Lovejoy, A.O. Nature as aesthetic norm. Mod. Lang. Notes 1927, 7, 444–450.
3. Ogden, C.K.; Richards, I.A. The Meaning of Meaning: A Study of the Influence of Language Upon Thought and of the Science of Symbolism; Harcourt Brace Jovanovich: San Diego, CA, USA, 1989.
4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27.
5. Bar-Hillel, Y. An Examination of Information Theory. Philos. Sci. 1955, 22, 86–105. Reprinted in: Language and Information: Selected Essays on Their Theory and Application; Addison-Wesley: Reading, MA, USA, 1964; pp. 275–297.
6. Bochenski, I.M. Ancient Formal Logic. In Studies in Logic and the Foundations of Mathematics, 2nd ed.; North-Holland: Amsterdam, The Netherlands, 1957.
7. Hadamard, J. The Psychology of Invention in the Mathematical Field; Princeton University Press: Princeton, NJ, USA, 1945.
8. Turing, A. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., Ser. 2 1936, 42, 230–265.
9. Recorde, R. The Declaration of the Profit of Arithmeticke. In The World of Mathematics; Newman, J.R., Ed.; Simon and Schuster: New York, NY, USA, 1956; pp. 212–217.
10. Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of Mind Perception. Science 2007, 315, 619.
11. Schroeder, M.J. Philosophical Foundations for the Concept of Information: Selective and Structural Information. Proceedings of the 3rd International Conference on the Foundations of Information Science, Paris, France, 4–7 July 2005.
12. Schroeder, M.J. An alternative to entropy in the measurement of information. Entropy 2004, 6, 388–412.
13. Schroeder, M.J. Quantum Coherence without Quantum Mechanics in Modeling the Unity of Consciousness. Proceedings of the Third International Symposium on Quantum Interaction, QI 2009, Saarbrücken, Germany, 25–27 March 2009; Bruza, P., Ed.; Springer: Berlin, Germany, 2009; pp. 97–112.
14. Schroeder, M.J. Model of Structural Information Based on the Lattice of Closed Subsets. Proceedings of the Tenth Symposium on Algebra, Languages, and Computation; Kobayashi, Y., Adachi, T., Eds.; Toho University: Funabashi, Japan, 2006; pp. 32–47.
15. Schroeder, M.J. Logico-algebraic Structures for Information Integration in the Brain. Proceedings of the RIMS 2007 Symposium on Algebra, Languages, and Computation; Shoji, K., Ed.; Kyoto University: Kyoto, Japan, 2007; pp. 61–72.
16. Jauch, J.M. Foundations of Quantum Mechanics; Addison-Wesley: Reading, MA, USA, 1968.
17. Chalmers, D. In The Blackwell Companion to Consciousness; Velmans, M., Schneider, S., Eds.; Blackwell: Malden, MA, USA, 2007; pp. 225–235.
18. Ananthaswamy, A. The mind unshackled. New Sci. 2009, 204, 34–36.
19. Markus, H.R.; Kitayama, S. Culture and the self: Implications for cognition, emotion, and motivation. Psychol. Rev. 1991, 98, 224–253.
20. Barnes, J. Early Greek Philosophy; Penguin: London, UK, 2001; p. 43.
21. Barnes, J. Early Greek Philosophy; Penguin: London, UK, 2001; p. 71.
22. Chan, W.-T. A Source Book in Chinese Philosophy; Princeton University Press: Princeton, NJ, USA, 1973; p. 159.
23. Dalley, S. Myths from Mesopotamia: Creation, the Flood, Gilgamesh, and Others; Oxford University Press: Oxford, UK, 1989.
24. Hofstadter, A.; Kuhns, R. Philosophies of Art and Beauty: Selected Readings in Aesthetics from Plato to Heidegger; University of Chicago Press: Chicago, IL, USA, 1976.
25. Thorndike, E.L. Intelligence and its measurement: A symposium. J. Educ. Psychol. 1921, 12, 3–24.
26. James, W. The One and the Many. In Pragmatism: A New Name for Some Old Ways of Thinking; James, W., Ed.; Longmans, Green and Co.: New York, NY, USA, 1947.
27. James, W. The Principles of Psychology; Holt: New York, NY, USA, 1890.
28. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. Reprinted in: The Mind's I: Fantasies and Reflections on Self and Soul; Hofstadter, D., Dennett, D., Eds.; Basic Books: New York, NY, USA, 1981.
29. McCulloch, W.S.; Pitts, W.H. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
30. Searle, J. Minds, brains, and programs. Behav. Brain Sci. 1980, 3, 417–424.
31. Lucas, J.R. Minds, machines and Gödel. Philosophy 1961, 36, 112–127.
32. McCulloch, W.S.; Pitts, W.H. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
33. Arbib, M.A. Brains, Machines, and Mathematics, 2nd ed.; Springer: New York, NY, USA, 1987.
34. Koch, C.; Segev, I. The role of single neurons in information processing. Nat. Neurosci. Suppl. 2000, 3, 1171–1177.
35. Marcus, G. The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought; Basic Books: New York, NY, USA, 2004.
36. Pribram, K.H.; Nuwer, M.; Baron, R. The Holographic Hypothesis of Memory Structure in Brain Function and Perception. In Contemporary Developments in Mathematical Psychology; Atkinson, R.C., Krantz, D.H., Luce, R.C., Suppes, P., Eds.; W.H. Freeman: San Francisco, CA, USA, 1974.
37. Hameroff, S.R. Chi: A neural hologram? Am. J. Chin. Med. 1974, 2, 163–170.
38. Fröhlich, H. The extraordinary dielectric properties of biological materials and the action of enzymes. Proc. Natl. Acad. Sci. USA 1975, 72, 4211–4215.
39. Marshall, I.N. Consciousness and Bose-Einstein condensates. New Ideas Psychol. 1989, 7, 73–83.
40. Beck, F.; Eccles, J.C. Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 1992, 89, 11357–11361.
41. Eccles, J.C. How the Self Controls Its Brain; Springer: Berlin, Germany, 1994.
42. Beck, F.; Eccles, J.C. Quantum Processes in the Brain: A Scientific Basis of Consciousness. In Neural Basis of Consciousness; Osaka, N., Ed.; John Benjamins: Amsterdam, The Netherlands; Philadelphia, PA, USA, 2003; pp. 141–166.
43. Penrose, R. Shadows of the Mind: A Search for the Missing Science of Consciousness; Oxford University Press: Oxford, UK, 1994.
44. Hameroff, S.R.; Penrose, R. Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. J. Conscious. Stud. 1996, 3, 36–53.
45. Tegmark, M. Importance of quantum decoherence in brain processes. Phys. Rev. E 2000, 61, 4194–4206.
46. Hagan, S.; Hameroff, S.R.; Tuszynski, J.A. Quantum computation in brain microtubules: Decoherence and biological feasibility. Phys. Rev. E 2002, 65, 061901.
47. Grush, R.; Churchland, P.S. Gaps in Penrose's toilings. J. Conscious. Stud. 1995, 2, 10–29.
48. Churchland, P.S. Brain-Wise: Studies in Neurophilosophy; MIT Press: Cambridge, MA, USA, 2002.
49. Stapp, H.P. Why classical mechanics cannot naturally accommodate consciousness but quantum mechanics can. 1995, arXiv:quant-ph/9502012.
50. Stapp, H. Quantum mechanical theories of consciousness. In The Blackwell Companion to Consciousness; Velmans, M., Schneider, S., Eds.; Blackwell: Malden, MA, USA, 2007; pp. 300–312.
51. Edelman, G.M. Wider Than the Sky: The Phenomenal Gift of Consciousness; Yale University Press: New Haven, CT, USA, 2004.
52. Tononi, G.; Edelman, G.M. Consciousness and complexity. Science 1998, 282, 1846–1851.
53. Tononi, G.; Edelman, G.M. Consciousness and the integration of information in the brain. Adv. Neurol. 1998, 77, 245–280.
54. Tononi, G. The information integration theory of consciousness. In The Blackwell Companion to Consciousness; Velmans, M., Schneider, S., Eds.; Blackwell: Malden, MA, USA, 2007; pp. 287–299.
55. Barlow, H.B. Single units and sensation: A neuron doctrine for perceptual psychology? Perception 1972, 1, 371–394.
56. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat's striate cortex. J. Physiol. 1959, 148, 574–591.
57. Baylis, G.C.; Rolls, E.T.; Leonard, C.M. Selectivity between faces in the responses of a population of neurons in the cortex in the superior temporal sulcus of the monkey. Brain Res. 1985, 342, 91–102.
58. Quian Quiroga, R.; Reddy, L.; Kreiman, G.; Koch, C.; Fried, I. Invariant visual representation by single neurons in the human brain. Nature 2005, 435, 1102–1107.
59. Barlow, H.B. The Neuron Doctrine in Perception. In The Cognitive Neurosciences; Gazzaniga, M., Ed.; MIT Press: Cambridge, MA, USA, 1995; pp. 415–435.
60. Rolls, E.T.; Deco, G. Computational Neuroscience of Vision; Oxford University Press: Oxford, UK, 2002.
61. Dennett, D.C. Darwin's Dangerous Idea; Simon & Schuster: New York, NY, USA, 1995.
62. Ashby, W.R. An Introduction to Cybernetics; Chapman & Hall: London, UK, 1956.
63. Heylighen, F. Principles of Systems and Cybernetics: An Evolutionary Perspective. In Cybernetics and Systems '92; Trappl, R., Ed.; World Scientific: Singapore, 1992; pp. 3–10.
64. Edelman, G.M. Wider Than the Sky: The Phenomenal Gift of Consciousness; Yale University Press: New Haven, CT, USA, 2004.
65. Tononi, G.; Edelman, G.M. Consciousness and complexity. Science 1998, 282, 1846–1851.
66. Tononi, G.; Edelman, G.M. Consciousness and the integration of information in the brain. Adv. Neurol. 1998, 77, 245–280.
67. Libet, B. The experimental evidence for a subjective referral of a sensory experience backwards in time. Philos. Sci. 1981, 48, 182–197.
68. Libet, B.; Gleason, C.A.; Wright, E.W.; Pearl, D.K. Time of conscious intention to act in relation to onset of cerebral activity (readiness potential): The unconscious initiation of a freely voluntary act. Brain 1983, 106, 623–642.
69. Libet, B. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav. Brain Sci. 1985, 8, 529–539.
70. Soon, C.S.; Brass, M.; Heinze, H.-J.; Haynes, J.-D. Unconscious determinants of free decisions in the human brain. Nat. Neurosci. 2008, 11, 543–545.
71. Custers, R.; Aarts, H. The unconscious will: How the pursuit of goals operates outside of conscious awareness. Science 2010, 329, 47–50.
72. Bar-Hillel, Y.; Carnap, R. An Outline of a Theory of Semantic Information; Technical Report No. 247; Research Laboratory of Electronics, MIT: Cambridge, MA, USA, 1952. Reprinted in: Language and Information: Selected Essays on Their Theory and Application; Bar-Hillel, Y., Ed.; Addison-Wesley: Reading, MA, USA, 1964; pp. 221–274.
73. Miller, G. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 1956, 63, 81–97 (reprinted in 1994, 101, 343–352).
74. Ornstein, R.E. The Evolution of Consciousness; Simon & Schuster: New York, NY, USA, 1991.
75. Pinker, S. How the Mind Works; Norton: New York, NY, USA, 1997.
76. Wilson, E.O. Consilience: The Unity of Knowledge; Vintage Books: New York, NY, USA, 1999.
77. Langlois, J.H.; Roggman, L.A. Attractive faces are only average. Psychol. Sci. 1990, 1, 115–121.
78. Perrett, D.I.; May, K.A.; Yoshikawa, S. Facial shape and judgments of female attractiveness. Nature 1994, 368, 239–242.
79. Hutcheson, F. An Initial Theory of Taste. In An Inquiry into the Original of Our Ideas of Beauty and Virtue; J. Darby: London, UK, 1729. Reprinted in: Aesthetics: A Critical Anthology, 2nd ed.; Dickie, G., Sclafani, R., Roblin, R., Eds.; St. Martin's Press: New York, NY, USA, 1989; pp. 223–241.
80. Bateson, G. Style, Grace, and Information in Primitive Art. In Steps to an Ecology of Mind; Bateson, G., Ed.; Ballantine: New York, NY, USA, 1972; pp. 128–156.
81. Davey, N. Baumgarten, Alexander (Gottlieb). In A Companion to Aesthetics; Cooper, D., Ed.; Blackwell: Malden, MA, USA, 1995; pp. 40–42.
82. Jasper, D. Coleridge, Samuel Taylor. In A Companion to Aesthetics; Cooper, D., Ed.; Blackwell: Malden, MA, USA, 1995; pp. 74–76.
83. Hofstadter, A.; Kuhns, R. Philosophies of Art and Beauty: Selected Readings in Aesthetics from Plato to Heidegger; University of Chicago Press: Chicago, IL, USA, 1976.
84. Mithen, S. The Prehistory of the Mind; Thames and Hudson: London, UK, 1996.
85. Schroeder, M.J. Foundations for Science of Information: Reflection on the Method of Inquiry. Proceedings of the 4th International Conference on the Foundations of Information Science, Beijing, China, 21–24 August 2010; pp. 1–9.
