Article

Floridi’s “Open Problems in Philosophy of Information”, Ten Years Later

1 Mälardalen University, School of Innovation, Design and Engineering, Box 883, SE-72123 Västerås, Sweden
2 Vienna University of Technology, Karlsplatz 13, 1040 Wien, Austria
* Author to whom correspondence should be addressed.
Information 2011, 2(2), 327-359; https://doi.org/10.3390/info2020327
Submission received: 7 April 2011 / Revised: 2 May 2011 / Accepted: 12 May 2011 / Published: 23 May 2011

Abstract

In his article Open Problems in the Philosophy of Information [1] Luciano Floridi presented a Philosophy of Information research program in the form of eighteen open problems, covering the following fundamental areas: Information definition, information semantics, intelligence/cognition, informational universe/nature and values/ethics. We revisit Floridi’s program, highlighting some of the major advances, commenting on unsolved problems and sketching the new landscape of the Philosophy of Information (PI) emerging at present. As we analyze the progress of PI, we try to situate Floridi’s program in the context of the scientific and technological development of the last ten years. We emphasize that the Philosophy of Information is a huge and vibrant research field, with origins dating from before Open Problems and domains extending even beyond its scope. In this paper we have only been able to sketch some of the developments of the past ten years. Our hope is that, even if fragmentary, this review may serve as a contribution to the effort of understanding the present state of the art and the paths of development of the Philosophy of Information as seen through the lens of Open Problems.

1. Introduction

In his programmatic paper Open Problems in the Philosophy of Information [1], based on the Herbert A. Simon Lecture in Computing and Philosophy given at Carnegie Mellon University in 2001, Luciano Floridi lists five of the most interesting areas, with eighteen fundamental questions, for the field he named Philosophy of Information. The Open Problems program includes many already existing topics to which researchers had been contributing even before 2001, but it also asks new questions and puts existing ones into a new context, aiming to organize them into a coherent system.

The aim of the present paper is to address Floridi’s program from a distance of ten years. What have we learned? What do we expect to learn in the future? The reader interested in the history of the Philosophy of Information will find more background in the handbook Philosophy of Information [2], edited by van Benthem and Adriaans, which describes contributions by Charles Sanders Peirce, Norbert Wiener, Alan Turing, William Ross Ashby, Claude Shannon, Warren Weaver, Gregory Bateson, Fred Dretske, Jon Barwise, John R. Perry, Brian Cantwell Smith, Rafael Capurro and others. We want to focus on some of the recent developments we find worth bringing into the context of Floridi’s paper.

We can trace the origins of the program back to 1999, when Floridi’s book Philosophy and Computing: An Introduction [3] appeared, immediately followed by the first shift towards an information-centric framework in the article Information Ethics: On the Philosophical Foundations of Computer Ethics [4]. The development from the first, more concrete technology- and practice-based approach towards the abstract information-centric account is evident in the following decade, which resulted in a number of articles developing several strands of the program declared in Metaphilosophy in 2004. Floridi has significantly contributed to the development of Information Ethics, the Semantic Theory of Information, the Logic of Information and the Informational Universe/Nature (Informational Structural Realism), to name the most important advances. This article is based on the following works of Floridi: [1,3-27].

In 2008 Floridi edited the book Philosophy of Computing and Information: 5 Questions [17], with contributions by Boden, Braitenberg, Cantwell-Smith, Chaitin, Dennett, Devlin, Dretske, Dreyfus, Floridi, Hoare, McCarthy, Searle, Sloman, Suppes, van Benthem, Winograd and Wolfram. The last of the five questions each of the distinguished interviewees answered was: “What are the most important open problems concerning computation and/or information and what are the prospects for progress?” Among the answers there are suggestions of the need for a synthetic approach to Cognitive Science (including symbolic, connectionist, situated, dynamical, and homeostatic), “because all of them (and probably more) will be needed to emulate the rich space of possible minds” (Boden); emphasis on the importance of complexity (Braitenberg); use of computers serving as “laboratories of middling complexity” “in terms of which to explore issues of intentionality, embodiment, and semantics” (Smith); Mathematics, Biology and Metabiology (Chaitin); a solid theory of semantic information (Dennett); better understanding of natural language (Devlin); better understanding of the concept of information (Dretske); learning and relevance in embodied AI (Dreyfus); further development of PI as Philosophia Prima (Floridi); error-free software (Hoare); experimental philosophy in a computing lab (McCarthy); the move from computational Cognitive Science to Cognitive Neuroscience (Searle); “Understanding the variety of types of virtual machines and the variety of ways in which virtual machines can be implemented or realized in physical machines or other virtual machines” (Sloman); understanding how large collections of synchronized neurons compute, with all relevant physics and chemistry (Suppes); the interplay between statics and dynamics, information and process (van Benthem); the decoding of thought (Winograd); and mining the computational universe for new ideas in science and philosophy (Wolfram).

The state of the art of the research field is reflected in the special issue of the journal The Information Society titled “The Philosophy of Information, its Nature and Future Developments”, published in 2009 and edited by Luciano Floridi [21], which addresses: Floridi's Philosophy of Information and Information Ethics (Ess); the Philosophy of Information culture (Briggle and Mitcham); epistemic values and information management (Fallis and Whitcomb); information and knowledge in information systems (Willcocks and Whitley), starting with Floridi's introduction: “The Information Society and its Philosophy”.

The recent (April 2010) special issue of Metaphilosophy [28], the same journal that published Floridi’s program in 2004, was devoted to the theme “Luciano Floridi and the Philosophy of Information” (PI) and guest edited by Patrick Allo. It addresses issues of knowledge (Roush and Hendricks), agency (Bringsjord), semantic information (Scarantino and Piccinini; Adams), methodology (Colburn and Shute), metaphysics (Bueno) and ethics (Volkman), with an epilogue by Bynum on philosophy in the information age. It gives a good state-of-the-art insight into the development of PI.

“Luciano Floridi's Philosophy of Technology: Critical Reflections” is a topic of a special issue of Knowledge, Technology & Policy [29], published in June 2010, guest edited by Hilmi Demir. It contains several articles on PI, addressing informational realism (Gillies), contradictory information (Allo), epistemology of AI (Ganascia), perceptual evidence and information (Piazza), ethics of democratic access to information (da Silva), logic of ethical information (Brenner), the demise of ethics (Byron), information as ontological pluralism (Durante), a critique of Information Ethics (Doyle), pre-cognitive semantic information (Vakarelov), an argument that typ-ken (an amalgam of type and token) drives infosphere (Gunji et al.). The special issue ends with Floridi's responses to each of the articles.

Floridi's newly published book, The Philosophy of Information [27] shows the up-to-date state of his view of the subject. It presents his contributions to the research field and contains his widely known work which confirms the relevance of our account when it comes to Floridi's main contributions.

Besides Floridi, a number of researchers have contributed, directly or indirectly, to the advancement of the field and offered interesting solutions and insights into the nature of information, its dynamics and its cognitive aspects. In what follows we will try to list some of those contributions. As we analyze the present state of the art of the Philosophy of Information, we try to situate the PI program in the context of the scientific and technological developments of the past ten years and to see their impact on the directions of PI research.

2. Open Problems Revisited

Floridi's Open Problems cover a huge ground with five areas: Information definition, information semantics, intelligence/cognition, informational universe/nature and values/ethics. The task of assessment in one article of the progress achieved in one decade seems overwhelming. Nevertheless, let us make an attempt to re-examine the program and see what the listed questions look like today, without any pretense as to the completeness of the account. Even if fragmentary, this review may serve as a contribution to the effort of understanding the present state of the art and the paths of development. We will find many novel ideas and suggested answers to the problems arisen in the course of the development of Philosophy of Information. In order to elucidate the results of the progress made, we will present different and sometimes opposing views, hoping to shed more light on various aspects of the development and the future prospects.

2.1. Information Definition

2.1.1. What is Information?

One of the most significant events since 2004 was the publication of Philosophy of Information, Handbook of the Philosophy of Science [2]. Part B of the book, titled Philosophy of Information: Concepts and History, includes essays on Epistemology and Information (Dretske), Information in Natural Language (Kamp and Stokhof), Trends in Philosophy of Information (Floridi) and Learning and the Cooperative Computational Universe (Adriaans). From that part we can gain insight into various facets of the concept, with supporting evidence that present-day concepts of information form a complex body of knowledge that accommodates different views of information across the natural, social and computer sciences. Or, as Floridi [6] formulates it, “Information is such a powerful and elusive concept that it can be associated with several explanations, depending on the requirements and intentions.”

Shortly after Floridi declared his program in the Herbert Simon Lecture in 2001, the concept of information became the subject of a lively discussion, and van Benthem and Adriaans [2] point to a special issue of the Journal of Logic, Language and Information [30], edited by van Benthem and van Rooij and dedicated to the study of different facets of information. At the same time Capurro and Hjørland [31] analyze the term “information” as a typical interdisciplinary concept, its role as a constructive tool and its theory-dependence. They review significant contributions to the theory of information over the past quarter of a century from physicists, biologists, systems theorists, philosophers and library and information scientists. The concept of information as it appears in different domains is fluid, and changes its nature as it is used for special purposes in various theoretical and practical settings. As a result, an intricate network of interrelated concepts has developed in accordance with its uses in various contexts. In Wittgenstein’s philosophy of language, this situation is described as family resemblance, applied to the condition in which some concepts within a concept family share some resemblances, while other concepts share others. “The view epitomized by Wittgenstein’s Philosophical Investigations is that meaning, grammar and syntactic rules emerge from the collective practices through the situated, changing, meaningful use of language of communities of users (Gooding, 2004b)” [32].

Information can be understood as a range of possibilities (the opposite of uncertainty) or as correlation (and thus structure), and information can be viewed as code, as in DNA, according to van Benthem and Martinez in [2] (p. 218). Furthermore, information can be seen as dynamic rather than static; it can be considered as something that is transmitted and received, it can be looked upon as something that is processed, or it can be conceived as something that is produced, created, constructed [33]. It can be seen as objective or as subjective. It can be seen as thing, as property or as relation. It can be seen from the perspective of formal theories or from the perspective of informal theories [34] (p. 253). It can be seen as a syntactic, a semantic or a pragmatic phenomenon, and it can be seen as manifesting itself throughout every realm of our natural and social world.

In this context it is important to mention the contribution of the FIS (Foundations of Information Science) network, which, as Marijuan [35] puts it, “from its very beginnings in early 90’s” presented “an attempt to rescue the information concept out from its classical controversies and use it as a central scientific tool, so as to serve as a basis for a new, fundamental disciplinary development—Information Science.”

Among the initiatives aiming to work towards a modern concept of information, a workshop entitled Information Theory and Practice took place in 2007 at Duino, focusing on the difference between syntactic (Shannon) and semantic information.

In 2008, a project was started in León, Spain, aiming at the illumination of the concept of information. Its working principle resembles the mosaic window of the Cathedral of León. That's why it is named “BITrum” (after the Latin “vitrum”) [36].

“Towards a New Science of Information” was the motto of the Fourth International FIS Conference held in Beijing in 2010. The proceedings of the conference will be published in a special issue of the journal triple-c. The topics addressed include: Informatics at multiple scales (Kirby et al.), information in scientific use (Collier), information in reality, logic and metaphysics (Brenner), reductionist, projectivist, disjunctivist, and integrativist thinking about information (Hofkirchner), the identity of objects (Hewitt), autopoiesis, observation and informatics (Hashimoto), the relationship between autopoiesis and biosemiotics (Nishida), the informational essence; information cognition; information sciences (Kun Wu), social information (Cai), philosophy of information in China (Tianqi Wu), method of inquiry (Schroeder), life informatics (Gao), information needs and signaling resources of mycobacterium (Navarro and Marijuan), information and cognition (Díaz Nafría, Pérez-Montoro), Science of Information (Doucette), abduction (Kamiura), and many more.

In [37], an essentially new approach (called the parametric definition) is proposed by Burgin in order to solve the problem of defining information.

Besides the already mentioned information types, an additional distinction ought to be made between symbolic and sub-symbolic information, as well as between conscious and sub-conscious information [38], seen from a cognizing agent’s perspective. The world, modeled as an informational structure with computational dynamics, presents “proto-information” (Dodig Crnkovic) for an agent [39] and affects an agent’s own physical structures, as not all functions of our body are accessible to our conscious mind. This process of information communication between an agent and the rest of the world goes on directly, subconsciously and sub-symbolically, or via semiosis, that is, sense-making information processing. In this approach, information undergoing restructuring from proto-information in the world to meaningful information in an agent, on several levels of organization, is modeled as a purely natural phenomenon. The cognitive functions of an agent, even though implemented in informational structures, are not identical with the structures themselves but constitute their dynamics, that is, computational processes.

The quest for a general concept of information that goes beyond family resemblances is still alive, as testified by several publications during the last decade, e.g., by Lyre [40], von Baeyer [41], Roederer [42], Seife [43], Dodig-Crnkovic [44], Muller [45], Brier [46], Kauffman et al. [47], Hofkirchner [48], Burgin [37], Davies and Gregersen [49], and Dodig-Crnkovic and Burgin [50]. It seems legitimate to put the heuristic questions accordingly: Can the static and the dynamic aspect of information be integrated when considering the static as result, and starting point, of the dynamic aspect? Can the objective and the subjective aspect be integrated when attributing degrees of subjectivity to objects, or perhaps degrees of objectivity to subjects, as some others would propose? Can the thing, property and relation aspects be integrated when elaborating on transformations between them? Can the formal and the informal aspect be integrated when postulating an underlying common nature, parts of which are formalizable while other parts are not? (This is similar to Ludwig von Bertalanffy’s idea concerning the use of mathematical tools in his General System Theory [51].) Can the syntactic, semantic and pragmatic aspects be integrated when based upon a unifying semiotic theory? Can the specific aspects be integrated when resorting to evolutionary theory and identifying each information manifestation on a specific level of evolution?

One of the approaches explicitly dedicated to unity in diversity is the one connected to the term “Unified Theory of Information” (UTI). While the question of whether or not a UTI is feasible was answered in a controversial way by Capurro, Fleissner and Hofkirchner [52], Fleissner and Hofkirchner tried to lay the foundations for a project of unification reconciling the legitimate claims of existing information concepts underlying science and technology with those characteristic of the social sciences, humanities, and arts [53,54]. They did so by resorting to complex systems theory. In what follows we will return to various programs of unification and elucidate similarities and differences in their approaches.

2.1.2. What Is the Dynamics of Information?

Floridi [6] gives the following explanation:

By “dynamics of information” the definition refers to:

  • the constitution and modeling of information environments, including their systemic properties, forms of interaction, internal developments, applications, etc.;

  • information life cycles, i.e., the series of various stages in form and functional activity through which information can pass, from its initial occurrence to its final utilization and possible disappearance; and

  • computation, both in the Turing-machine sense of algorithmic processing, and in the wider sense of information processing. This is a crucial specification. Although a very old concept, information has finally acquired the nature of a primary phenomenon only thanks to the sciences and technologies of computation and ICT (Information and Communication Technologies). Computation has therefore attracted much philosophical attention in recent years.

The reader interested in the development of the field of the Dynamics of Information prior to Open Problems, such as the seminal work by Dretske [55] and Barwise and Seligman [56], is referred to the Philosophy of Information handbook [2], as well as to [37] or [27]. Abramsky’s chapter in the same Handbook connects information, process and games (representing the rules or logic) in a promising novel attempt to develop a “fully-fledged dynamical theory”.

Van Benthem's new book Logical Dynamics of Information and Interaction, [57] models information dynamics within a framework of logic developed as a theory of information-driven rational agency and intelligent interaction between information-processing agents. Van Benthem connects logic, philosophy, computer science, linguistics and game theory in a unified mathematical theory which provides dynamic logics for inference, observation and communication, with update of knowledge and revision of beliefs, changing of preferences and goals, group action and strategic interaction in games. Van Benthem's framework includes all three senses of dynamics of information on the level of human agency. From the modeling point of view nothing prevents to apply Benthem's approach to a network of simpler agents. The book includes chapters on logical dynamics, agency, and intelligent interaction; epistemic logic and semantic information; dynamic logic of public observation; multi-agent dynamic-epistemic logic; dynamics of inference and awareness; preference statics and dynamics; decisions, actions, and games; processes over time; epistemic group structure and collective agency; computation as conversation and rational dynamics in game theory. Van Benthem explores consequences of the ‘dynamic stance’ for logic as well as for cognitive science in a way which smoothly connects to the program of Philosophy of Information. [58]

On a different level of abstraction, yet another answer to the question of information dynamics is given by Mark Burgin in his article Information Dynamics in a Categorical Setting which presents “a mathematical stratum of the general theory of information based on category theory. Abstract categories allow us to develop flexible models for information and its flow, as well as for computers, networks and computation. There are two types of representation of information dynamics in categories: the categorical representation and functorial representation. Properties of these types of representations are studied. (…) Obtained results facilitate building a common framework for information and computation. Now category theory is also used as unifying framework for physics, biology, topology, and logic, as well as for the whole mathematics. This provides a base for analyzing physical and information systems and processes by means of categorical structures and methods” [50].

Similarly built on dual-aspect foundations is info-computationalism (ICON) of Dodig Crnkovic [44,59-63]. It presupposes a hierarchy of levels, starting from basic proto-information as the stuff of the universe and building a number of levels of organization in an evolutionary way, through computational processes. It relates to Floridi’s program for PI [1,6,8,9,17,19,20,24,26,27], combining it with the pancomputational stance (Zuse, Fredkin, Wolfram, Chaitin, Lloyd), which takes the universe to be a computer. With the universe represented as a network of computing processes at different scales or levels of granularity, ICON sees information as a result of (natural) computation [64]. Adopting Floridi’s informationalism (Informational Structural Realism) [20], which argues that the entire existing physical universe is an informational structure, natural computation can be seen as the process governing the dynamics of information. In ICON, a synthesis of informationalism and computationalism, information and computation are two mutually defining ideas [44].

On the level of the basic mechanism, communication is a special type of computation. Bohan Broderick [65] compares notions of computation and communication and arrives at the conclusion that they are not conceptually different. He shows how they may be distinguished if computation is limited to a process within a system and communication is an interaction between a system and its environment. Burgin [66] puts it in the following way:

“It is necessary to remark that there is an ongoing synthesis of computation and communication into a unified process of information processing. Practical and theoretical advances are aimed at this synthesis and also use it as a tool for further development. Thus, we use the word computation in the sense of information processing as a whole. Better theoretical understanding of computers, networks, and other information-processing systems will allow us to develop such systems to a higher level.”

Close to info-computationalism (ICON) is the view that conceives informational dynamics as processes of self-organization. Whenever self-organizing systems relate in their behavior to the environment, they create information; that is, they generate information rather than process it, and are thus information-generating systems [67]. This concept might be called “emergent information”. The difference from info-computationalism lies in the dynamics that is assumed as background. While info-computationalism regards any natural process that can be described by a definable model as computation, which is equal to information processing (so that, e.g., the emission of a radioactive particle creates information), in the “emergent information” approach only self-organization processes are deemed to produce information.

The triple-c model developed in the context of emergent information finds information generation in a series of orderly concatenated manifestations: First comes cognition (the first “c”), which refers to the information generation of a self-organizing system vis-à-vis its environment, which is unspecified; the coupling of the cognitive processes of at least two self-organizing systems then yields communication (the second “c”); and sustainable communicative processes lead to cooperation (the third “c”) of co-systems for the sake of a commonly established meta- or suprasystem of which the co-systems are elements [68]. In a less-than-strict-deterministic way cooperation feeds back to communication as communication does to cognition. That is the basic dynamics of emergent information.

In the ICON scheme, the recurrent theme is information/computation as the underlying structure/process. Information is fundamental as a basis for all knowledge and its processing characterizes all our cognitive functions. In a wider sense of proto-information it represents every physical/material phenomenon [63].

2.1.3. Is a Grand Unified Theory of Information (GUTI) Possible?

There are several approaches that make such a claim.

Among the prominent groups working on unification, the Unified Theory of Information (UTI) Research Group, the Association for the Advancement of Information Sciences (http://uti.at/projects.html), can be mentioned.

UTI Research Group “aims at the advancement of reflection and discourse in academia and society about the role of information, communication, media, technology, and culture in society. It works for building a better understanding and for dialogue in information science, communication and media studies, and science and technology studies (STS). It is interested in advancing critical ideas, approaches, methods, and research that are needed for establishing a global sustainable information society.”

Hofkirchner's UTI is about self-organizing systems (from the most primitive physical system to the social systems) that for themselves (in the case of cognition) or in interaction with other self-organizing systems (in the case of communication) or as part of higher-level self-organizing systems (in the case of cooperation) generate information and make use of it. And it is about artificial devices like the Turing machine computers that contribute to information generation not by organizing themselves (there is no self in the machine) but by being instrumental to the overarching social self-organization.

The info-computational framework ICON of Dodig Crnkovic [60], characterized by two basic ontological principles, information (structure) and computation (process), provides a unifying generative scheme for the range of phenomena from inanimate physical objects to cells, organisms, cognizing systems and ecologies, offering a new conceptualization of the nature of the structures and dynamics of informational phenomena. We will come back to this approach in the discussion of the informational universe/nature [61]. While UTI and ICON have different starting points, UTI in the humanities (which gives it a strong socio-political focus) and ICON in the natural sciences and computing (which makes it primarily interested in structures and processes at different levels of abstraction), they nevertheless converge towards compatible views.

According to the current idea of computationalism (natural computationalism, pancomputationalism), not only are machines capable of computing, but any dynamic behavior of a physical system can be interpreted as computation, including the dynamics of biological systems. See [67] on self-organizing, self-star (self-*) models, where self-* stands for self-organization, self-configuration, self-optimization, self-healing, self-protection, self-explanation, and self/context-awareness, applied to information-processing systems. Scheutz in [69] argues that this new kind of computationalism, applied to the theory of mind, is able to explain the nature of intentionality and the origin of language. The length of this article does not permit an extensive review of Scheutz’s argument, but in short, the main novelty of the new idea of computation (as in ICON as well) is that nature itself computes, so whatever processes are going on in our brains, they represent information processing, which is a general form of computation. It is not the same view as in old computationalism, where the brain was supposed to be equivalent to a Turing machine. And it is definitely not the claim that the brain is the mind. The distinction must be made between the structure (brain) and the process (mind).

Kampis in his book Self-Modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information, and Complexity describes the computational nature of those systems [70] that today are part of the new field of organic computing. Self-modification is one of the already mentioned self-* properties of living systems.

It is important to recognize the paradigm shift in thinking about the structures and functions of living organisms, which traditionally were considered to form a domain qualitatively different from technological artifacts such as computers and robots. The difference between present-day computing and the Turing-type model of computation lies in the role of the context of a given system. The Turing machine is context-independent and computes a function in isolation from the outer world. However, self-organizing organisms are essentially open and coupled to the environment [63].

The Turing Machine model is not the most expressive model for the type of processes going on in living organisms [71]. Expressing biology in informational terms leads to increased understanding of structures in the living world as scale-independent networks. Interactions within those networks are essential for the formation and maintenance of biological structures on different levels of organization.

Burgin [37], in his new book Theory of Information: Fundamentality, Diversity and Unification, offers an approach to unification based on a synthesis of the concepts of information describing processes in nature, technology, and society with the main insights from information theory. He calls his approach a General Theory of Information, explaining [37]:

“The general theory of information is a synthetic approach, which organizes and encompasses all main directions in information theory. It is developed on three levels: conceptual, methodological and theoretical. On the conceptual level, the concept of information is purified and information operations are separated and described. On the methodological level, it is formulated as system of principles, explaining what information is and how to measure information. On the theoretical level, mathematical models of information are constructed and studied.”

Besides General Theory of Information, Burgin's book addresses Statistical Information Theory, Semantic Information Theory, Algorithmic Information Theory, Pragmatic Information Theory and Dynamics of Information.

Though, prima facie, Brier’s Cybersemiotics does not appear to be a theory of information, in particular if one considers the subtitle of his book from 2008, which runs “Why information is not enough!”, it is, among other things, an attempt to find common grounds of information processes, at least in the living world. In a recent description Brier writes [72] (pp. 1902-1903):

“The integrative transdisciplinary synthesis of Cybersemiotics starts by accepting two major, but not fully explanatory, and very different transdisciplinary paradigms: 1. The second order cybernetic and autopoietic approach united in Luhmann's triple autopoietic system theory of social communication; 2. The Peircean phaneroscopic, triadic, pragmaticistic, evolutionary, semiotic approach to meaning, which has led to modern biosemiotics, based in a phenomenological intersubjective world of partly self-organizing triadic sign processes in an experiental meaningful world. The two are integrated by inserting the modern development of information theory and self-organizing emergent chemico-biological phenomena as an aspect of a general semiotic evolution in the Peircean framework.”

Biosemiotics, biology interpreted as the study of sign systems (notably Barbieri’s code semiotics), is one of the necessary links in the chain of hierarchies of meaning production. On the level of an organism, Menant in [46] (p. 255) defines meaning as a consequence of the constraint on the living entity: “stay alive”. Such a constraint, which has to be locally satisfied by the organism, goes together with the process of interpretation, or meaning generation, that links the living entity to its environment.

Like the UTI and ICON approaches, Brier’s Cybersemiotics is critical of mechanicism because it either neglects meaning and related phenomena or is reductionistic and levels them down. It is important to keep in mind that mechanistic approaches have been criticized even by informationalists. But unlike UTI and ICON, Brier associates the mechanistic approach with the term “information”, because the definitions of Shannon and Weaver, Wiener and Schrödinger, which in his view are prototypical for the mechanistic approach, are widely accepted in the natural and technical sciences [72] (p. 1914). Despite his scepticism towards informational approaches (based on Shannon), Brier construes an ontological hierarchy (“heterarchy”) of different levels across which information processes and meaning can develop [46] (p. 381): “Across levels, various forms of causation are more or less explicit (manifest). This leads to more or less explicit manifestations of information and semiotic meaning at the various levels of the world of energy and matter.” Brier argues that the foundation in system theory is not enough to explain living systems’ cognitive abilities, in particular meaning, and that Peircean semiotics is necessary. Thus, very much like UTI, which represents a perspective-shifting methodology that allows for both third-person and first-person experience [73,74], but in contradistinction to ICON, which accepts only third-person methodology, and in contradistinction to other biosemiotic threads such as Barbieri’s idea of copymakers and codemakers, Cybersemiotics includes a phenomenological perspective.

The impact of those new theories on the development of Philosophy of Information will be visible in the years to come.

2.2. Information Semantics

2.2.1. The Data Grounding Problem: How Can Data Acquire Their Meaning?

Floridi, who together with Taddeo [15] contributed to the research on the data grounding problem, explains the situation in the following way in Trends in Philosophy of Information, in [2]: “Arguably, the frame problem (how a situated agent can represent, and interact with, a changing world satisfactorily) and its sub-problems are a consequence of the data grounding problem (Harnad 1993; Taddeo and Floridi 2005). In more metaphysical terms, this is the problem of the semanticisation of being and it is further connected with the problem of whether information can be naturalised.”

The data grounding problem can be related to the two kinds of information, symbolic (language) and sub-symbolic (signals), and to the world as proto-information [44,60,61,63]. Within the pragmatic tradition, meaning is the result of use or, more generally, meaning is generated through the interaction of an agent with the world, including other agents [60]. An agent is defined in a generic sense: an entity is an “agent” if it has some degree of autonomy, that is, if it is distinguishable from its environment by some kind of spatial, temporal, or functional attribute and is able to engage in tasks in an environment without direct external control. That is the definition originating in Agent-Based Modeling, which makes it possible to model computationally a range of agents, from viruses to interacting ecosystems, as the sketch below illustrates.
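
The following minimal sketch (in Python; the class, the toy environment and the decision rule are illustrative assumptions of ours, not part of any cited framework) shows what this generic notion of an agent amounts to computationally: an entity with its own state, local perception and an autonomous action rule, operating in an environment without direct external control.

    import random

    class Agent:
        """A generic agent in the Agent-Based Modeling sense: it carries its own
        state (distinguishing it from the environment) and acts autonomously."""

        def __init__(self, position=0):
            self.position = position    # spatial attribute separating agent from environment
            self.memory = []            # internal state shaped by past interactions

        def perceive(self, environment):
            # The agent senses only its local cell, not the whole world.
            return environment.get(self.position, 0)

        def act(self, environment):
            signal = self.perceive(environment)
            self.memory.append(signal)
            # Toy autonomous rule: step forward if the current cell is poorer than
            # the running average of what has been seen so far, otherwise step back.
            self.position += 1 if signal < sum(self.memory) / len(self.memory) else -1

    # A toy environment: resource levels at integer positions.
    environment = {i: random.randint(0, 9) for i in range(-50, 51)}
    agents = [Agent(position=random.randint(-10, 10)) for _ in range(5)]
    for _ in range(20):                 # repeated rounds of interaction
        for agent in agents:
            agent.act(environment)
    print([agent.position for agent in agents])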

Data semantics (as especially evident in computer science and cognitive informatics) is therefore defined by the use of the data. Symbols are grounded in sub-symbolic information through the interactions of an agent. Symbols here are defined in the sense of Harnad (1991) who uses the Chinese Room Argument (Searle 1980) to introduce the Symbol Grounding Problem.

This is in line with the praxical solution proposed by Taddeo and Floridi [15] in the form of Action-based Semantics, with the simple basic idea that, initially, the meanings of the symbols generated by an agent are the internal states of the agent, which are directly correlated with the agent’s actions.

On the fundamental level, the quantum-informational universe performs computation on its own (Lloyd [75], Vedral [76]). Symbols appear on a much higher level of organization, and always in relation to living organisms/cognizing agents. Symbols represent something for a living organism; they have a function as carriers of meaning. See Menant’s article in [50] (p. 255).

As already pointed out, there are two different types of computation, and both are implemented in a physical substrate: sub-symbolic and symbolic computation. Douglas Hofstadter addressed the question of symbols formed by other symbols or sub-symbols in his book Gödel, Escher, Bach: An Eternal Golden Braid from 1979. It is interesting to notice that in the fields of Artificial Intelligence and Cognitive Science similar solutions to the symbol grounding problem have been proposed by a number of researchers, from Harnad [77] to Ziemke [78]. Smolensky and Legendre present a way of integrating connectionist (‘neural’) and symbolic computation, addressing computational, linguistic, and philosophical issues, in [79].

John Mingers, who developed a theory of data, information and meaning in the 1990s [80], advised in 2001 [81] giving greater consideration to neurophysiological processes in living systems when it comes to meaning, and put emphasis on embodied cognition, drawing on the concept of autopoiesis (self-organization) of Maturana and Varela. He states that it is the readiness of the nervous system that determines the response triggered by some external event. It is the body that unconsciously presents our conscious mind with preconfigured meanings.

Søren Brier, in his The Cybersemiotic Framework as a Means to Conceptualize the Difference between Computing and Semiosis in [59] (p. 178), offers a critical view which he also defends in his book Cybersemiotics: Why Information Is Not Enough!, in which he argues that first-person semiosis cannot be captured by info-computational models alone. Semiosis is a sign process which includes the production of meaning, and computation is assumed to be adequately modeled by a Turing machine. However, recent developments in the fields of cognitive computing and cognitive informatics involve much more complex info-computational architectures, as discussed by Müller and Dodig-Crnkovic [61]. Computation is not based on the Turing machine model but is taken to be natural computation, which encompasses all physical, chemical, biological and psychological as well as social processes, on different levels of organization of informational structures. Technically, the concept of a virtual machine from the theory of computation is used, as discussed by Aaron Sloman in [17]: virtual machines can be implemented or realized in physical machines or in other virtual machines.

2.2.2. Truth Problem: How Can Meaningful Data Acquire Their Truth Value?

2.2.3. Informational Semantic Problem: Can Information Theory Explain Meaning?

We discuss the above two problems together, as they are connected. Based on the scientific tradition, information semantics can be related to system modeling [82] and model validity. Truth might be ascribed to meaningful data organized into information in the sense of “correct well-formed information” within a coherent theoretical framework, implying that the data are correctly obtained, transmitted and stored, and that they have not been corrupted in communication or storage or used inappropriately. Such correct data might be called “true data”, but that is not the usual terminology in science and technology.

As knowledge is constructed from information, in order to provide a guarantee for knowledge to be true, Floridi proposes a new concept of Strongly Semantic Information [11], which requires information to be true and not only well-formed and meaningful data. Adriaans [83] presents an interesting critique, claiming that Floridi’s theory of semantic information as well-formed, meaningful, and truthful data is “more or less orthogonal to the standard entropy-based notions of information known from physics, information theory, and computer science that all define the amount of information in a certain system as a scalar value without any direct semantic implication.” Even Scarantino and Piccinini, in their article Information without Truth for the special issue of Metaphilosophy [28], remind us that “the main notions of information used in cognitive science and computer science allow A to have information about the obtaining of p even when p is false.”

Adriaans defends the position that “the formal treatment of the notion of information as a general theory of entropy is one of the fundamental achievements of modern science that in itself is a rich source for new philosophical reflection. This makes information theory a competitor of classical epistemology rather than a servant.” Chaitin in [84] argues for a similar position.

According to Adriaans, information theories belong to two programs: the empirical/Humean school and the transcendental/Kantian school. Floridi’s Strongly Semantic Information belongs to the transcendental program. Empirical approaches (such as those proposed by Shannon, Gibbs and Kolmogorov) present mathematical tools for the selection of “the right model given a set of observations”. While classical epistemology studies truth and justification, the theory of information is based on model selection and probability. Floridi’s philosophy, according to Adriaans’ analysis, incorporates selected notions from information theory into a classical research framework, while “information theory as a philosophical research program in the current historical situation seems much more fruitful and promising than classical epistemology.”

This sounds like a convincing diagnosis. What Floridi’s program finally aims at is to provide the basis for an understanding of knowledge, truth and justification in terms of information (and, as can be added from an info-computationalist stance, necessarily also in terms of its complementary notion of computation). At some point all high-level concepts (truth, justification) will need to be translated into low-level (info-computational) terms, in much the same way as symbolic and subsymbolic cognition must be connected in order to reconstruct the mechanisms that produce meaning.

On the other hand, Sequoiah-Grayson [85] defends Floridi's theory of Strongly Semantic Information “against recent independent objections from Fetzer and Dodig-Crnkovic. It is argued that Fetzer and Dodig-Crnkovic's objections result from an adherence to a redundant practice of analysis. (..) It is demonstrated that Fetzer and Dodig-Crnkovic fail to acknowledge that Floridi's theory of strongly semantic information captures one of our deepest and most compelling intuitions regarding informativeness as a basic notion.”

Nevertheless, even so, for Dodig-Crnkovic it seems reasonable to consistently rely on the fundamental framework of the theory of information instead of a mix of classical epistemological and new information-theoretical concepts.

2.2.4. Informational Truth Theory: Can a Theory of Information Explain Truth?

The theory of information can explain truth as an info-computational phenomenon, even though truth is not absolute but represents our best present knowledge within a given framework, as Adriaans suggests:

“Based on contributions of philosophers like Popper, Kuhn, Feyerabend, Lakatosh, Stegmuller, and Sneed in the middle of the twentieth century the common view among scientist is that scientific theories never can claim to be true definitively. What we can only do is try to find and select the best theory that fits the data so far. When new data are gathered, the current theory is either corroborated or, when the data are in conflict with the theory it has to be revised. The best we can reach in science is provisional plausibility. This is effectively the position of mitigated skepticism that is defended by Hume. This methodological position fits perfectly with the recent insights in philosophy of information, notably the theory of general induction that has been initiated by Solomonoff and his theory of algorithmic probability which is a cornerstone of modern information theory.”

Naturalized epistemology [86] and [60] describes the evolution of increasingly complex cognitive capacities in organisms as a result of interactive information processes, where information is more concerned with meaning for an agent than with truth, as meaning is directly related to agency. Agents are different entities on different levels of organization, characterized by a well-defined identity and a mode of autonomous action, as already mentioned in the generic definition of an agent used in Agent-Based Modeling theory. Knowledge is typically distributed in a system of agents in a community of practice (an interacting network of agents). Agency in the natural world is typically based on incomplete knowledge, where probabilities govern actions. Being internalized by an agent, data become information in the context of an agent’s experiences, habits and preferences. All of this is implemented in the agent’s bodily structures (including the brain, for those agents that possess one) and determines its possible interactions with the world. The adaptive structures of agents in networks act as a memory of past development and represent their learning history. As Salthe [87] puts it: “A species’ storage of historically acquired information is held in the genomes of the cells of its parts, as well as in material configurations in cell structures. At its own scale each species is unique; while at their scales, its parts (e.g., organisms) differentiate increasingly as they recover from perturbations during development, becoming ever more intensively unique.” This makes the relationship between information and meaning natural [86]. Meaning governs an intelligent agent’s behavior, based upon data structured into information and further structured into knowledge, which in interaction with the world results in agency. Truth as a control mechanism is arrived at first in the interaction, based on propositional knowledge, between several agents (inter-subjective consensus about knowledge) or in the relationship between different pieces of propositional knowledge that an agent possesses and can reason about. In the sense of Chaitin’s “truth islands” [88], some well-defined parts of reality can be organized and systematized in such a way that truth may be well-defined within those sets, via inter-agent communication. For an agent, meaning is a more fundamental phenomenon than truth, and both must be expressible in terms of models [83]:

“Within the context of information theory, the problem of founding knowledge as true justified belief is replaced by the problem of selecting the optimal model that fits the observations.”
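
To make this model-selection reading concrete, the following minimal sketch (ours; not Adriaans’, Solomonoff’s or Floridi’s formalism) illustrates the minimum description length idea behind it: of two candidate models for a sequence of observations, the one yielding the shorter total description of model plus data is selected.

    import math

    def description_length_bernoulli(bits):
        """Two-part code length (in bits) under a Bernoulli(p) model:
        parameter cost plus negative log-likelihood of the data."""
        n, k = len(bits), sum(bits)
        p = k / n
        param_cost = 0.5 * math.log2(n)      # standard cost for one real-valued parameter
        data_cost = 0.0
        for count, prob in ((k, p), (n - k, 1 - p)):
            if count > 0:                     # convention: 0 * log(0) = 0
                data_cost -= count * math.log2(prob)
        return param_cost + data_cost

    def description_length_fair_coin(bits):
        """Code length under the parameter-free 'fair coin' model: one bit per symbol."""
        return float(len(bits))

    biased = [1] * 90 + [0] * 10              # observations a biased-coin model explains well
    balanced = [0, 1] * 50                    # observations the fair-coin model explains well
    for name, data in (("biased", biased), ("balanced", balanced)):
        dl_b = description_length_bernoulli(data)
        dl_f = description_length_fair_coin(data)
        chosen = "Bernoulli(p)" if dl_b < dl_f else "fair coin"
        print(f"{name}: Bernoulli {dl_b:.1f} bits vs fair coin {dl_f:.1f} bits -> select {chosen}")

On the first data set the biased model is selected despite its parameter cost; on the second the simpler fair-coin model wins. No claim of definitive truth is made at any point, only a comparison of how well candidate models fit, and compress, the observations so far.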

From everyday experience we know that we act on the basis of knowledge we judge to be plausible, which may or may not be true. The underlying fundamental debate about certainty and probability is discussed by Fallis [89] in an analysis of probabilistic proofs and the epistemic goals of mathematicians.

As the uses of information can be many, in different contexts and for different agents, Allo in [90] addresses the problem of formalizing semantic information with logical pluralism taken into account. Van Benthem’s view is that logical pluralism is one of several ways of broadening the understanding of logic and its development [91].

2.3. Intelligence/Cognition

2.3.1. Descartes' Problem: Can Cognition Be Fully Analyzed in Terms of Information Processing at Some Level of Abstraction?

An example is Wang's Cognitive Informatics [92-96] which shows even how this can be done in practice at some level(s) of abstraction. According to its founder, Yingxu Wang, [97]:

“Cognitive Informatics (CI) is a transdisciplinary enquiry of cognitive and information sciences that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, and their engineering applications via an interdisciplinary approach.”

This transdisciplinary research builds on results from computer science, computer/software engineering, systems science, cybernetics, cognitive science, knowledge engineering, and neuropsychology, among others. Applications of CI include cognitive computing, knowledge engineering, and software engineering. The theoretical framework of CI links the information-matter-energy model, the layered reference model of the brain, the object-attribute-relation model of information representation in the brain, natural intelligence, autonomic computing, neural informatics, human perception processes, the cognitive processes of formal inferences, and formal knowledge systems. In order to provide a coherent formal framework for CI, new descriptive mathematical formalisms of Concept Algebra, Real-Time Process Algebra and System Algebra have been developed. From Wang’s work as well as van Benthem’s [58] it is evident that adopting information as a new fundamental principle calls for a change in formal approaches in logic, mathematics and model-building, and in the understanding of their cognitive functions.

On the basic level, according to the triple-C UTI model of Hofkirchner, cognition is a manifestation of information; that is, cognitive processes are those types of information processes that perform the function of relating a self-organizing system to some event or entity in its environment. When that system enters such a relation, it generates information. It is important not to forget that “cognition” in this context is not only meant for human systems but for all living systems and material systems as long as they self-organize. The model concedes cognizability to non-human systems too, albeit in different degrees according to the evolutionary stage they represent. In terms of complexity, “cognizability” refers just to the dimension of solitary systems, that is, individual phases of metasystem transitions or elementary levels of suprasystem hierarchies.

The point, however, is that cognition in UTI is an emergent process, a less-than-deterministic process the outcome of which is the generation of information that cannot be reduced to some perturbation of the system or some input in the system or some algorithmic information processing inside the system because it constitutes a leap in quality. Thus Turing machine computation is not able to provide a model of natural or human information generation [71].

On the other hand, the info-computational approach, ICON, analyzes cognition in terms of information structures and computational processes. Cognition is understood as a self-organized hierarchy of information-processing levels in a cognizing agent, in agreement with Maturana and Varela’s view of life as cognition [98,99]. The lowest level of organization of reality is “proto-information”, the term used by Dodig Crnkovic [63] to denote the physical world as information. Naturalized epistemology argues that all cognition is embodied and that all mental activity arises as an emergent phenomenon from an agent’s interaction with the environment, including self-reflection and interactions with other agents. Unlike UTI, info-computationalism does not ascribe cognition to any non-living entities, not even in the case of self-organizing systems (such as, for example, tornadoes). It should be pointed out that in ICON computation in general does not need to be deterministic, as in nature there are indeterministic processes as well, and they too present computation in a computational universe. Concerning the continuum vs. discrete debate, it is evident that those two complementary modes of description are used on different levels of organization, so they are both part of ICON.

2.3.2. Dennett's Reengineering Problem: Can Natural Intelligence Be Fully Analyzed in Terms of Information Processing at Some Level of Abstraction?

Here too, natural intelligence is based on a complex hierarchy of levels of information-processing architectures. Intelligence is closely related to cognition. As Maturana and Varela [98,99] argue, for a living system to live is to cognize, and the cognitive domain is the domain of states in self-organization (autopoiesis). Wang [100] defines abstract intelligence in the following way:

“In the broad sense, abstract intelligence is any human or system ability that autonomously transfers the forms of abstract information between data, information, knowledge, and behaviors in the brain or systems.”

In the field of AI, behaviors are important, so the chain data-information-knowledge (“information” here used in a restricted sense) ends with behavior and not with wisdom, as was earlier proposed by Stonier [101]. Wisdom in the sense of Stonier may be interpreted as a state of information that allows for successful behavior of the human system.

One of the foundations of intelligence is logic. As we learn about intelligence, natural and artificial, we also learn about logic. Here is van Benthem’s description of the state of the art [91]:

“Since the 1930s, modern core logic has been about at least two topics: valid inference, yes—but on a par with that, definability, language and expressive power. In fact, many of the deep results in logic are about the latter, rather than the former aspect: linked with Model Theory, not Proof Theory. And to me, that definability aspect has always been about describing the world, and once we can do that, communicating to others what we know about it. In fact, there is even a third pillar of the field, if we also count computation and Recursion Theory.”

This newly emerging, broader understanding of logic, with a “scope and agenda beyond classical foundational issues”, will also contribute to future developments in AI (Artificial Intelligence), IA (Intelligence Augmentation), Cognitive Informatics and Cognitive Computing. However, from the position of UTI, doubts have to be raised as to whether the new scope and agenda will further these developments or rather restrict them as far as emergence in natural intelligence is concerned.

One of the important advancements in the understanding of intelligence, knowledge generation and modeling is the development of generative multi-agent models. Generative models are generalizations of cellular automata that encompass agents with different individual characteristics and types of interactions, asynchronously communicating in a general topology. Those kinds of models (which Wolfram [102] rightly characterized as a “new kind of science”) have developmental properties very useful in the modeling of life phenomena. As living systems exhibit self-similar network structures from the molecular level to the level of ecology, agent modeling is the most general framework for such systems. It is interesting to notice the difference between the structural description and the dynamic description, as in (multi-)agent models. Even very simple structures, in the course of temporal development through interactions, can develop surprisingly complex patterns and even lead computationally to randomness [103], as the sketch below illustrates. Among the important insights gained from generative models, and from simulations in general, are the scale-independent network phenomena in living systems, directly connected to information communication among network nodes.
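
A minimal sketch of this point, using Wolfram’s well-known elementary cellular automaton rule 30 rather than a full multi-agent model: a trivially simple local rule, iterated over time, produces a complex, effectively random-looking global pattern.

    RULE = 30                                 # Wolfram's elementary rule 30
    WIDTH, STEPS = 64, 32

    def step(cells):
        """Update every cell from its three-cell neighborhood (periodic boundary):
        the new state is the bit of RULE indexed by (left, center, right)."""
        return [
            (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1                     # a single 'on' cell as initial condition
    for _ in range(STEPS):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)

Generative multi-agent models generalize exactly this setup: instead of identical cells on a line updated synchronously, heterogeneous agents with individual state interact asynchronously over an arbitrary network topology.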

2.3.3. Turing's Problem: Can Natural Intelligence Be Fully and Satisfactorily Implemented Non-Biologically?

The answer to this question depends on what is meant by “natural intelligence” and “fully and satisfactorily”. If we consider a dolphin as possessing natural intelligence, which features must we reproduce in order to claim that dolphin intelligence has been implemented fully and satisfactorily? The development of AI seems to suggest that we will quite soon be able to reproduce the intelligent behavior of some simple living organisms. Projects like Blue Brain http://bluebrain.epfl.ch [104] are designed specifically to simulate natural intelligence by reverse engineering the mammalian brain, first in the rat and then in the human. (This also relates to the previous question about Dennett's reengineering problem.) The biologically accurate model of the cortex, the “grey matter” of the brain that first appeared in mammals during evolution and is responsible for mental capacities such as thinking and anticipation, is built from the fundamentally simple, repetitive structure of the neocortical column found in all mammals. The difference between the rat brain and the human brain is supposed to lie basically in the volume of the cortex. The Blue Brain simulation re-creates this fundamental microcircuit of the neocortical column “down to the level of biologically accurate individual neurons”. In 2007 the Blue Brain project announced plans to model the entire human brain within the next 10 years. From the claim that the difference in intelligence among mammals is proportional to the volume of the cortex, one can conclude that a continued increase of the cortex of a simulated brain will result in increasing intelligence.
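
To give a flavor of what simulating individual neurons involves at its very simplest, the sketch below integrates a leaky integrate-and-fire neuron driven by a constant current. This is a drastic simplification compared to the detailed, biologically accurate multi-compartment models used in Blue Brain; all parameter values here are generic textbook choices and the example is offered only as an illustrative toy.

```python
# Toy sketch: a leaky integrate-and-fire neuron driven by a constant input current,
# integrated with the forward Euler method. Far simpler than the compartmental
# neuron models used in projects such as Blue Brain; for illustration only.

def simulate_lif(i_ext=2e-9,      # input current (A)
                 t_max=0.1,       # simulated time (s)
                 dt=1e-4,         # time step (s)
                 tau=0.02,        # membrane time constant (s)
                 r_m=1e7,         # membrane resistance (ohm)
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Return the spike times (in seconds) of a leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for k in range(int(t_max / dt)):
        # dV/dt = (-(V - V_rest) + R_m * I_ext) / tau
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:         # threshold crossed: record a spike and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

if __name__ == "__main__":
    print("spike times (s):", simulate_lif())
```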

There is another approach, taken by Boahen at Stanford and Meier at Heidelberg in the FACETS project (Fast Analog Computing with Emergent Transient States), which, instead of simulating neurons, emulates them, building “a brain on a silicon chip” in the form of hardware.

However, Wang [97] would not agree with those optimistic expectations concerning non-biological intelligence, and he presents a theorem (without proof) stating:

“The law of compatible intelligent capability states that artificial intelligence (AI) is always a subset of the natural intelligence (NI), that is: AI ⊆ NI”.

Taking seriously the basic assumptions about complexity in the perspective of UTI [105], Wang's theorem would be supported: natural intelligence crucially relies on emergent information, on processes that show emergence, which, according to UTI, in principle cannot be incorporated into artificial intelligence.

However, one can wonder what will happen with natural intelligence augmented by increasingly advanced AI or, in general, with the extended mind [106]. Cognitive computing devices can exceed specific human cognitive capabilities (such as logical problem solving, pattern recognition, search, and memory). Already there are software systems which exceed any single human's comprehension, and which augment our cognition by performing cognitive tasks for us.

It is evident that not all researchers would agree with the claim that human intelligence is the limit, and indeed, from the perspective of info-computationalism, there is no fundamental reason not to exceed human-level intelligence by the use of cognitive machines. It will be interesting to follow the development of the simulation/emulation projects mentioned above, aiming at (at least) human-level AI.

2.3.4. The MIB (Mind-Information-Body) Problem: Can an Informational Approach Solve the Mind-body Problem?

Cognitive Informatics postulates two different essences, matter-energy and information: “The Information-Matter-Energy-Intelligence (IME-I) model states that the natural world (NW) which forms the context of human and machine intelligence is a dual: one facet of it is the physical world (PW), and the other is the abstract world (AW), where intelligence (I) plays a central role in the transformation between information (I), matter (M), and energy (E)” [100].

The above is classical mind-body dualism, but without the mystical problem of connecting the physical with intelligence/mind. It seems evident from AI that some rudimentary intelligence can be programmed into a physical medium; whether even higher-level intelligence can be implemented non-biologically in the near future is a question that will be resolved empirically.

Within the info-computational framework ICON of Dodig-Crnkovic, information and matter/energy are represented by information and computation. Computation is the implementation of physical laws on an informational structure [60]. Instead of describing the world in terms of matter/energy (where energy stands for the equivalent of matter) and information (which corresponds to a structuralist view of the world as consisting of stuff that changes patterns), the info-computationalist approach makes the distinction between structure (information) and process (computation). The mind/body problem is solved in a simple way. Mind is a process, information processing, and body is a structure (proto-information). Thus, mind is a process of natural computation that results from the dynamical re-configuration/re-structuring of the information in the brain, in concert with the rest of the body, which connects it with the physical world. The structure and the process are inseparably interwoven by physical laws [61].

In the emergentist UTI framework, mind is an emergent evolutionary level of information manifestation [82]. The human mind is inextricably bound to the corresponding physical stratum (human body, human brain) brought about by evolution.

2.3.5. The Informational Circle: If Information Cannot Be Transcended but Can Only Be Checked against Further Information—If It Is Information All the Way up and All the Way Down—What Does This Tell Us about Our Knowledge of the World?

If we adopt Stonier's [101] view that information is structured data, and that knowledge is structured information, we may say that information is a building block in more complex structures, but the structure is what makes the difference. Informational Structural Realism (ISR) was developed by Floridi [20]. If we want to understand the behavior of a living organism, we must know those structural relationships, both upwards and downwards in the complexity hierarchy.

Wang [100] argues for adding behavior to Stonier's hierarchy:

“A key in the study of natural and artificial intelligence is the relationships between information, knowledge, and behavior. Therefore, the nature of intelligence is an ability to know and to do, possessed by both human brains and man-made systems. In this view, the major objectives of cognitive, software, and intelligence sciences are to answer:

How the three forms of cognitive entities, i.e., information, knowledge, and behavior, are transformed in the brain or a system?

What is the driving force to enable these transmissions?”

The transformation from information (in the broader sense used in our context) of one kind, level, or quality to a higher kind, level, or quality of information cannot be sought in a mechanistic process, since in a mechanistic process nothing new can emerge: the result is fully derivable from, and thus reducible to, the initial conditions and the mechanism in operation. Emergent information would point to self-organization as the driving force. Humans do not only produce mechanistic systems such as Turing-machine computers, but also build self-organizing systems, such as social systems of any kind.

The “information cannot be transcended” situation is reminiscent of pre-Socratic natural philosophy, in which a single basic cosmological principle, a quintessential substance, was sought, with prominent representatives such as Anaximander, who advocated the apeiron as the origin or ultimate reality from which everything existent can be derived, and the atomist school, which postulated atoms as the indivisible basic elements of matter.

The philosophical study of the nature of information and its relationships to intelligence leads directly to biology (among others molecular biology, developmental biology, computational biology, bioinformatics, neurobiology, ethology, evolutionary biology, biotechnology, biochemistry and biophysics, genetics, genomics, structural biology, systems biology) and other life sciences (such as cognitive and computational neurosciences, ecology, neuroinformatics) and similar research providing new insights from the study of living things into the processes of cognition and intelligence. This process of philosophical meta-analysis must be informed by the results of current research and must be kept accurately up to date. The progress of the life sciences at the moment is such that no single human can have complete insight into any broader field beyond his or her narrow field of specialization, which makes transdisciplinary collaboration increasingly important.

That information is always checked against information only is supported by the interminable cascade of building one metalevel after another, viewed from the angle of “emergent information”. Here information is the self-organized relating of a system to another event or entity, and every system that organizes itself is free to position itself vis-à-vis its environment, to establish a new level and thus to add another metalevel to whatever levels exist so far. An idea like this need not end up in radical constructivism, though. The system-made construction of a metalevel is always bound to the activity of a situated, embodied real-world system that engages with its environment and is capable of renewing its engagement according to the feedback it is exposed to from the environment.

2.3.6. The Information Continuum Conjecture: Does Knowledge Encapsulate Truth Because It Encapsulates Semantic Information? Should Epistemology Be Based on a Theory of Information?

If information is meant as strongly semantic information [11], then the answer should be yes, as knowledge properly constructed from strongly semantic information should encapsulate truth. However, the concept of truth may not apply when information is used in a broad context in which reality is an informational structure [76].

Even in the case of “information in the wild” (e.g., biological information, for which truth is not well defined) it is good to base epistemology on a theory of information, as already pointed out by Adriaans, so as to obtain a phenomenologically informed, naturalized epistemology.

Chaitin in [84] formulates Epistemology as Information Theory.

Epistemology as a part of philosophy deals with human cognition at the highest level of abstraction. On the other hand, human cognition seems to be a special case of cognition that shows evolutionary stages; it is a late product of biotic evolution on Earth. From a theoretical view of information in the broad sense (among others the UTI, the ICON and, to a certain extent, Cybersemiotics), cognition can be seen as a manifestation of information, and cognitive processes in human and prehuman systems as information (generation) processes. Philosophy of Information deals with information at the highest level of abstraction. Thus it should include evolutionary thinking, and it is obvious that, given that assumption, there is a continuum and epistemology can be based upon Philosophy of Information (in analogy to looking upon cognition as an information process), as was initiated in Evolutionary Epistemology by Erhard Oeser [107].

2.3.7. The Semantic View of Science: Is Science Reducible to Information Modeling?

The answer depends on how we understand modeling. Information modeling is at the very heart of every empirical science. Theoretical physics, for example, uses the results of empirical models for building layers of theory upon empirical informational structures, originating in object-level information modeling. New scientific knowledge is obtained not only from empirical data but also from relating it to existing theories. One can also view all theoretical work as a kind of modeling. In that case the answer would be yes, scientific knowledge is a result of information modeling, even though many scientific practices do not have the character of modeling—e.g., observations and measurements are fundamental interactions of intelligent agents with the environment, so they per se are not modeling, even though they are theory-laden. At present we are only at the beginning of the development of automated discovery, automated knowledge mining, automated theorem proving, and similar techniques based on the idea that science might be reducible to information modeling.
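
As a small, concrete illustration of the kind of information-modeling step that automated discovery and knowledge mining routinely perform (a toy example of our own, not a description of any particular system), the sketch below recovers the exponent of a power law from noisy synthetic "measurements" by ordinary least squares on log-transformed data.

```python
# Toy sketch of information modeling: recovering a power law y = a * x**b
# from noisy synthetic data by linear regression in log-log space.
import math
import random

def fit_power_law(xs, ys):
    """Return (a, b) such that y is approximately a * x**b (least squares on logs)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

if __name__ == "__main__":
    random.seed(0)
    # Synthetic observations following y = 2 * x**1.5 with multiplicative noise.
    xs = [x / 10 for x in range(1, 101)]
    ys = [2.0 * x ** 1.5 * math.exp(random.gauss(0, 0.05)) for x in xs]
    a, b = fit_power_law(xs, ys)
    print(f"recovered model: y = {a:.2f} * x^{b:.2f}")   # close to y = 2 * x^1.5
```

Real scientific modeling, of course, also involves choosing the model class, handling measurement error and relating the fitted model to existing theory, which is precisely why observation and theory-building are not exhausted by such automated steps.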

As already mentioned, in order to be able to provide relevant discourse, Philosophy of Information must be informed by the life sciences as well as the material sciences. Discussing Informational Structural Realism leads to the discussion of different levels of organization of the physical world—from material systems like elementary particles, atoms, molecules, planets, planetary systems, galaxies and universe[s], to living systems like biomolecules, cells, organisms and ecosystems, to human societies—all of which are the result of the natural process of self-organization, present at different spatial and temporal scales.

Exactly this connection to up-to-date research is offered in the book Information and Computation [50], which examines questions of knowledge (Brier), information dynamics (Burgin), mathematics as biological process (Chaitin), measurement and irreversibility (Collier), the computational content of supervenience (Cooper), mechanicist vs. info-computational world systems (Dodig-Crnkovic and Müller), computing and self-organization (Hofkirchner), information and computation in physics as explanation of cognitive paradigms (Kreinovich, Araiza), bodies informed and transformed (MacLennan), an evolutionary approach to computation, information, meaning and representation (Menant), interior grounding, reflection, and self-consciousness (Minsky), biological computing (Riofrio), super-recursive features of natural evolvability (Roglic), a modeling view of computing (Shagrir), information for an organism or intelligent machine (Sloman), inconsistent information as a natural phenomenon (de Vey Mestdagh and Hoepman), and the algorithmic nature of the world (Zenil, Delahaye).

2.4. Informational Universe/Nature

2.4.1. Wiener's Problem: Is Information an Independent Ontological Category, Different from the Physical/Material and the Mental?

This is a question about metaphysics, and the Philosophy of Information builds on metaphysical naturalism. In order to put this view into context, it is instructive to look at the critique of present-day metaphysics in the book Everything Must Go: Metaphysics Naturalized by Ladyman, Ross, Spurrett, and Collier [108], who propose a general “philosophy of nature” based on “ontic structural realism”. The universe is “nothing but processes in structural patterns all the way down” (p. 228); “From the metaphysical point of view, what exist are just real patterns” (p. 121). Understanding patterns as information, one may infer that information is a fundamental ontological category. The ontology is scale-relative. What we know about the universe is what we get from the sciences, as “special sciences track real patterns” (p. 242). “Our realism consists in our claim that successful scientific practice warrants networks of mappings as identified above between the formal and the material” (p. 121). This points back to the previous question about information modeling in science. The authors provide a convincing critique of traditional analytic metaphysicians who are “still talking as if the world is individual items in causal relations, rather than processes in structural patterns all the way down”. The book defines verification in terms of information transfer (pp. 307-310) and adopts Salmon's process theory of causality in the form of “information carrying”. Even though the focus of the book is to argue for naturalized metaphysics, mainly through the philosophy of physics, it is compatible with the metaphysical claims of the Philosophy of Information.

Information may be considered the most fundamental physical structure, as in Floridi's Informational Structural Realism [20]. It is in permanent flow, in a process of transformation, as known from physics. Von Baeyer [41] suggests that information is to replace matter/energy as the primary constitutive principle of the universe, providing a new basic unifying framework for describing and predicting reality in the twenty-first century. In a similar vein, Wang [97] postulates a dual-aspect reality with matter-energy and information as its basic principles:

“Information is recognized as the third essence of the natural world supplementing to matter and energy (Wang, 2003b), because the primary function of the human brain is information processing.”

Structures are outcomes and media of processes. If these processes are processes of self-organization in which systems relate to events or entities in the environment, the emerging relations are structures that are essentially informational. The “emergent information” view (UTI) is in that respect “emergentist monism”, as Peacocke [49] names it; applied to information, it means that information is something that emerges from matter and energy. If it emerges, then matter and energy provide the necessary condition for information to come about, but not a sufficient condition. That is, without matter or energy there is no information. In that sense, the emergentist information concept is materialistic. But according to UTI it is neither materialistic in the mechanicist, reductionist sense nor idealistic.

At the fundamental level, information can be said to characterize the world itself, for it is through information that we gain all our knowledge—and yet we are only beginning to understand its meaning [52]. Among the unifying strategies, the statistical models presented by Adriaans [83] should be mentioned.

In the ICON dual-aspect theory of the physical universe, one sees information as the structure of the material world, while computation is its time-dependent evolution, the implementation of physical laws. Through biological evolution, self-organization by natural computation leads to increasingly powerful cognitive agents. Life is cognition, according to Maturana and Varela [98,99], and it produces intelligence based on information processing. In that way, fundamental-level proto-information is identical with the physical structure, while mind is a process that appears as a product of evolution in complex biological structures. Within the ICON framework [61] information is an independent ontological category, its basic structures are the fabric of the physical world, and mental phenomena are natural computational processes in highly complex biological informational structures.

Physicists Zeilinger [109] and Vedral [76] suggest seeing reality and information as one. Researchers such as Chiribella (Perimeter Institute for Theoretical Physics) are working on the development of the mathematics of quantum theory completely reconstructed from a set of principles about information processing: “The key principle in our reconstruction of quantum theory is the ‘purification principle’, stating that every mixed state of a system A can be obtained as the marginal state of some pure state of a joint system AB. In other words, the principle requires that the ignorance about a part be always compatible with the maximal knowledge of a whole.” (Chiribella, seminar at the Computing Laboratory, Oxford, 18 January 2011)
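
Stated in standard quantum-information notation (our rendering of the quoted principle, not Chiribella's own formalism), the purification principle says that every mixed state of a system A is the marginal of some pure state of a larger system AB:

```latex
% Purification principle, standard quantum-information formulation:
% every density operator \rho_A admits a pure state of a joint system AB
% whose partial trace over B recovers \rho_A.
\[
\forall \rho_A \;\; \exists\, B,\ |\Psi\rangle_{AB} \ \text{pure}:\qquad
\rho_A \;=\; \operatorname{Tr}_B\!\left( |\Psi\rangle\langle\Psi|_{AB} \right).
\]
```

In the reconstruction programme this expresses exactly the requirement quoted above: ignorance about the part A is always compatible with maximal (pure-state) knowledge of the whole AB.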

Biologists, too, such as Kurakin [110], add to this information-based view of the Universe/Nature: “When reconceptualized in equivalent terms of self-organizing adaptive networks of energy/matter/information exchanges, complex systems of different scales appear to exhibit universal scale-invariant patterns in their organization and dynamics, suggesting the self-similarity of spatiotemporal scales and fractal organization of the living matter continuum.”

From the fields of physics and biology new insights essential for the Informational Universe/Nature may be expected in the years to come. A forthcoming issue of the journal Information is dedicated to matter/energy and information and will try to elucidate those fundamental relationships.

2.4.2. The Problem of Localization: Could Information Be Neither Here (Intelligence) Nor There (Natural World) but on the Threshold, as a Special Relation or Interface between the World and Its Intelligent Inhabitants (Constructionism)?

In the ICON framework [60] there is no Cartesian divide between body and mind. The naturalized epistemology approach [86] conceptualizes information as both here (intelligence) and there (world) and on the threshold, since information constitutes basic existence. Its structural changes are the results of computational processes. We have a long way to go in learning how exactly those computational processes are to be understood and simulated at different levels of organization of informational structures, but the first step is to establish the basic conceptual framework which smoothly connects the natural world with intelligence [61].

On the other hand, in Hofkirchner's UTI view, information is neither outside in the first (natural) world, nor in a Platonist third world waiting to be collected, detected and received by intelligent beings, nor is it something constructed by intelligent beings and residing in their second world only. In UTI, information is overarching, comprising complex material systems and their environment and connecting them by the establishment of relations between them that become manifest in a change of the structure, the state, or the behavior of the systems. As such, information is part of the natural world, but it is bound exclusively to the existence and activity of self-organizing systems and cannot exist without, or external to, that. Information as an objective relation between a “subjective” system and its environment can itself become a trigger for another information process in which another “subjective” system relates itself to that very information via another “subjective” change in its own structure, state, or behavior. The term “subjective” is used here to characterize the spontaneity of complex systems when self-organizing, and is meant to come in degrees according to the evolutionary stages complex systems represent [68]. “Subjectiveness” is what is modeled by means of autonomous agents.

2.4.3. The “It from Bit” Hypothesis: Is the Universe Essentially Made of Informational Stuff, with Natural Processes, including Causation, as Special Cases of Information Dynamics?

The development in this direction can be seen in Floridi, who argues for Informational Structural Realism (ISR). The fundamental claim of info-computationalism (ICON), which builds on ISR and natural computationalism, is that the universe is essentially made of informational stuff, and that computation may be seen as the implementation of the physical laws that govern information dynamics.

In his “Universe from Bit”, Davies [49] (pp. 65-91) argues for a shift in the sequence of mathematics, physics and information: “The traditional relationship between mathematics, physics, and information may be expressed symbolically as follows: Mathematics → Physics → Information”. “The variant I wish to explore here is to place information at the base of the explanatory scheme, thus: Information → Laws of Physics → Matter” (p. 75). The rationale is that “the laws of physics are informational statements” (p. 75). Though this might seem an ontological flaw, Davies can state at the same time that “the laws of physics are inherent in and emergent with the universe, not transcendent of it” (p. 83), because he postulates “a self-consistent loop: The laws of physics determine what can be computed, which in turn determines the informational basis of those same laws of physics” (p. 87).

The thesis that the laws of physics evolve because of, and together with, the self-organizational capabilities of the systems inhabiting the universe is in accordance with the view that the laws of physics are not a given to the universe but completely of this world—a view that is shared by the emergentist variety of information concepts (UTI), which would nevertheless stick to the reverse of the saying “It from Bit”: “Bit from It”.

2.5. Values/Ethics

2.5.1. Are Computing Ethics Issues Unique or Are They Simply Moral Issues that Happen to Involve ICT? What Kind of Ethics Is CE? What Is the Contribution of CE to the Ethical Discourse?

It is interesting to follow the evolution of this question, which first concerned the existing Computer Ethics, then Computing Ethics, and finally Information Ethics, each defining an increasingly abstract subject. Floridi's focus, initially on computing [3], shifted gradually towards information as the most abstract and fundamental principle, and he developed Information Ethics (IE) as a part of his Philosophy of Information (PI). In what follows we comment on the developments within Information Ethics, leaving Computer Ethics/Computing Ethics as a historical origin.

Froehlich [111] introduces the history of Information Ethics with the claim: “Information ethics has grown over the years as a discipline in library and information science, but the field or the phrase has evolved and been embraced by many other disciplines.” Froehlich mentions contributions by Capurro (1988), who in 1999 founded the International Center for Information Ethics, Severson (1997), Johnson (1985), Sullivan (1996), Spinello (2003) and a number of others.

Starting from the paper on Information Ethics [4], Floridi's research in IE [5,7,12,13,16,23,25] has attracted considerable interest in the research community and has been presented and reviewed on several occasions in special issues of journals.

Two issues of the APA Newsletter have discussed Floridi's work. In the fall of 2007, Floridi published an article in the newsletter titled Understanding Information Ethics [14]. In the next issue of the newsletter [112] a number of commentaries by prominent ethicists were published in response to Floridi's article, addressing topics such as IE as macroethics (Vaccaro), the re-ontological revolution (Sullins), discursive explorations in IE (Buchanan), and problems of the infosphere (Chopra). The discussion continues in the following issue [113] of the APA newsletter, with articles on the metaphysical foundation for IE (Bynum), good and evil in IE (Barker), and the moral status of informational objects (Howlett Spence). The debate concludes with Floridi's Understanding Information Ethics: Replies to Comments, published in 2009 [22].

A special issue of the journal Ethics and Information Technology in 2008, edited by Charles Ess [114] and titled “Luciano Floridi's Philosophy of Information and Information Ethics: Critical Reflections and the State of the Art”, testifies to the vitality of the PI and IE research program. Dodig Crnkovic [115] addresses several critical views from [114] arising from the discussion of the role which IE plays in relation to other ethical theories. It is argued that IE is a fundamental-level approach which, as an instrument of enquiry, can be used for specific purposes, and not as a replacement for all existing tools of ethical analysis. Understanding the proper application is essential, for otherwise it would be like using a microscope to observe astronomical objects and concluding that it does not work.

Floridi's IE focuses on the fundamentally informational character of reality [20] and our interactions with it. According to Floridi, ICTs create our new informational habitat, which is an abstract equivalent of an ecosystem. As moral judgments vitally depend on information about the present state and about what is understood to be a desirable state of affairs, the macro-ethical behavior of networks of agents depends on mechanisms of information processing and communication. Information streams in the Infosphere can both enrich and pollute the informational environment for an agent. These informational processes are essential in the analysis of the behavior of networks of agents, biological and artificial.

Classical ethics approaches typically look at individual behavior (e.g., Virtue Ethics) or group behavior (e.g., the Ethics of Rights), while IE provides a framework for an agent-based approach. It is important to notice that Floridi's Philosophy of Information with Information Ethics is a research program and not a single theory. As a macro-ethics, applicable to networks of communicating agents and at the same time giving a fundamental-level view of information patterns and processes, IE can help identify general mechanisms and understand their workings. The insight into the underlying informational machinery helps to improve our analysis of ICT-supported systems. It is now possible to study the effects of different sorts of information communication, and their influence on informational networks, including the role of misinformation, disinformation, censorship (lack of information) and the like. There are many parallels between IE and environmental ethics, of which IE is a generalization, where the infosphere may be understood as our new cognitive environment. However, there is an important sense in which they differ. Environmental ethics is placed on the same “macroscopic” and everyday-life level of description, while IE is on a more abstract level of information structures and processes.

IE is likely to continue developing as one of the tools of investigation which will help improve our understanding of the ethical aspects of life in an increasingly densely populated infosphere. We are far from being able to reconstruct/generate/simulate the structure and behavior of an intelligent agent or a network of agents starting from proto-information as the stuff of the universe.

IE is not a machine for the production of ultimate ethical advice [115], but a promising analytical instrument, especially suitable for the ethical analysis of techno-social systems with a mixture of human and artificial intelligent agents. With the development of agent models we may expect numerous new applications of IE. Floridi's comments [19] on the articles in [114] can be summarized as follows:

“There are, however, ‘correct accounts’ that may complement and reinforce each other, like stones in an arch” [16].

IE is far from a closed chapter in the history of ethics. On the contrary, it is of great interest to many researchers today, and its development can be expected to contribute to the elucidation of a number of central issues, particularly those related to systems of biological and artificial agents.

IE is manifestly a research field in its own right. It is not only an extension of classical ethical issues to the realm of the infosphere, today enhanced by ICTs. The content of the extension cannot be reduced to traditional ethics; that is, it is not possible to deduce findings in or for IE from the body of traditional ethics plus the premises that represent the conditions of the existence of ICTs.

In Hofkirchner's UTI framework, ICTs raise new issues in which philosophy has to reconcile the particular with the universal. However, it is contested whether or not artificial devices can be regarded as informational agents that are patients [116]. Capurro, together with a number of other ethicists, argues that ICTs acquire their meaning by the very act of being instrumental in human self-organization. And this meaning is related to the purpose for which ICTs are made, to the purpose for which they are used, and to the good or evil associated with their unintended consequences in social, biotic, or physical subsystems. Without their embeddedness in human self-organization, in the suprasystem of societies, they would be meaningless. It is therefore of utmost importance to consciously and cautiously integrate ICTs into the bigger picture. However, we need not postulate intrinsic values of all the entities to be morally guided to respect them [117] (p. 188). Self-organizing systems, including humans, are both agents and patients in varying degrees and are evaluated in informational settings by each other [117].

In summary, Information Ethics has undoubtedly established itself as a unique research field. In the future we expect IE to elaborate its relationships with other ethical theories and to demonstrate applications in different contexts, especially when it comes to the blend of natural and artificial agents in techno-social informational environments far from traditional ethical scenarios [115].

3. Conclusions

“In retrospect, all revolutions seem inevitable. Beforehand, all revolutions seem impossible.” McFaul M., US National Security Council, NY Times June 21, 2009.

In hindsight we can conclude that Floridi's program identified information-level approaches as the most important developments in a variety of research fields. Philosophy of Information can be seen as encompassing both Natural and Human Philosophy (Bacon's distinction), based on the results of advances in information studies and on the development of computing technologies and theories. It thus provides an overarching view, bridging the traditional division between knowledge of the non-living and the living world [63]. Among the central problems of Floridi's program are the informational universe, intelligence/cognition and the nature of knowledge in relation to information.

Over the past decade, the information-processing paradigm of cognition has gained dominance because of its ability to provide a suitable framework for interdisciplinary communication and for learning about intelligent mind and agency. Despite all the impressive progress made in recent years, the human mind is still largely poorly understood. For the study of the nature of information it is essential to understand information processes and structures in the brain and nervous system. As “nothing makes sense in biology except in the light of evolution” [118] (p. 449), the informational reconstruction of the human mind must also necessarily be carried out in the light of evolution.

In the years to come we expect new answers to Floridi's questions about intelligence/cognition and information semantics to emerge from both empirical and theoretical work in neuroscience, cognitive informatics, natural computing/organic computing, generative modeling, bioinformatics, intelligent systems, robotics, and more.

The development of Philosophy of Information (PI) can be seen as proceeding simultaneously at two levels, through:

  • Externally driven philosophical reflection on the advances of the underlying research fields providing the best current knowledge of the informational nature of the world

  • Internally driven restructuring of PI itself as a philosophical discipline with the ability to adapt to the rapidly changing world of its subject matter.

What can we expect from the Philosophy of Information in the future?

Adriaans [83] adds to the list of remaining open problems of PI: the nature of the various probability distributions that dominate the logical, physical, biological and cultural domains; the interaction between information and computation; the approximation of various compression measures; and the study of cognition and learning as data compression.

Van Benthem, in the Philosophy of Information handbook, proposes the following additions to the list of open problems: visual and other information carriers beyond language; information and context; and information, interaction and games [2] (p. 274).

We would suggest including Complexity as one of the focus research areas where information is the basic concept and where essential improvements in our current understanding of the systemic behavior of information can be expected. Improved understanding of information structures and dynamics in networks is important for a broad range of phenomena, from networks of neurons to social phenomena. A significant insight from biology [110], namely that

“diverse complex adaptive systems, such as proteins, cells, organisms, organizations, societies and ecosystems, all together constitute one developing, multiscale continuum-economy composed of interacting and interdependent adaptive organizational forms that co-exist and co-evolve at different spatiotemporal scales, forming a nested set of interdependent organizational hierarchies.”

points towards natural cohesive mechanisms between different levels of abstraction and should be taken into account.
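
A minimal sketch of one mechanism behind such scale-invariant network organization (our own illustrative example in the spirit of the discussion above, not Kurakin's model) is preferential attachment: new nodes link preferentially to already well-connected nodes, and a heavy-tailed, approximately scale-free degree distribution emerges.

```python
# Toy sketch of preferential attachment: new nodes attach to existing nodes with
# probability proportional to their degree, yielding the heavy-tailed degree
# distributions characteristic of many biological and social networks.
import random
from collections import Counter

def preferential_attachment(n_nodes=2000, m=2, seed=0):
    """Grow a network; return a Counter mapping node -> degree."""
    random.seed(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    stubs = [node for edge in edges for node in edge]   # each node repeated once per incident edge
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:                         # m distinct degree-weighted targets
            targets.add(random.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend([new, t])
    return Counter(stubs)

if __name__ == "__main__":
    degrees = preferential_attachment()
    histogram = Counter(degrees.values())
    for k in sorted(histogram)[:10]:
        print(f"degree {k:3d}: {histogram[k]} nodes")   # counts fall off roughly as a power law
```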

As a part of the study of the mutual relationship between information and computation, it is vital to look at the agency/behavior/computational aspects of information. This is the domain of the pragmatics of information, which, together with the syntactic aspects of information, deserves special focus in the future development of PI.

It is interesting to note a phenomenon that Philosophy of Information has in common with several current research programs: the dynamical equilibration of a building on moving ground. It is developing a framework based on the best current knowledge from several fundamental disciplines, all of which are simultaneously undergoing paradigm shifts—from logic (changing ideas of truth, formal systems, proof, identity, contradiction, temporal and dynamic logic), computing (a generalized idea of computing, natural computing, organic computing), cognitive science (new insights into mechanisms of the human mind and its relationships to body and environment), neuroscience (basic-level understanding of neural information processing), biology/bioinformatics (computational models of basic biological processes), physics (info-computational foundations of physics), sociology (generative models of agent networks) and semiotics (meaning production in a succession of levels of organization in living agents) to the changing understanding of what science indeed is and what it should be in the future.

It should be pointed out that Philosophy of Information is a vast field, and in this paper we have been able only to briefly sketch some of the developments of the past ten years. Our work can only be a very modest contribution, not least because, to paraphrase Abramsky, the research field we are attempting to overview does not yet exist in a fully realized form [2] (p. 486). Given the context of the major paradigm shifts we are experiencing, Philosophy of Information should not be envisaged as an automaton producing timelessly correct statements about the world, built on the ground of basic theories that are themselves in transition. PI can instead provide a valuable cognitive tool for understanding informational phenomena at a more general level, which we can consult instead of depending on notoriously unreliable collective intuitions. As an integrative framework, PI is expected also in the future to provide dynamic epistemic support for a number of research communities connected by the fundamental idea of information.

Acknowledgments

The authors would like to acknowledge the many valuable suggestions made by Mark Burgin on previous versions of the manuscript, and we also thank Luciano Floridi for helpful remarks on Ethics.

Finally, we would like to thank two anonymous reviewers for their insightful comments.

References

  1. Floridi, L. Open problems in the Philosophy of Information. Metaphilosophy 2004, 35, 554–582. [Google Scholar]
  2. van Benthem, J.; Adriaans, P. Philosophy of Information; North Holland: Amsterdam, The Netherlands, 2008. [Google Scholar]
  3. Floridi, L. Philosophy and Computing: An Introduction; Taylor & Francis, Inc.: Philadelphia, PA, USA, 1999; pp. 1–256. [Google Scholar]
  4. Floridi, L. Information ethics: On the philosophical foundation of computer ethics. Ethics Inform. Technol. 1998, 1, 33–52. [Google Scholar]
  5. Floridi, L.; Sanders, J. Artificial evil and the foundation of computer ethics. Ethics Inform. Technol. 2001, 3, 55–66. [Google Scholar]
  6. Floridi, L. What is the Philosophy of Information? Metaphilosophy 2002, 33, 123–145. [Google Scholar]
  7. Floridi, L.; Sanders, J. Mapping the foundationalist debate in computer ethics. Ethics Inform. Technol. 2002, 4, 1–9. [Google Scholar]
  8. Floridi, L. Two approaches to the Philosophy of Information. Mind Mach. 2003, 13, 459–469. [Google Scholar]
  9. Floridi, L. Blackwell Guide to the Philosophy of Computing and Information; John Wiley and Sons Ltd.: Oxford, UK, 2003; pp. 1–392. [Google Scholar]
  10. Floridi, L. Informational Realism. In Selected Papers from Conference on Computers and philosophy; Australian Computer Society, Inc.: Darlinghurst, Australia, 2003; Volume 37, pp. 7–12. [Google Scholar]
  11. Floridi, L. Outline of a theory of strongly semantic information. Mind Mach. 2004, 14, 197–221. [Google Scholar]
  12. Floridi, L.; Sanders, J. On the Morality of Artificial Agents. Mind Mach. 2004, 14, 349–379. [Google Scholar]
  13. Floridi, L. Information ethics, its nature and scope. Comput. Soc. 2006, 36, 21–36. [Google Scholar]
  14. Floridi, L. Understanding Information Ethics. APA Newslett. Philos. Comput. 2007, 7, 3–10. [Google Scholar]
  15. Taddeo, M.; Floridi, L. A Praxical solution of the symbol grounding problem. Mind Mach. 2007, 17, 369–389. [Google Scholar]
  16. Floridi, L. Information ethics: A reappraisal. Ethics Inform. Technol. 2008, 10, 189–204. [Google Scholar]
  17. Floridi, L. Philosophy of Computing and Information: 5 Questions; Automatic Press: Birkerød, Denmark, 2008; pp. 1–204. [Google Scholar]
  18. Floridi, L. The method of levels of abstraction. Mind Mach. 2008, 18, 303–329. [Google Scholar]
  19. Floridi, L. Information ethics: A reappraisal. Ethics Inform. Technol. 2008, 10, 189–204. [Google Scholar]
  20. Floridi, L. A defence of informational structural realism. Synthese 2008, 161, 219–253. [Google Scholar]
  21. Floridi, L. The information society and its philosophy: Introduction to the special issue on “the Philosophy of Information, its Nature, and future developments”. Inform. Soc. 2009, 25, 153–158. [Google Scholar]
  22. Floridi, L. Understanding information ethics: Replies to comments. APA Newslett. Philos. Comput. 2009, 8, 4–11. [Google Scholar]
  23. Turilli, M.; Floridi, L. The ethics of information transparency. Ethics Inform. Technol. 2009, 11, 105–112. [Google Scholar]
  24. Floridi, L. The Philosophy of Information as a Conceptual Framework. Knowl. Technol. Policy 2010, 23, 253–281. [Google Scholar]
  25. Floridi, L. The Cambridge Handbook of Information and Computer Ethics; Cambridge University Press: Cambridge, UK, 2010; pp. 1–344. [Google Scholar]
  26. Floridi, L. Information: A Very Short Introduction; Oxford University Press: Oxford, UK, 2010; pp. 1–152. [Google Scholar]
  27. Floridi, L. The Philosophy of Information; Oxford University Press: Oxford, UK, 2011; pp. 1–432. [Google Scholar]
  28. Allo, P. Putting information first: Luciano Floridi and the philosophy of information. Metaphilosophy 2010, 41, 247–253. [Google Scholar]
  29. Demir, H. The fourth revolution: Philosophical foundations and technological implications. Knowl. Technol. Policy. 2010, 23, 1–6. [Google Scholar]
  30. van Benthem, J.; van Rooy, R. Connecting the different faces of information. J. Logic Lang. Inform. 2003, 12, 375–379. [Google Scholar]
  31. Hjørland, B. The concept of information. Annu. Rev. Inform. Sci. Tech. 2003, 37, 343–411. [Google Scholar]
  32. Addis, T.; Visscher, B.F.; Billinge, D.; Gooding, D. Socially Sensitive Computing: A Necessary Paradigm Shift for Computer Science. In The Grand Challenge of Non-Classical Computation; CPHC: Newcastle, UK, 2005; pp. 1–19. [Google Scholar]
  33. Luhn, G. Towards an Ontology of Information and succeeding Fundamentals in Computing Science. Triple-C 2011, 9. (in print). [Google Scholar]
  34. Sommaruga, G. One or many concepts of information? Lect. Notes Comput. Sci 2009, 5363, 253–267. [Google Scholar]
  35. Marijuán, P.C.; Lin, S.K. Papers from the foundations of information science 2002 (FIS 2002). Entropy 2003, 5, 1–2. [Google Scholar]
  36. Díaz Nafria, M.J.; Salto Alemany, F. What is really information? An interdisciplinary approach. Triple C 2009, 7, i–vi. [Google Scholar]
  37. Burgin, M. Theory of Information: Fundamentality, Diversity and Unification; World Scientific: Singapore, 2010; pp. 1–400. [Google Scholar]
  38. Hofstadter, D. Metamagical Themas: Questing for the Essence of Mind and Pattern; Basic Books: New York, NY, USA, 1985; pp. 1–880. [Google Scholar]
  39. Dodig Crnkovic, G. The Cybersemiotics and Info-computationalist research programmes as platforms for knowledge production in organisms and machines. Entropy 2010, 12, 4878–4901. [Google Scholar]
  40. Lyre, H. Informationstheorie: Eine Philosophisch-Naturwissenschaftliche Einführung; Wilhelm Fink Verlag: Munich, Germany, 2002; pp. 1–279. [Google Scholar]
  41. von Baeyer, H. Information: The New Language of Science; Harvard University Press: Cambridge, MA, USA, 2004. [Google Scholar]
  42. Roederer, J. Information and Its Role in Nature; Springer: Berlin, Germany, 2005. [Google Scholar]
  43. Seife, C. Decoding the Universe: How the New Science of Information is Explaining Everything in the Cosmos, from Our Brains to Black Holes; Viking: New York, NY, USA, 2006. [Google Scholar]
  44. Dodig Crnkovic, G. Investigations into Information Semantics and Ethics of Computing; Mälardalen University Press: Västerås, Sweden, 2006; pp. 1–133. [Google Scholar]
  45. Muller, S. Asymmetry the Foundation of Information; Springer: New York, NY, USA, 2007. [Google Scholar]
  46. Brier, S. Cybersemiotics: Why Information Is Not Enough! University of Toronto Press: Toronto, Canada, 2008; pp. 1–544. [Google Scholar]
  47. Kauffman, S.; Logan, R.; Este, R.; Goebel, R.; Hobill, D.; Shmulevich, I. Propagating organization: An enquiry. Biol. Philos. 2008, 23, 27–45. [Google Scholar]
  48. Hofkirchner, W. A unified theory of information: An outline. Bitrumagora 2010, 64. [Google Scholar]
  49. Davies, P.; Gregersen, N.H. Information and the Nature of Reality from Physics to Metaphysics; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  50. Dodig Crnkovic, G.; Burgin, M. Information and Computation; World Scientific: Singapore, 2010; pp. 1–350. [Google Scholar]
  51. Schafranek, M. General System Theory. In Handbook of the Philosophy of Science, 10th ed.; Hooker, C., Gabbay, D.M., Thagard, P., Woods, J., Eds.; Elsevier: North Holland, The Netherlands, 2011. [Google Scholar]
  52. Capurro, R.; Fleissner, P.; Hofkirchner, W. Is a unified theory of information feasible? A trialogue. In The Quest for a Unified Theory of Information; Hofkirchner, W., Ed.; Gordon and Breach: Amsterdam, The Netherlands, 1999; pp. 9–30. [Google Scholar]
  53. Fleissner, P.; Hofkirchner, W. Emergent information. Towards a unified information theory. BioSystems 1996, 38, 243–248. [Google Scholar]
  54. Fleissner, P.; Hofkirchner, W. Actio non est Reactio: An extension of the concept of causality towards Phenomena of Information. In The Quest for a Unified Theory of Information; Hofkirchner, W., Ed.; Gordon and Breach: Amsterdam, The Netherlands, 1999; pp. 197–214. [Google Scholar]
  55. Dretske, F. Knowledge and the Flow of Information; Cambridge University Press: New York, NY, USA, 1999; pp. 1–288. [Google Scholar]
  56. Barwise, J.; Seligman, J. Information Flow: the Logic of Distributed Systems; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  57. van Benthem, J. Logical Dynamics of Information and Interaction; Cambridge University Press: Cambridge, UK, 2011; pp. 1–384. [Google Scholar]
  58. van Benthem, J. Logic and the dynamics of information. Mind Mach. 2003, 13, 503–519. [Google Scholar]
  59. Dodig Crnkovic, G.; Stuart, S. Computation, Information, Cognition: The Nexus and the Liminal; Cambridge Scholars Publishing: Newcastle, UK, 2007; pp. 1–380. [Google Scholar]
  60. Dodig Crnkovic, G. Information and Computation Nets: Investigations into Info-Computational World; Vdm Verlag: Saarbrucken, Germany, 2009; pp. 1–96. [Google Scholar]
  61. Dodig Crnkovic, G.; Mueller, V. A Dialogue Concerning Two World Systems: Info-Computational vs. Mechanistic. In Information and Computation; Dodig Crnkovic, G., Burgin, M., Eds.; World Scientific Publishing Co. Inc.: Singapore, 2009; pp. 149–184. [Google Scholar]
  62. Dodig Crnkovic, G. Biological Information and Natural Computation in Thinking Machines and the Philosophy of Computer Science: Concepts and Principles; Vallverdú, J., Ed.; Information Science Reference: Hershey, PA, USA, 2010. [Google Scholar]
  63. Dodig Crnkovic, G. The Cybersemiotics and Info-computationalist research programmes as platforms for knowledge production in organisms and machines. Entropy 2010, 12, 878–901. [Google Scholar]
  64. Dodig Crnkovic, G. Info-Computational Philosophy of Nature: An Informational Universe with Computational Dynamics. In Festschrift for Søren Brier; Sørensen, B., Cobley, P., Thellefsen, T., Eds.; CBS University Press: New York, NY, USA, 2011. (in review) [Google Scholar]
  65. Bohan Broderick, P. On Communication and Computation. Mind Mach. 2004, 14, 1–19. [Google Scholar]
  66. Burgin, M. Super-Recursive Algorithms Monographs; Springer-Verlag New York Inc.: New York, NY, USA, 2004; pp. 1–320. [Google Scholar]
  67. Babaoğlu, O. Self-Star Properties in Complex Information Systems Conceptual and Practical Foundations; Springer: New York, NY, USA, 2005. [Google Scholar]
  68. Hofkirchner, W. Twenty Questions about a Unified Theory of Information: A Short Exploration into Information from a Complex Systems View; Emergent Publications: Litchfield Park, AZ, USA, 2010. [Google Scholar]
  69. Scheutz, M. Computationalism New Directions; MIT Press: Cambridge, MA, USA, 2002; pp. 1–223. [Google Scholar]
  70. Kampis, G. Self-modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information, and Complexity, 1st ed.; Pergamon Press: Amsterdam, The Netherlands, 1991; pp. 1–564. [Google Scholar]
  71. Dodig Crnkovic, G. Significance of models of computation from turing model to natural computation. Mind. Mach. 2011, 21, 301–322. [Google Scholar]
  72. Brier, S. Cybersemiotics: An evolutionary world view going beyond entropy and information into the question of meaning. Entropy 2010, 12, 1902–1920. [Google Scholar]
  73. Hofkirchner, W. Emergent Information: An Outline Unified Theory of Information Framework; World Scientific Publishing Co.: Hackensack, NJ, USA, 2011. [Google Scholar]
  74. Hofkirchner, W. Four ways of thinking in information. Triple-C 2011, 9. (in press). [Google Scholar]
  75. Lloyd, S. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos, 1st ed.; Knopf: New York, NY, USA, 2006. [Google Scholar]
  76. Vedral, V. Decoding Reality: The Universe as Quantum Information; Oxford University Press: Oxford, UK, 2010; pp. 1–240. [Google Scholar]
  77. Harnad, S. The symbol grounding problem. Physica D 1991, 42, 335–346. [Google Scholar]
  78. Ziemke, T. Rethinking grounding. In Does Representation Need Reality? Proceedings of the International Conference ‘New Trends in Cognitive Science’ (NTCS'97), Vienna, Austria, May 1997.
  79. Smolensky, P.; Legendre, G. The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar; MIT Press: Cambridge, MA, USA, 2006; pp. 1–904. [Google Scholar]
  80. Mingers, J. Information and meaning: Foundations for an intersubjective account. Inform. Syst. J. 1995, 5, 285–306. [Google Scholar]
  81. Mingers, J. Embodying information systems: The contribution of phenomenology. Inform. Organ 2001, 11, 103–128. [Google Scholar]
  82. Dodig Crnkovic, G. Empirical modeling and information semantics. Mind Soc. 2008, 7, 157–166. [Google Scholar]
  83. Adriaans, P. A critical analysis of floridi's theory of semantic information. Knowl. Technol. Policy 2010, 23, 41–56. [Google Scholar]
  84. Chaitin, G. Epistemology as Information Theory: From Leibniz to Ω. In Computation, Information, Cognition-The Nexus and the Liminal; Dodig Crnkovic, G., Ed.; Cambridge Scholars Publishing: Newcastle, UK, 2007; pp. 2–17. [Google Scholar]
  85. Sequoiah-Grayson, S. The Metaphilosophy of Information. Mind. Mach. 2007, 17, 331–344. [Google Scholar]
  86. Dodig Crnkovic, G. Epistemology naturalized: The Info-Computationalist approach. APA Newslett. Philos. Comput. 2007, 6, 9–14. [Google Scholar]
  87. Salthe, S. Development (and evolution) of the universe. Found. Sci. 2010, 15, 357–367. [Google Scholar]
  88. Chaitin, G. Dijon Lecture. 2003. Available online: http://www.cs.auckland.ac.nz/CDMTCS/chaitin (accessed on 11 May 2011). [Google Scholar]
  89. Fallis, D. What do mathematicians want? Probabilistic proofs and the epistemic goals of mathematicians. Logique Anal. 2002, 45, 373–388. [Google Scholar]
  90. Allo, P. Logical pluralism and semantic information. J. Philos. Logic 2007, 36, 659–694. [Google Scholar]
  91. van Benthem, J. Logical pluralism meets logical dynamics? Aust. J. Logic 2008, 6, 182–209. [Google Scholar]
  92. Wang, Y. On Abstract intelligence: Toward a unifying theory of natural, artificial, machinable, and computational intelligence. Int. J. Software Sci. Comput. Intell. 2009, 1, 1–17. [Google Scholar]
  93. Wang, Y. Transactions on Computational Science V; Gavrilova, M.L., Tan, C.J.K., Wang, Y., Chan, K.C.C., Eds.; Springer: Berlin, Germany, 2009; pp. 1–19. [Google Scholar]
  94. Wang, Y. Toward a formal knowledge system theory and its cognitive informatics foundations. Trans. Comput. Sci. 2009, 5, 1–19. [Google Scholar]
  95. Wang, Y. On contemporary denotational mathematics for computational intelligence. Trans. Comput. Sci. 2008, 2, 6–29. [Google Scholar]
  96. Wang, Y.; Kinsner, W.; Zhang, D. Contemporary cybernetics and its facets of cognitive informatics and computational intelligence. IEEE Trans. Syst. Man Cybern. B 2009, 39, 823–833. [Google Scholar]
  97. Wang, Y. The theoretical framework of cognitive informatics. Int. J. Cognit. Inform. Nat. Intell. 2007, 1, 1–27. [Google Scholar]
  98. Maturana, H. Biology of Cognition; Defense Technical Information Center: Ft. Belvoir, VA, USA, 1970. [Google Scholar]
  99. Maturana, H.; Varela, F. Autopoiesis and Cognition: The Realization of the Living, 1st ed.; D. Reidel Publishing Co.: Dordrecht, The Netherlands, 1980. [Google Scholar]
  100. Wang, Y. On abstract intelligence: toward a unifying theory of natural, artificial, machinable, and computational intelligence. Int. J. Software Sci. Comput. Intell. 2009, 1, 1–17. [Google Scholar]
  101. Stonier, T. Information and Meaning: An Evolutionary Perspective; Springer: New York, NY, USA, 1997. [Google Scholar]
  102. Wolfram, S. A New Kind of Science; Wolfram Media: Champaign, IL, USA, 2002. [Google Scholar]
  103. Zenil, H. Randomness through Computation: Some Answers, More Questions; World Scientific: Singapore, 2011. [Google Scholar]
  104. Markram, H. The blue brain project. Nat. Rev. Neurosci. 2006, 7, 153–160. [Google Scholar]
  105. Hofkirchner, W. Projekt Eine Welt: Kognition-Kommunikation-Kooperation: Versuch über die Selbstorganisation der Informationsgesellschaft; Lit: Münster, Germany, 2002. [Google Scholar]
  106. Clark, A.; Chalmers, D. The extended mind. Analysis 1998, 58, 7–19. [Google Scholar]
  107. Oeser, E. Wissenschaft und Information: Systematische Grundlagen einer Theorie der Wissenschaftsentwicklung; Oldenbourg: Wien, Austria, 1976. [Google Scholar]
  108. Ladyman, J.; Ross, D.; Spurrett, D.; Collier, J. Everything Must Go: Metaphysics Naturalised; Clarendon Press: Oxford, UK, 2007; pp. 1–368. [Google Scholar]
  109. Zeilinger, A. The message of the quantum. Nature 2005, 438, 743. [Google Scholar]
  110. Kurakin, A. Scale-free flow of life: On the biology, economics, and physics of the cell. Theor. Biol. Med. Model. 2009, 6, 6. [Google Scholar]
  111. Froehlich, T. A Brief History of Information Ethics; Facultat de Biblioteconomia i Documentació: Universitat de Barcelona, Spain, 2004. [Google Scholar]
  112. Boltuc, P. APA Newsletters; Springer: New York, NY, USA, 2008; Volume 07, Number 2. [Google Scholar]
  113. Boltuc, P. APA Newsletters; Springer: New York, NY, USA, 2008; Volume 08, Number 1. [Google Scholar]
  114. Ess, C. Luciano Floridi's philosophy of information and information ethics: Critical reflections and the state of the art. Ethics Inform. Technol. 2008, 10, 89–96. [Google Scholar]
  115. Dodig Crnkovic, G. Floridi's Information Ethics as Macro-Ethics and Info-Computational Agent-Based Models. In Luciano Floridi's Philosophy of Technology: Critical Reflections—Philosophy and Engineering Series; Demir, H., Ed.; Springer: Berlin, Germany, 2011. [Google Scholar]
  116. Capurro, R. Towards a Comparative Theory of Agents. In Contributions to Angeletics; Rafael, C., John, H., Eds.; Fink Verlag: Munich, Germany, 2011. [Google Scholar]
  117. Hofkirchner, W. How to design the infosphere: The fourth revolution, the management of the life cycle of information, and information ethics as a macroethics. Knowl. Technol. Policy 2010, 23, 177–192. [Google Scholar]
  118. Dobzhansky, T. Biology, molecular and organismic. Am. Zool. 1964, 4, 443–452. [Google Scholar]
