Special Issue "What Is Information?"

A special issue of Information (ISSN 2078-2489).

Deadline for manuscript submissions: closed (1 December 2010)

Special Issue Editor

Guest Editor
Dr. Mark Burgin

Department of Mathematics, UCLA, Box 951555, Los Angeles, CA 90095-1555, USA
Interests: information theory; communication theory and technology; algorithmic information; information science; theory of knowledge; information processing systems and technology; theory of algorithms, automata and computation; complexity; knowledge management; theory of technology; cognition and epistemology; software engineering; schema theory

Published Papers (10 papers)

Research

Open Access Article Finding Emotion-Laden Resources on the World Wide Web
Information 2011, 2(1), 217-246; doi:10.3390/info2010217
Received: 1 December 2010 / Revised: 24 January 2011 / Accepted: 18 February 2011 / Published: 2 March 2011
Cited by 2 | PDF Full-text (853 KB)
Abstract
Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR) and of our research is to identify knowledge about emotion-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution which follows either an inverse power law (only one emotion was recognized) or an inverse-logistic shape (two or three emotions were recognized). Both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, and a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows two separately ranked lists of items for each media type (depicted and felt emotions), displaying thumbnails of resources ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
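The power-tag separation the abstract describes can be sketched in code. The function and threshold below are illustrative assumptions, not MEMOSE's actual algorithm: the sketch simply keeps the tags whose vote counts dominate the distribution, standing in for the paper's power-law / inverse-logistic test.

```python
# Hypothetical sketch of MEMOSE-style "power tag" separation.
# The threshold rule is an assumption standing in for the paper's
# power-law / inverse-logistic distribution test.
def power_tags(tag_counts, threshold=0.5):
    """Keep the emotion tags whose vote counts dominate the folksonomy:
    every tag with at least `threshold` times the top tag's count."""
    ranked = sorted(tag_counts.items(), key=lambda kv: kv[1], reverse=True)
    top_count = ranked[0][1]
    return [tag for tag, count in ranked if count >= threshold * top_count]

# A vote distribution with two recognized emotions
# (the abstract's inverse-logistic case).
votes = {"joy": 120, "surprise": 95, "sadness": 8, "anger": 3}
print(power_tags(votes))  # ['joy', 'surprise']
```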
(This article belongs to the Special Issue What Is Information?)
Open Access Article Accuracy in Biological Information Technology Involves Enzymatic Quantum Processing and Entanglement of Decohered Isomers
Information 2011, 2(1), 166-194; doi:10.3390/info2010166
Received: 10 August 2010 / Revised: 5 January 2011 / Accepted: 3 February 2011 / Published: 25 February 2011
Cited by 4 | PDF Full-text (752 KB)
Abstract
Genetic specificity information “seen by” the transcriptase is in terms of hydrogen-bonded proton states, which initially are metastable amino (–NH₂) and, consequently, are subjected to quantum uncertainty limits. This introduces a probability of arrangement, keto-amino → enol-imine, where product protons participate in coupled quantum oscillations at frequencies of ~10¹³ s⁻¹ and are entangled. The enzymatic ket for the four G′-C′ coherent protons is |ψ⟩ = α|+ − + −⟩ + β|+ − − +⟩ + γ|− + + −⟩ + δ|− + − +⟩. Genetic specificities of superposition states are processed quantum mechanically, in an interval Δt ≪ 10⁻¹³ s, causing an additional entanglement between coherent protons and transcriptase units. The input qubit at G-C sites causes base substitution, whereas coherent states within A-T sites cause deletion. Initially decohered enol and imine G′ and *C isomers are “entanglement-protected” and participate in Topal-Fresco substitution-replication which, in the 2nd round of growth, reintroduces the metastable keto-amino state. Since experimental lifetimes of metastable keto-amino states at 37 °C are ≥ ~3000 y, approximate quantum methods for small times, t < ~100 y, yield the probability, P(t), of keto-amino → enol-imine as Pρ(t) = ½ (γρ/ħ)² t². This approximation introduces a quantum Darwinian evolution model which (a) simulates incidence of cancer data and (b) implies insight into quantum information origins for evolutionary extinction.
(This article belongs to the Special Issue What Is Information?)
Open Access Article On Quantifying Semantic Information
Information 2011, 2(1), 61-101; doi:10.3390/info2010061
Received: 17 November 2010 / Revised: 10 January 2011 / Accepted: 17 January 2011 / Published: 18 January 2011
Cited by 5 | PDF Full-text (368 KB)
Abstract
The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measure truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measure truthlikeness/information is presented and some supplementary points are made.
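As a hedged illustration of the truthlikeness idea the abstract discusses, the sketch below implements a simple Tichý-style measure over three atomic propositions. The propositions and the averaging scheme are assumptions chosen for illustration, not the paper's own proposal.

```python
# Hedged sketch of a Tichý-style truthlikeness measure; the atomic
# propositions and averaging scheme are illustrative assumptions.
def similarity(world, actual):
    """Fraction of atomic propositions on which a possible world
    agrees with the actual world."""
    return sum(a == b for a, b in zip(world, actual)) / len(actual)

def truthlikeness(worlds, actual):
    """Average similarity to the actual world over the set of worlds
    in which a statement is true."""
    return sum(similarity(w, actual) for w in worlds) / len(worlds)

actual = (True, True, True)  # e.g. "hot", "rainy", "windy"
print(truthlikeness([(True, True, True)], actual))     # 1.0 (the full truth)
print(truthlikeness([(False, False, False)], actual))  # 0.0 (maximally false)
```

On this measure, a false but nearly accurate statement can carry more semantic information than a true but vacuous one, which is the intuition driving truthlikeness-based quantification.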
(This article belongs to the Special Issue What Is Information?)
Open Access Article Empirical Information Metrics for Prediction Power and Experiment Planning
Information 2011, 2(1), 17-40; doi:10.3390/info2010017
Received: 8 October 2010 / Revised: 30 November 2010 / Accepted: 21 December 2010 / Published: 11 January 2011
PDF Full-text (453 KB)
Abstract
In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: information theory assumes the joint distribution of the variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: calculation of the metric must be objective or model-free, unbiased, convergent, probabilistically bounded, and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and the Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: Ie, the empirical information, a measure of average prediction power; Ib, the overfitting bias information, which measures selection bias in the modeling procedure; Ip, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and Im, the model information, which measures the model’s extrapolation prediction power. Finally, we show that Ip + Ie, Ip + Im, and Ie − Im are fixed constants for a given observed dataset (i.e., prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.
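The idea of measuring prediction power by sampling can be made concrete with a minimal sketch. The estimator below is an assumption in the spirit of the abstract's Ie (average prediction power with Law of Large Numbers convergence), not the paper's exact definition; the coin model and data are invented examples.

```python
import math
import random

# Hedged sketch in the spirit of the paper's empirical information Ie:
# the average log-probability a model assigns to observed outcomes,
# estimated by sampling. Model and data are invented examples.
def empirical_information(model_prob, observations, base=2):
    """Average log-probability (prediction power) per observation;
    converges by the Law of Large Numbers as observations grow."""
    total = sum(math.log(model_prob(x), base) for x in observations)
    return total / len(observations)

# A fair-coin model scored on fair-coin data.
random.seed(0)
tosses = [random.choice("HT") for _ in range(10000)]
ie = empirical_information(lambda outcome: 0.5, tosses)
print(round(ie, 6))  # -1.0: the model leaves one bit of uncertainty per toss
```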
(This article belongs to the Special Issue What Is Information?)
Open Access Article Information Operators in Categorical Information Spaces
Information 2010, 1(2), 119-152; doi:10.3390/info1020119
Received: 4 October 2010 / Revised: 28 October 2010 / Accepted: 29 October 2010 / Published: 18 November 2010
Cited by 2 | PDF Full-text (223 KB)
Abstract
The general theory of information (GTI) is a synthetic approach, which reveals the essence of information, organizing and encompassing all the main directions in information theory. On the methodological level, it is formulated as a system of principles explaining what information is and how to measure it. The goal of this paper is the further development of the mathematical stratum of the general theory of information based on category theory. Abstract categories allow us to construct flexible models for information and its flow. Category theory is now also used as a unifying framework for physics, biology, topology, and logic, as well as for the whole of mathematics, providing a base for analyzing physical and information systems and processes by means of categorical structures and methods. There are two types of representation of information dynamics, i.e., regularities of information processes, in categories: the categorical representation and the functorial representation. Here we study the categorical representations of information dynamics, which preserve the internal structures of information spaces associated with infological systems as their state/phase spaces. Various relations between information operators are introduced and studied in this paper. These relations describe intrinsic features of information, such as decomposition and complementarity of information, reflecting regularities of information processes.
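A minimal sketch of the categorical viewpoint, under the assumption that information operators can be modeled as composable functions on a toy "infological" state, here a set of known propositions. All names and the state representation are invented for illustration, not taken from the paper.

```python
# Minimal illustration: information operators as composable morphisms
# acting on a toy infological state (a set of known propositions).
def compose(f, g):
    """Sequential composition g∘f: apply operator f, then operator g."""
    return lambda state: g(f(state))

def add_fact(fact):
    """An elementary information operator: extend the state by one fact."""
    return lambda state: state | {fact}

op = compose(add_fact("p"), add_fact("q"))
print(sorted(op(set())))  # ['p', 'q']
```

The point of the categorical framing is that such operators form a structure closed under composition, so regularities of information processes can be stated as equations between operators rather than facts about particular states.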
(This article belongs to the Special Issue What Is Information?)
Open Access Article Information: A Conceptual Investigation
Information 2010, 1(2), 74-118; doi:10.3390/info1020074
Received: 1 July 2010 / Revised: 4 August 2010 / Accepted: 20 September 2010 / Published: 22 October 2010
Cited by 7 | PDF Full-text (282 KB)
Abstract
This paper is devoted to a study of the concept of information. We first situate the concept of information within the context of other philosophical concepts. However, an analysis of the concept of knowledge turns out to be the key when clarifying the concept of information. Our investigations produce the ‘missing link’ for the “severely neglected connection between theories of information and theories of knowledge” (Capurro/Hjørland). The results presented here clarify what information is and have the potential to provide answers to several of Floridi’s “open problems in the philosophy of information”.
(This article belongs to the Special Issue What Is Information?)
Open Access Article A Paradigm Shift in Biology?
Information 2010, 1(1), 28-59; doi:10.3390/info1010028
Received: 16 July 2010 / Accepted: 6 September 2010 / Published: 13 September 2010
Cited by 4 | PDF Full-text (908 KB)
Abstract
All new developments in biology deal with the issue of the complexity of organisms, often pointing out the necessity to update our current understanding. However, it is impossible to think about a change of paradigm in biology without introducing new explanatory mechanisms. I shall introduce the mechanisms of teleonomy and teleology as viable explanatory tools. Teleonomy is the ability of organisms to build themselves through internal forces and processes (in the expression of the genetic program) and not external ones, implying a freedom relative to the exterior; however, the organism is able to integrate internal and external constraints in a process of co-adaptation. Teleology is that mechanism through which an organism exercises an informational control on another system in order to establish an equivalence class and select some specific information for its metabolic needs. Finally, I shall examine some interesting processes in phylogeny, ontogeny, and epigeny in which these two mechanisms are involved.
(This article belongs to the Special Issue What Is Information?)
Open Access Article New Information Measures for the Generalized Normal Distribution
Information 2010, 1(1), 13-27; doi:10.3390/info1010013
Received: 24 June 2010 / Revised: 4 August 2010 / Accepted: 5 August 2010 / Published: 20 August 2010
Cited by 3 | PDF Full-text (576 KB)
Abstract
We introduce a three-parameter generalized normal distribution, which belongs to the Kotz type distribution family, to study the generalized entropy type measures of information. For this generalized normal, the Kullback-Leibler information is evaluated, which extends the well-known result for the normal distribution and plays an important role for the introduced generalized information measure. These generalized entropy type measures of information are also evaluated and presented.
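For orientation, the well-known normal-distribution result that the paper extends is the closed-form Kullback-Leibler divergence between two univariate normals. The sketch below implements that classical formula, not the paper's three-parameter generalization.

```python
import math

# Classical closed-form Kullback-Leibler divergence
# KL( N(mu0, sigma0^2) || N(mu1, sigma1^2) ) between two univariate
# normals -- the well-known result the paper extends.
def kl_normal(mu0, sigma0, mu1, sigma1):
    return (math.log(sigma1 / sigma0)
            + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2 * sigma1 ** 2)
            - 0.5)

print(kl_normal(0.0, 1.0, 0.0, 1.0))            # 0.0 for identical distributions
print(round(kl_normal(0.0, 1.0, 1.0, 1.0), 3))  # 0.5 nat for a unit mean shift
```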
(This article belongs to the Special Issue What Is Information?)

Review

Open Access Review Information as a Manifestation of Development
Information 2011, 2(1), 102-116; doi:10.3390/info2010102
Received: 4 November 2010 / Revised: 23 December 2010 / Accepted: 19 January 2011 / Published: 21 January 2011
Cited by 3 | PDF Full-text (162 KB)
Abstract
Information manifests a reduction in uncertainty or indeterminacy. As such it can emerge in two ways: by measurement, which involves the intentional choices of an observer; or more generally, by development, which involves systemically mutual (‘self-organizing’) processes that break symmetry. The developmental emergence of information is most obvious in ontogeny, but pertains as well to the evolution of ecosystems and abiotic dissipative structures. In this review, a seminal, well-characterized ontogenetic paradigm—the sea urchin embryo—is used to show how cybernetic causality engenders the developmental emergence of biological information at multiple hierarchical levels of organization. The relevance of information theory to developmental genomics is also discussed.
(This article belongs to the Special Issue What Is Information?)
Open Access Review Application of Information-Theoretic Concepts in Chemoinformatics
Information 2010, 1(2), 60-73; doi:10.3390/info1020060
Received: 1 September 2010 / Revised: 26 September 2010 / Accepted: 16 October 2010 / Published: 20 October 2010
Cited by 3 | PDF Full-text (335 KB)
Abstract
The use of computational methodologies for chemical database mining and molecular similarity searching or structure-activity relationship analysis has become an integral part of modern chemical and pharmaceutical research. These types of computational studies fall into the chemoinformatics spectrum and usually have large-scale character. Concepts from information theory such as Shannon entropy and Kullback-Leibler divergence have also been adopted for chemoinformatics applications. In this review, we introduce these concepts, describe their adaptations, and discuss exemplary applications of information theory to a variety of relevant problems. These include, among others, chemical feature (or descriptor) selection, database profiling, and compound recall rate predictions.
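The Shannon entropy concept the review adapts can be sketched for a molecular descriptor's value distribution, the kind of quantity used for chemical feature (descriptor) selection. The descriptor values below are invented for illustration.

```python
import math
from collections import Counter

# Hedged sketch: Shannon entropy of a molecular descriptor's value
# distribution over a compound set. The descriptor values are invented.
def shannon_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A descriptor that splits a compound set evenly is more informative
# (higher entropy) than a nearly constant one.
print(shannon_entropy([0, 1, 0, 1]))            # 1.0 bit
print(round(shannon_entropy([0, 0, 0, 1]), 3))  # 0.811 bits
```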
(This article belongs to the Special Issue What Is Information?)

Journal Contact

MDPI AG
Information Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
information@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18