What Is Information?

A special issue of Information (ISSN 2078-2489).

Deadline for manuscript submissions: closed (1 December 2010) | Viewed by 81382

Special Issue Editor

Prof. Dr. Mark Burgin
Department of Mathematics, University of California, Box 951555, Los Angeles, CA 90095, USA
Interests: information theory; communication theory and technology; algorithmic information; information science; theory of knowledge; information processing systems and technology; theory of algorithms, automata and computation; complexity; knowledge management; theory of technology; cognition and epistemology; software engineering; schema theory

Published Papers (10 papers)

Research

Article
Finding Emotional-Laden Resources on the World Wide Web
by Kathrin Knautz, Diane Rasmussen Neal, Stefanie Schmidt, Tobias Siebenlist and Wolfgang G. Stock
Information 2011, 2(1), 217-246; https://doi.org/10.3390/info2010217 - 02 Mar 2011
Cited by 31 | Viewed by 9579
Abstract
Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR) and of our research is to identify knowledge about emotional-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution which follows either an inverse power law (only one emotion was recognized) or an inverse-logistical shape (two or three emotions were recognized). Both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows two separately ranked lists of items for each media type (depicted and felt emotions), displaying thumbnails of resources, ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
(This article belongs to the Special Issue What Is Information?)
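
The power-tag separation described above can be illustrated with a short sketch. The following Python fragment is illustrative only, not the MEMOSE implementation: the example tag counts and the particular inverse-logistic parameterization are assumptions. It fits both candidate shapes to a ranked emotion-tag distribution and reports which shape fits better, following the abstract's rule that an inverse power law indicates one recognized emotion and an inverse-logistic shape indicates two or three.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(rank, c, a):
    # inverse power law: count ~ c * rank^(-a)
    return c * rank ** (-a)

def inverse_logistic(rank, c, a, b):
    # S-shaped ("inverse-logistic") decay; one plausible parameterization
    return c / (1.0 + np.exp(a * (rank - b)))

# hypothetical emotion-tag counts for one resource, ranked by frequency
counts = np.array([120.0, 85.0, 62.0, 14.0, 9.0, 5.0, 3.0, 2.0, 1.0, 1.0])
ranks = np.arange(1, len(counts) + 1, dtype=float)

p_pl, _ = curve_fit(power_law, ranks, counts, p0=[counts[0], 1.0], maxfev=10000)
p_il, _ = curve_fit(inverse_logistic, ranks, counts, p0=[counts[0], 1.0, 3.0], maxfev=10000)

sse_pl = float(np.sum((counts - power_law(ranks, *p_pl)) ** 2))
sse_il = float(np.sum((counts - inverse_logistic(ranks, *p_il)) ** 2))

# per the abstract: power law -> one emotion; inverse-logistic -> two or three
shape = "inverse power law (one emotion)" if sse_pl <= sse_il else "inverse-logistic (two or three emotions)"
print("best-fitting shape:", shape)
```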

Article
Accuracy in Biological Information Technology Involves Enzymatic Quantum Processing and Entanglement of Decohered Isomers
by Willis Grant Cooper
Information 2011, 2(1), 166-194; https://doi.org/10.3390/info2010166 - 25 Feb 2011
Cited by 33 | Viewed by 7030
Abstract
Genetic specificity information “seen by” the transcriptase is in terms of hydrogen bonded proton states, which initially are metastable amino (–NH2) and, consequently, are subjected to quantum uncertainty limits. This introduces a probability of arrangement, keto-amino → enol-imine, where product protons participate in coupled quantum oscillations at frequencies of ~10^13 s^−1 and are entangled. The enzymatic ket for the four G′-C′ coherent protons is |ψ⟩ = α|+−+−⟩ + β|+−−+⟩ + γ|−++−⟩ + δ|−+−+⟩. Genetic specificities of superposition states are processed quantum mechanically, in an interval Δt << 10^−13 s, causing an additional entanglement between coherent protons and transcriptase units. The input qubit at G-C sites causes base substitution, whereas coherent states within A-T sites cause deletion. Initially decohered enol and imine G′ and *C isomers are “entanglement-protected” and participate in Topal-Fresco substitution-replication which, in the 2nd round of growth, reintroduces the metastable keto-amino state. Since experimental lifetimes of metastable keto-amino states at 37 °C are ≥ ~3000 y, approximate quantum methods for small times, t < ~100 y, yield the probability, P(t), of keto-amino → enol-imine as P_ρ(t) = ½ (γ_ρ/ħ)^2 t^2. This approximation introduces a quantum Darwinian evolution model which (a) simulates incidence of cancer data and (b) implies insight into quantum information origins for evolutionary extinction.
(This article belongs to the Special Issue What Is Information?)
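
For readability, the superposition state and the small-time probability quoted in the abstract can be typeset as display formulas (a plain restatement in LaTeX of what the abstract already says, with no additional derivation):

```latex
% coherent four-proton state at a G'-C' site, as quoted in the abstract
\[
  |\psi\rangle \;=\; \alpha\,|{+}{-}{+}{-}\rangle + \beta\,|{+}{-}{-}{+}\rangle
                 + \gamma\,|{-}{+}{+}{-}\rangle + \delta\,|{-}{+}{-}{+}\rangle
\]
% small-time probability of the keto-amino -> enol-imine rearrangement
\[
  P_\rho(t) \;\approx\; \tfrac{1}{2}\!\left(\frac{\gamma_\rho}{\hbar}\right)^{2} t^{2},
  \qquad t \ll \text{the metastable-state lifetime } (\gtrsim 3000\ \mathrm{y}).
\]
```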

Article
On Quantifying Semantic Information
by Simon D’Alfonso
Information 2011, 2(1), 61-101; https://doi.org/10.3390/info2010061 - 18 Jan 2011
Cited by 22 | Viewed by 8462
Abstract
The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measure truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measure truthlikeness/information is presented and some supplementary points are made.
(This article belongs to the Special Issue What Is Information?)
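
As a toy illustration of the truthlikeness approach surveyed above, the sketch below scores a statement by the average closeness of its models to the actual world, in the spirit of average-distance proposals from the truthlikeness literature. It is not D'Alfonso's measure; the three-atom language and the example statements are assumptions made for the demonstration.

```python
from itertools import product

ATOMS = ("p", "q", "r")                               # a tiny propositional language
WORLDS = list(product([True, False], repeat=len(ATOMS)))

def similarity(w1, w2):
    # normalized agreement between two worlds: 1 - Hamming distance / number of atoms
    return sum(a == b for a, b in zip(w1, w2)) / len(ATOMS)

def truthlikeness(statement, actual):
    # average similarity of the statement's models to the actual world
    models = [w for w in WORLDS if statement(dict(zip(ATOMS, w)))]
    return sum(similarity(w, actual) for w in models) / len(models)

actual = (True, True, False)                          # the actual world: p, q true; r false

full_truth = lambda v: v["p"] and v["q"] and not v["r"]
partial    = lambda v: v["p"] and v["q"]              # true but less informative
near_miss  = lambda v: v["p"] and not v["q"] and not v["r"]

for name, s in [("full truth", full_truth), ("partial truth", partial), ("near miss", near_miss)]:
    print(f"{name}: {truthlikeness(s, actual):.3f}")  # 1.000, 0.833, 0.667
```
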
Article
Empirical Information Metrics for Prediction Power and Experiment Planning
by Christopher Lee
Information 2011, 2(1), 17-40; https://doi.org/10.3390/info2010017 - 11 Jan 2011
Cited by 19 | Viewed by 8440
Abstract
In principle, information theory could provide useful metrics for statistical inference. In practice this is impeded by divergent assumptions: Information theory assumes the joint distribution of variables of interest is known, whereas in statistical inference it is hidden and is the goal of inference. To integrate these approaches we note a common theme they share, namely the measurement of prediction power. We generalize this concept as an information metric, subject to several requirements: Calculation of the metric must be objective or model-free; unbiased; convergent; probabilistically bounded; and low in computational complexity. Unfortunately, widely used model selection metrics such as Maximum Likelihood, the Akaike Information Criterion and the Bayesian Information Criterion do not necessarily meet all these requirements. We define four distinct empirical information metrics measured via sampling, with explicit Law of Large Numbers convergence guarantees, which meet these requirements: I_e, the empirical information, a measure of average prediction power; I_b, the overfitting bias information, which measures selection bias in the modeling procedure; I_p, the potential information, which measures the total remaining information in the observations not yet discovered by the model; and I_m, the model information, which measures the model’s extrapolation prediction power. Finally, we show that I_p + I_e, I_p + I_m, and I_e − I_m are fixed constants for a given observed dataset (i.e., prediction target), independent of the model, and thus represent a fundamental subdivision of the total information contained in the observations. We discuss the application of these metrics to modeling and experiment planning.
(This article belongs to the Special Issue What Is Information?)
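
The sampling-based view of prediction power described above can be illustrated with a minimal sketch. This is a generic held-out log-probability estimator, not the paper's exact definitions of I_e, I_b, I_p and I_m; the toy distributions are assumptions. It shows the Law-of-Large-Numbers behavior the abstract refers to: the empirical average converges to a fixed model-dependent limit as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# hidden "true" distribution over four symbols, and a candidate predictive model
p_true  = np.array([0.50, 0.25, 0.15, 0.10])
p_model = np.array([0.40, 0.30, 0.20, 0.10])

def empirical_prediction_power(model, n_samples):
    # average log2-probability the model assigns to fresh observations;
    # by the Law of Large Numbers this converges to sum_i p_true[i] * log2(model[i])
    obs = rng.choice(len(model), size=n_samples, p=p_true)
    return float(np.mean(np.log2(model[obs])))

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: {empirical_prediction_power(p_model, n):.4f} bits/observation")

print("limit:", float(np.sum(p_true * np.log2(p_model))))
```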

Article
Information Operators in Categorical Information Spaces
by Mark Burgin
Information 2010, 1(2), 119-152; https://doi.org/10.3390/info1020119 - 18 Nov 2010
Cited by 57 | Viewed by 6300
Abstract
The general theory of information (GTI) is a synthetic approach, which reveals the essence of information, organizing and encompassing all main directions in information theory. On the methodological level, it is formulated as a system of principles explaining what information is and how to measure it. The goal of this paper is the further development of a mathematical stratum of the general theory of information based on category theory. Abstract categories allow us to construct flexible models for information and its flow. Category theory is now also used as a unifying framework for physics, biology, topology, and logic, as well as for the whole of mathematics, providing a base for analyzing physical and information systems and processes by means of categorical structures and methods. There are two types of representation of information dynamics, i.e., regularities of information processes, in categories: the categorical representation and the functorial representation. Here we study categorical representations of information dynamics, which preserve the internal structures of information spaces associated with infological systems as their state/phase spaces. Various relations between information operators are introduced and studied in this paper. These relations describe intrinsic features of information, such as decomposition and complementarity of information, reflecting regularities of information processes.
(This article belongs to the Special Issue What Is Information?)
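
As a toy illustration of the operator picture described above, the sketch below treats information operators as composable maps on the state space of a simple infological system (a knowledge base of facts) and checks the identity and associativity laws that make such operators behave like morphisms of a category. This is only an informal illustration under those assumptions, not Burgin's formal construction.

```python
# states of a toy infological system: a knowledge base represented as a frozenset of facts
def receive(fact):
    # information operator: incorporate a new item into the knowledge state
    return lambda s: frozenset(s | {fact})

def forget(fact):
    # information operator: remove an item from the knowledge state
    return lambda s: frozenset(s - {fact})

def compose(g, f):
    # operator composition, read as the morphism composition g after f
    return lambda s: g(f(s))

identity = lambda s: s

s0 = frozenset({"a"})
op = compose(forget("a"), receive("b"))        # first receive "b", then forget "a"
print(op(s0))                                  # frozenset({'b'})

# the categorical axioms hold for these operators on the example state
f, g, h = receive("b"), forget("a"), receive("c")
assert compose(identity, op)(s0) == op(s0) == compose(op, identity)(s0)
assert compose(h, compose(g, f))(s0) == compose(compose(h, g), f)(s0)
```
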
Article
Information: A Conceptual Investigation
by Wolfgang Lenski
Information 2010, 1(2), 74-118; https://doi.org/10.3390/info1020074 - 22 Oct 2010
Cited by 95 | Viewed by 8266
Abstract
This paper is devoted to a study of the concept of information. We first situate the concept of information within the context of other philosophical concepts. However, an analysis of the concept of knowledge turns out to be the key when clarifying the concept of information. Our investigations produce the ‘missing link’ for the “severely neglected connection between theories of information and theories of knowledge” (Capurro/Hjørland). The results presented here clarify what information is and have the potential to provide answers to several of Floridi’s “open problems in the philosophy of information”.
(This article belongs to the Special Issue What Is Information?)

Article
A Paradigm Shift in Biology?
by Gennaro Auletta
Information 2010, 1(1), 28-59; https://doi.org/10.3390/info1010028 - 13 Sep 2010
Cited by 126 | Viewed by 8986
Abstract
All new developments in biology deal with the issue of the complexity of organisms, often pointing out the necessity to update our current understanding. However, it is impossible to think about a change of paradigm in biology without introducing new explanatory mechanisms. I shall introduce the mechanisms of teleonomy and teleology as viable explanatory tools. Teleonomy is the ability of organisms to build themselves through internal forces and processes (in the expression of the genetic program) and not external ones, implying a freedom relative to the exterior; however, the organism is able to integrate internal and external constraints in a process of co-adaptation. Teleology is that mechanism through which an organism exercises an informational control on another system in order to establish an equivalence class and select some specific information for its metabolic needs. Finally, I shall examine some interesting processes in phylogeny, ontogeny, and epigeny in which these two mechanisms are involved.
(This article belongs to the Special Issue What Is Information?)

Article
New Information Measures for the Generalized Normal Distribution
by Christos P. Kitsos and Thomas L. Toulias
Information 2010, 1(1), 13-27; https://doi.org/10.3390/info1010013 - 20 Aug 2010
Cited by 14 | Viewed by 6313
Abstract
We introduce a three-parameter generalized normal distribution, which belongs to the Kotz-type distribution family, to study generalized entropy-type measures of information. For this generalized normal distribution, the Kullback-Leibler information is evaluated, which extends the well-known result for the normal distribution and plays an important role in the introduced generalized information measure. These generalized entropy-type measures of information are also evaluated and presented.
(This article belongs to the Special Issue What Is Information?)
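
For context, the well-known normal-distribution result that the abstract says is being extended is the Kullback-Leibler divergence between two univariate normal distributions (standard formula, quoted here for reference; the paper's three-parameter generalization is not reproduced):

```latex
\[
  \mathrm{KL}\bigl(\mathcal{N}(\mu_1,\sigma_1^2)\,\big\|\,\mathcal{N}(\mu_2,\sigma_2^2)\bigr)
  \;=\; \ln\frac{\sigma_2}{\sigma_1}
  \;+\; \frac{\sigma_1^{2} + (\mu_1-\mu_2)^{2}}{2\sigma_2^{2}}
  \;-\; \frac{1}{2}.
\]
```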

Review

Review
Information as a Manifestation of Development
by James A. Coffman
Information 2011, 2(1), 102-116; https://doi.org/10.3390/info2010102 - 21 Jan 2011
Cited by 15 | Viewed by 9246
Abstract
Information manifests a reduction in uncertainty or indeterminacy. As such it can emerge in two ways: by measurement, which involves the intentional choices of an observer; or more generally, by development, which involves systemically mutual (‘self-organizing’) processes that break symmetry. The developmental emergence of information is most obvious in ontogeny, but pertains as well to the evolution of ecosystems and abiotic dissipative structures. In this review, a seminal, well-characterized ontogenetic paradigm—the sea urchin embryo—is used to show how cybernetic causality engenders the developmental emergence of biological information at multiple hierarchical levels of organization. The relevance of information theory to developmental genomics is also discussed.
(This article belongs to the Special Issue What Is Information?)
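
The opening claim that information manifests a reduction in uncertainty can be made concrete with standard Shannon bookkeeping (a generic textbook example, not drawn from the review itself):

```latex
\[
  I \;=\; H(\text{before}) - H(\text{after}),
  \qquad
  H \;=\; -\sum_i p_i \log_2 p_i .
\]
% Example: observing the outcome of a fair coin toss reduces H from 1 bit to 0 bits,
% so the observation yields exactly 1 bit of information.
```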

Review
Application of Information-Theoretic Concepts in Chemoinformatics
by Martin Vogt, Anne Mai Wassermann and Jürgen Bajorath
Information 2010, 1(2), 60-73; https://doi.org/10.3390/info1020060 - 20 Oct 2010
Cited by 141 | Viewed by 7769
Abstract
The use of computational methodologies for chemical database mining and molecular similarity searching or structure-activity relationship analysis has become an integral part of modern chemical and pharmaceutical research. These types of computational studies fall into the chemoinformatics spectrum and usually have large-scale character. Concepts from information theory such as Shannon entropy and Kullback-Leibler divergence have also been adopted for chemoinformatics applications. In this review, we introduce these concepts, describe their adaptations, and discuss exemplary applications of information theory to a variety of relevant problems. These include, among others, chemical feature (or descriptor) selection, database profiling, and compound recall rate predictions.
(This article belongs to the Special Issue What Is Information?)
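
As a small sketch of the descriptor-selection use mentioned above, the fragment below ranks binary molecular descriptors by the Shannon entropy of their value distribution across a compound set, so that nearly constant (uninformative) bits score low. The toy fingerprint matrix is an assumption, and entropy ranking is a generic information-theoretic approach rather than the authors' specific protocol.

```python
import numpy as np

def bernoulli_entropy(p):
    # Shannon entropy in bits of a binary descriptor with "on" frequency p
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

# hypothetical binary fingerprint matrix: rows = compounds, columns = descriptors
fingerprints = np.array([
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
])

frequencies = fingerprints.mean(axis=0)        # fraction of compounds setting each bit
entropies = [bernoulli_entropy(p) for p in frequencies]

# descriptors with higher entropy discriminate better across the data set
for idx, h in sorted(enumerate(entropies), key=lambda t: t[1], reverse=True):
    print(f"descriptor {idx}: frequency {frequencies[idx]:.2f}, entropy {h:.3f} bits")
```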
