Table of Contents

Entropy, Volume 14, Issue 10 (October 2012), Pages 1813-2035

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for published papers, which appear in both HTML and PDF forms. To view a paper in PDF format, click on its "PDF Full-text" link and open it with the free Adobe Reader.

Research


Open Access Article: Network Coding for Line Networks with Broadcast Channels
Entropy 2012, 14(10), 1813-1828; doi:10.3390/e14101813
Received: 26 July 2012 / Revised: 7 September 2012 / Accepted: 18 September 2012 / Published: 28 September 2012
PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
An achievable rate region for line networks with edge and node capacity constraints and broadcast channels (BCs) is derived. The region is shown to be the capacity region if the BCs are orthogonal, deterministic, physically degraded, or packet erasure with one-bit feedback. If the BCs are physically degraded with additive Gaussian noise, then independent Gaussian inputs achieve capacity. Full article
(This article belongs to the Special Issue Information Theory Applied to Communications and Networking)

Open Access Article: On Extracting Probability Distribution Information from Time Series
Entropy 2012, 14(10), 1829-1841; doi:10.3390/e14101829
Received: 15 August 2012 / Revised: 8 September 2012 / Accepted: 21 September 2012 / Published: 28 September 2012
Cited by 7 | PDF Full-text (698 KB) | HTML Full-text | XML Full-text
Abstract
Time series (TS) are employed in a variety of academic disciplines. In this paper we focus on extracting probability density functions (PDFs) from TS to gain insight into the underlying dynamic processes. In discussing this “extraction” problem, we consider two popular approaches that we identify as histograms and Bandt–Pompe. We use an information-theoretic method to objectively compare the information content of the concomitant PDFs. Full article
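As a rough illustration of the two approaches the abstract names, here is a minimal Python sketch (an editorial example, not the authors' code; the embedding dimension, bin count, and test signals are arbitrary choices) that extracts both a histogram PDF and a Bandt–Pompe ordinal-pattern PDF from a time series and compares their Shannon entropies:

```python
import numpy as np
from itertools import permutations

def histogram_pdf(x, bins=32):
    """PDF from an amplitude histogram (ignores temporal ordering)."""
    counts, _ = np.histogram(x, bins=bins)
    return counts / counts.sum()

def bandt_pompe_pdf(x, d=4):
    """PDF over ordinal (permutation) patterns of embedding dimension d,
    which encodes the temporal ordering of the samples."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    freqs = np.array(list(counts.values()), dtype=float)
    return freqs / freqs.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)            # white noise: maximal disorder
logistic = np.empty(5000)
logistic[0] = 0.4
for i in range(1, 5000):                 # fully chaotic logistic map
    logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])

for name, ts in (("white noise", noise), ("logistic map", logistic)):
    print(f"{name}: H_hist = {shannon_entropy(histogram_pdf(ts)):.3f}, "
          f"H_BP = {shannon_entropy(bandt_pompe_pdf(ts)):.3f}")
```

The ordinal-pattern PDF separates the deterministic logistic map from white noise, since chaotic dynamics forbid many orderings and lower the permutation entropy, whereas an amplitude histogram alone can look comparably "random" for both signals.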
Open Access Article: Maximum-Entropy Method for Evaluating the Slope Stability of Earth Dams
Entropy 2012, 14(10), 1864-1876; doi:10.3390/e14101864
Received: 11 June 2012 / Revised: 14 September 2012 / Accepted: 27 September 2012 / Published: 2 October 2012
Cited by 2 | PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
Slope stability is a very important problem in geotechnical engineering. This paper presents an approach to slope reliability analysis based on the maximum-entropy method. The key idea is to apply the maximum-entropy principle to estimate the probability density function (PDF) of the performance function. The performance function is formulated using the simplified Bishop method to estimate the slope failure probability, and the maximum-entropy method estimates its PDF subject to moment constraints. A numerical example is calculated and compared with Monte Carlo simulation (MCS) and the Advanced First Order Second Moment Method (AFOSM). The results show the accuracy and efficiency of the proposed method, which should be valuable for performing probabilistic analyses. Full article
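The paper's exact formulation is not reproduced here, but the core technique, maximum-entropy density estimation under moment constraints, can be sketched in a few lines of Python (an editorial example; the support, grid, and test moments are arbitrary assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_pdf(moments, support=(-5.0, 5.0), n_grid=2001):
    """Maximum-entropy density p(x) ~ exp(-sum_k lam_k x^k) matching the
    raw moments E[x^k], k = 1..N, on a bounded support, found by
    minimizing the convex dual: log Z(lam) + lam . m."""
    a, b = support
    x = np.linspace(a, b, n_grid)
    powers = np.vstack([x**k for k in range(1, len(moments) + 1)])
    m = np.asarray(moments, dtype=float)

    def dual(lam):
        z = np.exp(-lam @ powers)        # unnormalized density on the grid
        return np.log(np.trapz(z, x)) + lam @ m

    lam = minimize(dual, np.zeros(len(m)), method="BFGS").x
    p = np.exp(-lam @ powers)
    return x, p / np.trapz(p, x)         # normalized PDF on the grid

# Example: match the first four raw moments of a standard normal.
x, p = maxent_pdf([0.0, 1.0, 0.0, 3.0])
print(np.trapz(p, x), np.trapz(x**2 * p, x))   # both ~ 1.0
```

In a reliability setting, the failure probability would then be estimated by integrating the resulting PDF over the region where the performance function is negative.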
Open Access Article: Quantum Theory, Namely the Pure and Reversible Theory of Information
Entropy 2012, 14(10), 1877-1893; doi:10.3390/e14101877
Received: 19 June 2012 / Revised: 20 September 2012 / Accepted: 25 September 2012 / Published: 8 October 2012
Cited by 15 | PDF Full-text (498 KB) | HTML Full-text | XML Full-text
Abstract
After more than a century since its birth, Quantum Theory still eludes our understanding. If asked to describe it, we have to resort to abstract and ad hoc principles about complex Hilbert spaces. How is it possible that a fundamental physical theory cannot be described using the ordinary language of Physics? Here we offer a contribution to the problem from the angle of Quantum Information, providing a short non-technical presentation of a recent derivation of Quantum Theory from information-theoretic principles. The broad picture emerging from the principles is that Quantum Theory is the only standard theory of information that is compatible with the purity and reversibility of physical processes. Full article
Open Access Article: Utilizing the Exergy Concept to Address Environmental Challenges of Electric Systems
Entropy 2012, 14(10), 1894-1914; doi:10.3390/e14101894
Received: 29 August 2012 / Revised: 20 September 2012 / Accepted: 27 September 2012 / Published: 11 October 2012
PDF Full-text (349 KB) | HTML Full-text | XML Full-text
Abstract
Theoretically, the concepts of energy, entropy, exergy and embodied energy are founded in the fields of thermodynamics and physics. Yet, over decades these concepts have been applied in numerous fields of science and engineering, playing a key role in the analysis of processes, systems and devices in which energy transfers and energy transformations occur. The research reported here aims to demonstrate, in terms of sustainability, the usefulness of the embodied energy and exergy concepts for analyzing electric devices that convert energy, particularly the electromagnet. This study relies on a dualist view, incorporating technical and environmental dimensions. The information provided by energy assessments is shown to be less useful than that provided by exergy, and prone to be misleading. The electromagnet force and torque (representing the driving force of output exergy), accepted as both environmental and technical quantities, are expressed as a function of the electric current and the magnetic field, supporting the view of the necessity of discerning interrelations between science and the environment. This research suggests that a useful step in assessing the viability of electric devices in concert with ecological systems might be to view the magnetic flux density B and the electric current intensity I as environmental parameters. In line with this idea, the study encompasses an overview of potential human health risks and effects of extremely low frequency electromagnetic fields (ELF EMFs) caused by the operation of electric systems. It is concluded that exergy has a significant role to play in evaluating and increasing the efficiencies of electrical technologies and systems. This article also aims to demonstrate the need for joint efforts by researchers in electric and environmental engineering, and in medicine and health fields, to enhance knowledge of the impacts of environmental ELF EMFs on humans and other life forms. Full article
(This article belongs to the Special Issue Exergy: Analysis and Applications)
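A textbook point of reference (an editorial sketch, not taken from the paper) for the force–field relation the abstract mentions: the static pull force across an electromagnet pole face of area A, and the approximate field of its winding of N turns carrying current I over magnetic path length l, are

```latex
F \;=\; \frac{B^{2} A}{2\mu_{0}},
\qquad
B \;\approx\; \frac{\mu_{0}\,\mu_{r}\,N I}{l},
```

so the mechanical output the paper treats as exergy scales with the square of the flux density B and, through B, with the current I, the two quantities proposed above as environmental parameters.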
Open Access Article: Infrared Cloaking, Stealth, and the Second Law of Thermodynamics
Entropy 2012, 14(10), 1915-1938; doi:10.3390/e14101915
Received: 6 July 2012 / Revised: 8 September 2012 / Accepted: 20 September 2012 / Published: 15 October 2012
Cited by 3 | PDF Full-text (1683 KB) | HTML Full-text | XML Full-text
Abstract
Infrared signature management (IRSM) has been a primary aeronautical concern for over 50 years. Most strategies and technologies are limited by the second law of thermodynamics. In this article, IRSM is considered in light of theoretical developments over the last 15 years that have put the absolute status of the second law into doubt and that might open the door to a new class of broadband IR stealth and cloaking techniques. Following a brief overview of IRSM and its current thermodynamic limitations, theoretical and experimental challenges to the second law are reviewed. One proposal is treated in detail: a high power density, solid-state power source to convert thermal energy into electrical or chemical energy. Next, second-law based infrared signature management (SL-IRSM) strategies are considered for two representative military scenarios: an underground installation and a SL-based jet engine. It is found that SL-IRSM could be technologically disruptive across the full spectrum of IRSM modalities, including camouflage, surveillance, night vision, target acquisition, tracking, and homing. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
Open Access Article: Programming Unconventional Computers: Dynamics, Development, Self-Reference
Entropy 2012, 14(10), 1939-1952; doi:10.3390/e14101939
Received: 23 August 2012 / Revised: 8 October 2012 / Accepted: 9 October 2012 / Published: 17 October 2012
Cited by 10 | PDF Full-text (514 KB) | HTML Full-text | XML Full-text
Abstract
Classical computing has well-established formalisms for specifying, refining, composing, proving, and otherwise reasoning about computations. These formalisms have matured over the past 70 years or so. Unconventional Computing includes the use of novel kinds of substrates (from black holes and quantum effects, through to chemicals, biomolecules, even slime moulds) to perform computations that do not conform to the classical model. Although many of these unconventional substrates can be coerced into performing classical computation, this is not how they “naturally” compute. Our ability to exploit unconventional computing is partly hampered by a lack of corresponding programming formalisms: we need models for building, composing, and reasoning about programs that execute in these substrates. What might, say, a slime mould programming language look like? Here I outline some of the issues and properties of these unconventional substrates that need to be addressed to find “natural” approaches to programming them. Important concepts include embodied real values, processes and dynamical systems, generative systems and their meta-dynamics, and embodied self-reference. Full article
Open Access Article: Accelerating Universe and the Scalar-Tensor Theory
Entropy 2012, 14(10), 1997-2035; doi:10.3390/e14101997
Received: 18 September 2012 / Revised: 11 October 2012 / Accepted: 15 October 2012 / Published: 19 October 2012
PDF Full-text (587 KB) | HTML Full-text | XML Full-text
Abstract
To understand the accelerating universe discovered observationally in 1998, we develop the scalar-tensor theory of gravitation originally due to Jordan, extended only minimally. The unique role of the conformal transformation and frames is discussed particularly from a physical point of view. We show the theory to provide us with a simple and natural way of understanding the core of the measurements, Λ_obs ∼ t_0^{−2}, where Λ_obs is the observed value of the cosmological constant and t_0 today's age of the universe, both expressed in Planckian units. According to this scenario of a decaying cosmological constant, Λ_obs is this small only because we are old, not because we fine-tune the parameters. It also follows that the scalar field is simply the pseudo Nambu–Goldstone boson of broken global scale invariance, based on the way astronomers and astrophysicists measure the expansion of the universe in reference to the microscopic length units. A rather phenomenological trapping mechanism is assumed for the scalar field around the epoch of mini-inflation as observed, still maintaining the unmistakable behavior of the scenario stated above. Experimental searches for the scalar field, as light as ∼10^{−9} eV, as part of the dark energy, are also discussed. Full article
(This article belongs to the Special Issue Modified Gravity: From Black Holes Entropy to Current Cosmology)
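As a back-of-the-envelope check (an editorial note using standard values, not figures from the paper), the quoted relation does hold numerically in Planckian units:

```latex
% Age of the universe in Planck times (t_0 ~ 13.8 Gyr, t_Pl ~ 5.39e-44 s):
t_0 \approx \frac{4.35\times10^{17}\,\mathrm{s}}{5.39\times10^{-44}\,\mathrm{s}}
    \approx 8\times10^{60}
\qquad\Longrightarrow\qquad
t_0^{-2} \approx 1.5\times10^{-122},
% versus the observed \Lambda_{\rm obs} \approx 1.1\times10^{-52}\,\mathrm{m^{-2}}
% \times \ell_{\rm Pl}^2 \approx 3\times10^{-122} in Planck units.
```

which agrees with the observed Λ_obs ≈ 3 × 10^{−122} (Planck units) to within a factor of order unity.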

Review


Open Access Review: A Survey on Interference Networks: Interference Alignment and Neutralization
Entropy 2012, 14(10), 1842-1863; doi:10.3390/e14101842
Received: 20 July 2012 / Revised: 6 September 2012 / Accepted: 21 September 2012 / Published: 28 September 2012
Cited by 9 | PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, there has been rapid progress on understanding Gaussian networks with multiple unicast connections, and new coding techniques have emerged. The essence of multi-source networks is how to efficiently manage interference that arises from the transmission of other sessions. Classically, interference is removed by orthogonalization (in time or frequency). This means that the rate per session drops in inverse proportion to the number of sessions, suggesting that interference is a strong limiting factor in such networks. However, recently discovered interference management techniques have led to a paradigm shift: interference might not be quite as detrimental after all. The aim of this paper is to provide a review of these new coding techniques as they apply to the case of time-varying Gaussian networks with multiple unicast connections. Specifically, we review interference alignment and ergodic interference alignment for multi-source single-hop networks, and interference neutralization and ergodic interference neutralization for multi-source multi-hop networks. We mainly focus on the “degrees of freedom” perspective and also discuss an approximate capacity characterization. Full article
(This article belongs to the Special Issue Information Theory Applied to Communications and Networking)
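To make the orthogonalization-versus-alignment contrast concrete (an editorial note based on the standard degrees-of-freedom result for the K-user time-varying Gaussian interference channel, which this survey reviews): orthogonalization gives each of K sessions 1/K of the channel, whereas interference alignment achieves

```latex
\mathrm{DoF}_{\mathrm{sum}} \;=\; \frac{K}{2}
\qquad\Longrightarrow\qquad
\mathrm{DoF}_{\text{per session}} \;=\; \frac{1}{2},
```

so the per-session degrees of freedom stay at one half no matter how many sessions share the network.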

Open Access Review: Impaired Sulfate Metabolism and Epigenetics: Is There a Link in Autism?
Entropy 2012, 14(10), 1953-1977; doi:10.3390/e14101953
Received: 28 September 2012 / Revised: 16 October 2012 / Accepted: 16 October 2012 / Published: 18 October 2012
Cited by 7 | PDF Full-text (415 KB) | HTML Full-text | XML Full-text
Abstract
Autism is a brain disorder involving social, memory, and learning deficits that normally develops prenatally or early in childhood. Frustratingly, many research dollars have as yet failed to identify the cause of autism. While twin concordance studies indicate a strong genetic component, the alarming rise in the incidence of autism over the last three decades suggests that environmental factors play a key role as well. This dichotomy can be easily explained if we invoke a heritable epigenetic effect as the primary factor. Researchers are just beginning to realize the huge significance of epigenetic effects taking place during gestation in influencing phenotypic expression. Here, we propose the novel hypothesis that sulfate deficiency in both the mother and the child, brought on mainly by excess exposure to environmental toxins and inadequate sunlight exposure to the skin, leads to widespread hypomethylation in the fetal brain with devastating consequences. We show that many seemingly disparate observations regarding serum markers, neuronal pathologies, and nutritional deficiencies associated with autism can be integrated to support our hypothesis. Full article
(This article belongs to the Special Issue Biosemiotic Entropy: Disorder, Disease, and Mortality)
Open Access Review: Conformal Relativity versus Brans–Dicke and Superstring Theories
Entropy 2012, 14(10), 1978-1996; doi:10.3390/e14101978
Received: 20 August 2012 / Revised: 23 September 2012 / Accepted: 27 September 2012 / Published: 18 October 2012
Cited by 3 | PDF Full-text (316 KB) | HTML Full-text | XML Full-text
Abstract
We show how conformal relativity is related to Brans–Dicke theory and to low-energy-effective superstring theory. Conformal relativity, or the Hoyle–Narlikar theory, is invariant with respect to conformal transformations of the metric. We show that the conformal relativity action is equivalent to the transformed Brans–Dicke action for ω = −3/2 (which is the border between a standard scalar field and a ghost), in contrast to the reduced (graviton-dilaton) low-energy-effective superstring action, which corresponds to the Brans–Dicke action with ω = −1. We show that, as in ekpyrotic/cyclic models, the transition through the singularity in conformal cosmology in the string frame takes place in the weak coupling regime. We also find interesting self-duality and duality relations for the graviton-dilaton actions. Full article
(This article belongs to the Special Issue Modified Gravity: From Black Holes Entropy to Current Cosmology)
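For readers who want to see the ω being referred to (an editorial note quoting the standard Brans–Dicke action, not a formula taken from the paper):

```latex
S_{\mathrm{BD}} \;=\; \frac{1}{16\pi}\int d^{4}x\,\sqrt{-g}
\left( \varphi R \;-\; \frac{\omega}{\varphi}\,
g^{\mu\nu}\,\partial_{\mu}\varphi\,\partial_{\nu}\varphi \right)
\;+\; S_{\mathrm{matter}},
```

with ω = −3/2 recovering conformal relativity (the conformally coupled, borderline-ghost case) and ω = −1 the graviton-dilaton sector of the low-energy-effective superstring action, as the abstract states.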

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18