Entropy, Volume 19, Issue 7 (July 2017) – 87 articles

Cover Story: What happens when information is processed by applying a function f to a variable S? The data processing inequality says that Shannon's mutual information always decreases. But what about other aspects of information, such as redundant, unique, or synergistic information? This question has been widely discussed in the field of information decompositions. We present a construction that enforces such a monotonicity property on an information measure, and we show that the Blackwell property, which gives an operational meaning to unique information, is incompatible with monotonicity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Pretreatment and Wavelength Selection Method for Near-Infrared Spectra Signal Based on Improved CEEMDAN Energy Entropy and Permutation Entropy
by Xiaoli Li and Chengwei Li
Entropy 2017, 19(7), 380; https://doi.org/10.3390/e19070380 - 24 Jul 2017
Cited by 11 | Viewed by 5846
Abstract
The noise of near-infrared spectra and spectral information redundancy can affect the accuracy of calibration and prediction models in near-infrared analytical technology. To address this problem, the improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and permutation entropy (PE) were used to propose a new method for the pretreatment and wavelength selection of near-infrared spectra. The near-infrared spectra of a glucose solution were used as the research object: the improved CEEMDAN energy entropy was used to reconstruct the spectral data for noise removal, and the useful wavelengths were selected based on PE after spectral segmentation. First, the intrinsic mode functions of the original spectra were obtained by the improved CEEMDAN algorithm. The useful signal modes and noisy signal modes were then identified by the energy entropy, and the reconstructed spectral signal is the sum of the useful signal modes. Finally, the reconstructed spectra were segmented, and the wavelengths with abundant glucose information were selected based on PE. To evaluate the performance of the proposed method, support vector regression and partial least squares regression were used to build calibration models using the wavelengths selected by the new method, mutual information, the successive projection algorithm, principal component analysis, and the full spectral data. The models were evaluated by the correlation coefficient and the root mean square error of prediction. The experimental results showed that the improved CEEMDAN energy entropy can effectively reconstruct the near-infrared spectral signal and that PE can effectively address wavelength selection. The proposed method therefore improves the precision of spectral analysis and the stability of calibration models for near-infrared spectral analysis.
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
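Permutation entropy itself is inexpensive to compute, which is what makes it attractive for screening spectral segments. Below is a minimal Python sketch of the Bandt–Pompe estimator (not the authors' code; the embedding order and delay are illustrative defaults):

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Count the ordinal pattern (ranking) of every embedded vector.
    patterns = Counter(
        tuple(np.argsort(x[i:i + (order - 1) * delay + 1:delay]))
        for i in range(n)
    )
    p = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(p * np.log2(p))
    return h / np.log2(math.factorial(order)) if normalize else h

# White noise is patternless (PE near 1); a monotone ramp is fully ordered (PE = 0).
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(2000)))  # close to 1
print(permutation_entropy(np.arange(2000.0)))          # exactly 0
```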
Article
An Application of Pontryagin’s Principle to Brownian Particle Engineered Equilibration
by Paolo Muratore-Ginanneschi and Kay Schwieger
Entropy 2017, 19(7), 379; https://doi.org/10.3390/e19070379 - 24 Jul 2017
Cited by 10 | Viewed by 3939
Abstract
We present a stylized model of the controlled equilibration of a small system in a fluctuating environment. We derive the optimal control equations steering the system in finite time between two equilibrium states. The corresponding thermodynamic transition is optimal in the sense that it occurs at minimum entropy if the set of admissible controls is restricted by certain bounds on the time derivatives of the protocols. We apply our equations to the engineered equilibration of an optical trap considered in a recent proof-of-principle experiment. We also analyze an elementary model of nucleation previously considered by Landauer to discuss the thermodynamic cost of one bit of information erasure. We expect our model to be a useful benchmark for experiment design, as it exhibits the same integrability properties as well-known models of optimal mass transport by a compressible velocity field.
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Article
Cognition and Cooperation in Interfered Multiple Access Channels
by Jonathan Shimonovich, Anelia Somekh-Baruch and Shlomo Shamai (Shitz)
Entropy 2017, 19(7), 378; https://doi.org/10.3390/e19070378 - 24 Jul 2017
Cited by 3 | Viewed by 3279
Abstract
In this work, we investigate a three-user cognitive communication network where a primary two-user multiple access channel suffers interference from a secondary point-to-point channel sharing the same medium. While the point-to-point channel transmitter—transmitter 3—causes interference at the primary multiple access channel receiver, we assume that the primary channel transmitters—transmitters 1 and 2—do not cause any interference at the point-to-point receiver. It is assumed that one of the multiple access channel transmitters has cognitive capabilities and cribs causally from the other multiple access channel transmitter. Furthermore, we assume that the cognitive transmitter knows the message of transmitter 3 in a non-causal manner, thus introducing the three-user multiple access cognitive Z-interference channel. We obtain inner and outer bounds on the capacity region of this channel for both causal and strictly causal cribbing cognitive encoders. We further investigate different variations and aspects of the channel, referring to some previously studied cases. Attempting to better characterize the capacity region, we look at the vertex points of the capacity region where each one of the transmitters tries to achieve its maximal rate. Moreover, we find the capacity region of a special case of a certain kind of more-capable multiple access cognitive Z-interference channels. In addition, we study the case of full unidirectional cooperation between the two multiple access channel encoders. Finally, since direct cribbing allows full cognition in the case of continuous input alphabets, we study the case of partial cribbing, i.e., when the cribbing is performed via a deterministic function.
(This article belongs to the Special Issue Network Information Theory)
Article
Investigation of Oriented Magnetic Field Effects on Entropy Generation in an Inclined Channel Filled with Ferrofluids
by Elgiz Baskaya, Guven Komurgoz and Ibrahim Ozkol
Entropy 2017, 19(7), 377; https://doi.org/10.3390/e19070377 - 23 Jul 2017
Cited by 12 | Viewed by 4164
Abstract
The dispersion of superparamagnetic nanoparticles in nonmagnetic carrier fluids, known as ferrofluids, offers the advantages of tunable thermophysical properties and eliminates the need for moving parts to induce flow. This study investigates ferrofluid flow characteristics in an inclined channel under an inclined magnetic field and a constant pressure gradient. The ferrofluid considered in this work is comprised of Cu nanoparticles with water as the base fluid. The governing differential equations, including viscous dissipation, are non-dimensionalized and discretized with the Generalized Differential Quadrature Method, and the resulting algebraic set of equations is solved via the Newton–Raphson method. The work contributes to the literature by examining the effects of the magnetic field angle and the channel inclination separately on the entropy generation of the ferrofluid-filled inclined channel; entropy generation minimization is implemented in order to find the best design parameter values. Furthermore, the effects of the magnetic field, the inclination angle of the channel, and the volume fraction of nanoparticles on the velocity and temperature profiles are examined and represented in figures to give a thorough understanding of the system behavior.
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
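For orientation, entropy generation minimization in such channel flows starts from the local volumetric entropy generation rate. A generic textbook form for MHD flow with viscous dissipation is

\[
S_{\mathrm{gen}} = \frac{k}{T_0^{2}}\left(\frac{\partial T}{\partial y}\right)^{2}
+ \frac{\mu}{T_0}\left(\frac{\partial u}{\partial y}\right)^{2}
+ \frac{\sigma B_0^{2}\,u^{2}}{T_0},
\]

where the three terms account for heat conduction, viscous friction, and Joule (magnetic) dissipation, respectively. This is a reference form only; the paper's exact expression, in particular its dependence on the field and channel inclination angles, may differ.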
Article
A Novel Numerical Approach for a Nonlinear Fractional Dynamical Model of Interpersonal and Romantic Relationships
by Jagdev Singh, Devendra Kumar, Maysaa Al Qurashi and Dumitru Baleanu
Entropy 2017, 19(7), 375; https://doi.org/10.3390/e19070375 - 22 Jul 2017
Cited by 47 | Viewed by 4931
Abstract
In this paper, we propose a new numerical algorithm, namely the q-homotopy analysis Sumudu transform method (q-HASTM), to obtain the approximate solution of a nonlinear fractional dynamical model of interpersonal and romantic relationships. The suggested algorithm examines the dynamics of love affairs between couples. The q-HASTM is a creative combination of the Sumudu transform technique, the q-homotopy analysis method, and homotopy polynomials, which makes the calculation very easy. To benchmark the results obtained by q-HASTM, we solve the same nonlinear problem by Adomian's decomposition method (ADM). The convergence of the q-HASTM series solution for the model is adjusted and controlled by the auxiliary parameter ℏ and the asymptotic parameter n. The numerical results are presented graphically and in tabular form. The results reveal that the proposed scheme is accurate, effective, flexible, simple to apply, and computationally efficient.
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Article
Comparative Analysis of Jüttner’s Calculation of the Energy of a Relativistic Ideal Gas and Implications for Accelerator Physics and Cosmology
by John R. Fanchi
Entropy 2017, 19(7), 374; https://doi.org/10.3390/e19070374 - 22 Jul 2017
Cited by 3 | Viewed by 3744
Abstract
Jüttner used the conventional theory of relativistic statistical mechanics to calculate the energy of a relativistic ideal gas in 1911. An alternative derivation of the energy of a relativistic ideal gas was published by Horwitz, Schieve and Piron in 1981 within the context of parametrized relativistic statistical mechanics. The resulting energy in the ultrarelativistic regime differs from Jüttner's result. We review the derivations of energy and identify physical regimes for testing the validity of the two theories in accelerator physics and cosmology.
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
Article
Objective Weights Based on Ordered Fuzzy Numbers for Fuzzy Multiple Criteria Decision-Making Methods
by Dariusz Kacprzak
Entropy 2017, 19(7), 373; https://doi.org/10.3390/e19070373 - 21 Jul 2017
Cited by 23 | Viewed by 4241
Abstract
Fuzzy multiple criteria decision-making (FMCDM) methods are techniques for finding the trade-off option out of all feasible alternatives that are characterized by multiple criteria and whose data cannot be measured precisely but can be represented, for instance, by ordered fuzzy numbers (OFNs). One of the main steps in FMCDM methods consists of finding the appropriate criteria weights. A method based on the concept of Shannon entropy is one of many techniques for determining criteria weights when obtaining them from the decision-maker is not possible. The goal of this paper is to extend the notion of Shannon entropy to fuzzy data represented by OFNs. The proposed approach yields criteria weights as OFNs that are normalized and sum to 1.
(This article belongs to the Section Information Theory, Probability and Statistics)
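The crisp (non-fuzzy) Shannon-entropy weighting that the paper generalizes to OFNs can be stated in a few lines. A sketch with an illustrative 4-alternative by 3-criteria decision matrix; the fuzzy, OFN-valued version developed in the paper replaces these real numbers:

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = criteria (illustrative data).
X = np.array([[7.0, 0.4, 120.0],
              [5.0, 0.9,  80.0],
              [9.0, 0.6, 150.0],
              [6.0, 0.8, 100.0]])

m, n = X.shape
P = X / X.sum(axis=0)                          # column-wise normalization
E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion, in [0, 1]
d = 1.0 - E                                    # degree of diversification
w = d / d.sum()                                # objective criteria weights, sum to 1
print(np.round(w, 3))
```

A criterion whose values barely vary across alternatives has entropy near 1 and receives almost no weight, which is the intuition the OFN extension preserves.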
Article
Transfer Entropy for Nonparametric Granger Causality Detection: An Evaluation of Different Resampling Methods
by Cees Diks and Hao Fang
Entropy 2017, 19(7), 372; https://doi.org/10.3390/e19070372 - 21 Jul 2017
Cited by 10 | Viewed by 5908
Abstract
The information-theoretic concept of transfer entropy is an ideal measure for detecting conditional independence, or Granger causality, in a time series setting. The recent literature indeed witnesses an increased interest in applications of entropy-based tests in this direction. However, those tests are typically based on nonparametric entropy estimates for which the development of formal asymptotic theory turns out to be challenging. In this paper, we provide numerical comparisons of simulation-based tests to gain some insight into the statistical behavior of nonparametric transfer entropy-based tests. In particular, surrogate algorithms and smoothed bootstrap procedures are described and compared. We conclude with a financial application to the detection of spillover effects in the global equity market.
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
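For orientation, the test statistic itself, a plug-in estimate of lag-one transfer entropy on discretized series, is simple to compute; the resampling schemes compared in the paper are then used to approximate its null distribution. A minimal sketch (the quantile binning and single lags are illustrative choices, not the paper's estimator):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in TE: T_{X->Y} = sum p(y1,y0,x0) log[ p(y1|y0,x0) / p(y1|y0) ]."""
    def disc(z):  # discretize into (roughly) equiprobable bins
        edges = np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1])
        return np.searchsorted(edges, z)
    xd, yd = disc(np.asarray(x)), disc(np.asarray(y))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    n = sum(triples.values())
    p3 = {k: v / n for k, v in triples.items()}
    p2, pc, py0 = Counter(), Counter(), Counter()
    for (y1, y0, x0), p in p3.items():
        p2[(y0, x0)] += p
        pc[(y1, y0)] += p
    for (y0, x0), p in p2.items():
        py0[y0] += p
    return sum(p * np.log((p / p2[(y0, x0)]) / (pc[(y1, y0)] / py0[y0]))
               for (y1, y0, x0), p in p3.items())

# Example: y follows x with a one-step delay, so TE(x -> y) >> TE(y -> x).
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```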
Article
A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique
by Xiong Luo, Jing Deng, Weiping Wang, Jenq-Haur Wang and Wenbing Zhao
Entropy 2017, 19(7), 365; https://doi.org/10.3390/e19070365 - 20 Jul 2017
Cited by 41 | Viewed by 6131
Abstract
Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space, which achieves better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm was developed accordingly. However, MKRSL, as a traditional kernel adaptive filter (KAF) method, generates a growing radial basis function (RBF) network. In response to that limitation, and through the use of an online vector quantization (VQ) technique, this article proposes a novel KAF algorithm, named quantized MKRSL (QMKRSL), to curb the growth of the RBF network structure. Compared with other quantized methods, e.g., quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC), the efficient performance surface makes QMKRSL converge faster and filter more accurately, while maintaining robustness to outliers. Moreover, considering that QMKRSL with the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, in an effort to further improve filtering accuracy. Short-term chaotic time-series prediction experiments are conducted to demonstrate the satisfactory performance of our algorithms.
(This article belongs to the Special Issue Entropy in Signal Analysis)
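The online vector quantization step shared by QKLMS, QKMC and QMKRSL is easy to state: each new input either updates the coefficient of its nearest codebook entry (if within a quantization radius) or is appended as a new center. The sketch below uses the plain LMS update of QKLMS rather than the MKRSL loss, so it illustrates only the quantization mechanism; all parameter values are illustrative:

```python
import numpy as np

def qklms(X, d, eta=0.5, sigma=0.4, eps_q=0.3):
    """Quantized kernel LMS: codebook C, coefficients a, Gaussian kernel."""
    kernel = lambda U, v: np.exp(-np.sum((U - v) ** 2, axis=1) / (2 * sigma ** 2))
    C, a, preds = [X[0]], [eta * d[0]], [0.0]
    for x, y in zip(X[1:], d[1:]):
        Cm = np.asarray(C)
        y_hat = float(np.dot(a, kernel(Cm, x)))
        e = y - y_hat
        preds.append(y_hat)
        dist = np.linalg.norm(Cm - x, axis=1)
        j = int(np.argmin(dist))
        if dist[j] <= eps_q:
            a[j] += eta * e                  # merge into nearest center
        else:
            C.append(x); a.append(eta * e)   # grow the RBF network
    return np.asarray(preds), len(C)

# Example: learn y = sin(3u) online from scalar inputs.
rng = np.random.default_rng(2)
u = rng.uniform(-2, 2, size=(800, 1))
y = np.sin(3 * u[:, 0]) + 0.05 * rng.standard_normal(800)
preds, centers = qklms(u, y)
print("codebook size:", centers, "final MSE:",
      float(np.mean((preds[-100:] - y[-100:]) ** 2)))
```

Without the quantization branch the codebook would grow by one center per sample, which is exactly the structural growth QMKRSL is designed to curb.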
Article
Collaborative Service Selection via Ensemble Learning in Mixed Mobile Network Environments
by Yuyu Yin, Yueshen Xu, Wenting Xu, Min Gao, Lifeng Yu and Yujie Pei
Entropy 2017, 19(7), 358; https://doi.org/10.3390/e19070358 - 20 Jul 2017
Cited by 75 | Viewed by 4543
Abstract
Mobile service selection is an important but challenging problem in service and mobile computing, and quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as failing to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in mixed mobile network environments with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. There are two key ideas: one is a newly proposed similarity computation method for identifying similar neighbors; the other is an extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are proposed: two individual models, which exploit the similar neighbors of users and of services, respectively, and one combination model. Experiments conducted on two real-world datasets show that our approaches produce superior prediction accuracy.
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Article
Conformity, Anticonformity and Polarization of Opinions: Insights from a Mathematical Model of Opinion Dynamics
by Tyll Krueger, Janusz Szwabiński and Tomasz Weron
Entropy 2017, 19(7), 371; https://doi.org/10.3390/e19070371 - 19 Jul 2017
Cited by 42 | Viewed by 7350
Abstract
Understanding and quantifying polarization in social systems is important for many reasons. It could, for instance, help to avoid segregation and conflicts in society or to control polarized debates and predict their outcomes. In this paper, we present a version of the q-voter model of opinion dynamics with two types of response to social influence: conformity (as in the original q-voter model) and anticonformity. We put the model on a social network with the double-clique topology in order to check how the interplay between those responses impacts the opinion dynamics in a population divided into two antagonistic segments. The model is analyzed analytically, numerically, and by means of Monte Carlo simulations. Our results show that the system undergoes two bifurcations as the number of cross-links between cliques changes. Below the first critical point, consensus in the entire system is possible; thus, two antagonistic cliques may share the same opinion only if they are loosely connected. Above that point, the system ends up in a polarized state.
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
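A toy Monte Carlo sketch of the double-clique setting is given below; the update rule used here (unanimous q-panels, conformity toward panels from the agent's own clique, anticonformity toward panels from the other clique) is a simplified assumption chosen to convey the mechanism, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(3)
N, q, L, steps = 100, 4, 50, 50_000   # clique size, panel size, cross-links, updates

# Two complete cliques; L cross-links join random pairs across them.
nbrs = [set(range(N)) - {i} for i in range(N)] + \
       [set(range(N, 2 * N)) - {i} for i in range(N, 2 * N)]
for _ in range(L):
    i, j = rng.integers(N), N + rng.integers(N)
    nbrs[i].add(j); nbrs[j].add(i)
nbrs = [np.fromiter(s, dtype=int) for s in nbrs]

spins = np.ones(2 * N, dtype=int)
spins[N:] = -1                         # start from a polarized state
for _ in range(steps):
    i = rng.integers(2 * N)
    panel = rng.choice(nbrs[i], size=q, replace=False)
    if np.all(spins[panel] == spins[panel[0]]):          # unanimous panel
        same_clique = (panel < N).all() if i < N else (panel >= N).all()
        spins[i] = spins[panel[0]] if same_clique else -spins[panel[0]]

m1, m2 = spins[:N].mean(), spins[N:].mean()
print(f"clique magnetizations: {m1:+.2f}, {m2:+.2f}")   # opposite signs => polarization
```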
Article
Optimal Detection under the Restricted Bayesian Criterion
by Shujun Liu, Ting Yang and Hongqing Liu
Entropy 2017, 19(7), 370; https://doi.org/10.3390/e19070370 - 19 Jul 2017
Cited by 17 | Viewed by 3890
Abstract
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed that the maximum conditional risk cannot be greater than a predefined value. The objective of this paper thus becomes to find the optimal decision rule minimizing the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained one. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined constraint value is also discussed: the Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results, including a detection example, are presented and agree with the theoretical results.
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
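In symbols, writing R(θ, δ) for the conditional risk of decision rule δ and π for the partial prior, the problem treated here is

\[
\min_{\delta} \int R(\theta,\delta)\,\mathrm{d}\pi(\theta)
\quad\text{subject to}\quad
\max_{\theta} R(\theta,\delta) \le \alpha ,
\]

and, as the abstract states, Lagrange duality turns this constrained problem into an ordinary Bayes problem: the optimal rule is the classical Bayes rule with respect to a modified prior (loosely, π reweighted toward the least favorable parameters).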
Perspective
The History and Perspectives of Efficiency at Maximum Power of the Carnot Engine
by Michel Feidt
Entropy 2017, 19(7), 369; https://doi.org/10.3390/e19070369 - 19 Jul 2017
Cited by 38 | Viewed by 5009
Abstract
Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, earlier publications on the subject were rediscovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and Novikov (1957). The paper proposes a careful examination of the similarities and differences between these pioneering works and the consequences they had on the works that followed. The modelling of the Carnot engine was carried out in three steps, namely (1) modelling with time durations of the isothermal processes, as done by Curzon and Ahlborn; (2) modelling at a steady-state operating regime, for which time does not appear explicitly; and (3) modelling of transient conditions, which requires time to appear explicitly. Whatever the modelling method used, the subsequent optimization appears to be related to specific physical dimensions. The main goal of the methodology is to choose the objective function, which here is the power, and to define the associated constraints. We propose a specific approach, focusing on the main functions that respond to engineering requirements. The study of the Carnot engine illustrates the synthesis carried out and proves that the primary interest for an engineer is mainly connected to what we call Finite (physical) Dimensions Optimal Thermodynamics, including time in the case of transient modelling.
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
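For reference, the Curzon–Ahlborn result at the center of this history: an endoreversible Carnot engine exchanging heat linearly (Newton's law of cooling) with reservoirs at temperatures $T_h > T_c$ delivers maximum power at the efficiency

\[
\eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}},
\]

to be contrasted with the reversible Carnot limit $\eta_C = 1 - T_c/T_h$, which is attained only at zero power.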
Article
Nonequilibrium Entropy in a Shock
by L.G. Margolin
Entropy 2017, 19(7), 368; https://doi.org/10.3390/e19070368 - 19 Jul 2017
Cited by 15 | Viewed by 4481
Abstract
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper, I evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I show that both monotonically increase in the shock. I then propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
(This article belongs to the Special Issue Entropy, Time and Evolution)
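The kinetic-theory entropy referred to here is the standard Boltzmann functional: for a single-particle velocity distribution f(v), the entropy density is

\[
s = -k_B \int f(\mathbf v)\,\ln f(\mathbf v)\;\mathrm d^3 v ,
\]

which reduces to the equilibrium thermodynamic entropy when f is Maxwellian; the paper evaluates this functional on the PDF underlying the Navier–Stokes equations and on the Mott–Smith PDF.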
Article
Reliable Approximation of Long Relaxation Timescales in Molecular Dynamics
by Wei Zhang and Christof Schütte
Entropy 2017, 19(7), 367; https://doi.org/10.3390/e19070367 - 18 Jul 2017
Cited by 10 | Viewed by 5277
Abstract
Many interesting rare events in molecular systems, like ligand association, protein folding, or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have recently attracted considerable attention. In this article, we analyze the reliability of such approaches: how precise is an estimate of long relaxation timescales of molecular systems resulting from the various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it to the transfer operator approach to molecular dynamics. In doing so, we also expose deep connections between the different approaches.
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
Article
Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis
by Seyyed Mohammadreza Azimi, Osvaldo Simeone and Ravi Tandon
Entropy 2017, 19(7), 366; https://doi.org/10.3390/e19070366 - 18 Jul 2017
Cited by 8 | Viewed by 3841
Abstract
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load on macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered, in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting the cloud and the small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of online and offline caching is finally compared using numerical results.
(This article belongs to the Special Issue Network Information Theory)
Article
Testing the Interacting Dark Energy Model with Cosmic Microwave Background Anisotropy and Observational Hubble Data
by Weiqiang Yang, Lixin Xu, Hang Li, Yabo Wu and Jianbo Lu
Entropy 2017, 19(7), 327; https://doi.org/10.3390/e19070327 - 17 Jul 2017
Cited by 8 | Viewed by 4817
Abstract
The coupling between dark energy and dark matter provides a possible approach to mitigate the coincidence problem of the cosmological standard model. In this paper, we assumed the interaction term was related to the Hubble parameter, the energy density of dark energy, and the equation of state of dark energy. The interaction rate between dark energy and dark matter was a constant parameter, namely $Q = 3H\xi(1+w_x)\rho_x$. Based on the Markov chain Monte Carlo method, we performed a global fit of the interacting dark energy model to Planck 2015 cosmic microwave background anisotropy and observational Hubble data. We found that the observational data sets slightly favored a small interaction rate between dark energy and dark matter; however, there was no obvious evidence of interaction at the $1\sigma$ level.
(This article belongs to the Special Issue Dark Energy)
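Under a coupling of this form, the background continuity equations take the usual interacting form. With the sign convention that $Q > 0$ transfers energy from dark energy (subscript $x$) to cold dark matter (subscript $c$), a convention that varies between papers, they read

\[
\dot\rho_x + 3H(1+w_x)\rho_x = -Q, \qquad
\dot\rho_c + 3H\rho_c = Q, \qquad
Q = 3H\xi(1+w_x)\rho_x .
\]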
Article
Scaling Exponent and Moderate Deviations Asymptotics of Polar Codes for the AWGN Channel
by Silas L. Fong and Vincent Y. F. Tan
Entropy 2017, 19(7), 364; https://doi.org/10.3390/e19070364 - 15 Jul 2017
Cited by 6 | Viewed by 3453
Abstract
This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent $\mu$ of polar codes for a memoryless channel $q_{Y|X}$ with capacity $I(q_{Y|X})$ characterizes the closest gap between the capacity and non-asymptotic achievable rates as follows: for a fixed $\varepsilon \in (0,1)$, the gap between the capacity $I(q_{Y|X})$ and the maximum non-asymptotic rate $R_n^*$ achieved by a length-$n$ polar code with average error probability $\varepsilon$ scales as $n^{-1/\mu}$, i.e., $I(q_{Y|X}) - R_n^* = \Theta(n^{-1/\mu})$. It is well known that the scaling exponent $\mu$ for any binary-input memoryless channel (BMC) with $I(q_{Y|X}) \in (0,1)$ is bounded above by 4.714. Our main result shows that 4.714 remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of $O(n^{-1/\mu}\log n)$ by using an input alphabet consisting of $n$ constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of $n$ constellations can be achieved within a gap of $O(n^{-1/\mu}\log n)$ by using a superposition of $\log n$ binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime, where both the gap to capacity and the error probability vanish as $n$ grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel.
(This article belongs to the Special Issue Multiuser Information Theory)
Article
Competitive Sharing of Spectrum: Reservation Obfuscation and Verification Strategies
by Andrey Garnaev and Wade Trappe
Entropy 2017, 19(7), 363; https://doi.org/10.3390/e19070363 - 15 Jul 2017
Cited by 9 | Viewed by 4273
Abstract
Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands they may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn about and harm the bands of the other system. Three problems arise in such a spectrum-sharing setting: (1) how to maintain reliable performance for each system's shared resource (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they are not believed, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agents or spectrum server introduce obfuscation in the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and using this formula we find a tradeoff between the amount of extra decoys that must be used in order to support higher communication fidelity against potential interference, and the cost of maintaining this reliability. We then examine a scenario where a smart adversary may also use obfuscation itself, and formulate the scenario as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important particular case, the game is solved in closed form, which gives conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely.
(This article belongs to the Special Issue Information-Theoretic Security)
Article
A Survey on Robust Interference Management in Wireless Networks
by Jaber Kakar and Aydin Sezgin
Entropy 2017, 19(7), 362; https://doi.org/10.3390/e19070362 - 14 Jul 2017
Cited by 23 | Viewed by 5703
Abstract
Recent advances in the characterization of fundamental limits on interference management in wireless networks, and the discovery of new communication schemes on how to handle interference, have led to a better understanding of the capacity of such networks. The benefits in terms of achievable rates of powerful schemes handling interference, such as interference alignment, are substantial. However, the main issue behind most of these results is the assumption of perfect channel state information at the transmitters (CSIT). In the absence of channel knowledge, the performance of various interference networks collapses to what is achievable by time division multiple access (TDMA). Robust interference management techniques are promising solutions for maintaining high achievable rates at various levels of CSIT, ranging from delayed to imperfect CSIT. In this survey, we outline and study two main research perspectives on how to robustly handle interference when CSIT is imprecise, using examples of non-distributed and distributed networks, namely the broadcast channel and the X-channel. To quantify the performance of these schemes, we use the well-known (generalized) degrees of freedom (GDoF) metric as the pre-log factor of achievable rates. These perspectives maintain the capacity benefits at levels similar to those with perfect channel knowledge. The two perspectives are: first, scheme adaptation, which explicitly accounts for the level of channel knowledge; and second, relay-aided infrastructure enlargement, which decreases the dependency on channel knowledge. The relaxation of CSIT requirements through these perspectives will ultimately lead to practical realizations of robust interference management techniques. The survey concludes with a discussion of open problems.
(This article belongs to the Special Issue Network Information Theory)
Article
Estimating Mixture Entropy with Pairwise Distances
by Artemy Kolchinsky and Brendan D. Tracey
Entropy 2017, 19(7), 361; https://doi.org/10.3390/e19070361 - 14 Jul 2017
Cited by 69 | Viewed by 8071 | Correction
Abstract
Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove that this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then demonstrate, using numerical simulations, that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate distortion problems.
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
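For one-dimensional Gaussian components, both ends of the proposed family have closed forms, since the estimator is $\hat H_D = \sum_i w_i H(p_i) - \sum_i w_i \ln \sum_j w_j e^{-D(p_i\|p_j)}$ and both the KL and Bhattacharyya distances between Gaussians are analytic. A sketch with illustrative mixture parameters:

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])    # mixture weights (illustrative)
mu = np.array([0.0, 2.0, 5.0])   # component means
s2 = np.array([1.0, 0.5, 2.0])   # component variances

def pairwise_entropy_bound(D):
    """H_D = sum_i w_i H(p_i) - sum_i w_i ln sum_j w_j exp(-D_ij)."""
    H_comp = 0.5 * np.log(2 * np.pi * np.e * s2)   # Gaussian component entropies
    return np.dot(w, H_comp) - np.dot(w, np.log(np.exp(-D) @ w))

# KL(p_i || p_j): using it as the distance yields an UPPER bound.
KL = (0.5 * np.log(s2[None, :] / s2[:, None])
      + (s2[:, None] + (mu[:, None] - mu[None, :]) ** 2) / (2 * s2[None, :]) - 0.5)

# Bhattacharyya distance: yields a LOWER bound.
sbar = s2[:, None] + s2[None, :]
BD = ((mu[:, None] - mu[None, :]) ** 2 / (4 * sbar)
      + 0.5 * np.log(sbar / (2 * np.sqrt(s2[:, None] * s2[None, :]))))

print("lower bound:", pairwise_entropy_bound(BD))
print("upper bound:", pairwise_entropy_bound(KL))
```

When all components coincide, every pairwise distance is zero and both bounds collapse to the exact mixture entropy, illustrating the "exact when clustered" property.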
Article
Extracting Knowledge from the Geometric Shape of Social Network Data Using Topological Data Analysis
by Khaled Almgren, Minkyu Kim and Jeongkyu Lee
Entropy 2017, 19(7), 360; https://doi.org/10.3390/e19070360 - 14 Jul 2017
Cited by 13 | Viewed by 7077
Abstract
Topological data analysis is a novel approach to extracting meaningful information from high-dimensional data that is robust to noise. It is based on topology, which aims to study the geometric shape of data. In order to apply topological data analysis, an algorithm called mapper is adopted. The output from mapper is a simplicial complex that represents a set of connected clusters of data points. In this paper, we explore the feasibility of topological data analysis for mining social network data by addressing the problem of image popularity. We randomly crawl images from Instagram and analyze the effects of social context and image content on an image's popularity using mapper. Mapper clusters the images using each feature, and the ratio of popularity in each cluster is computed to determine the clusters with a high or low possibility of popularity. Then, the popularity of images is predicted to evaluate the accuracy of topological data analysis. This approach is further compared with traditional clustering algorithms, including k-means and hierarchical clustering, in terms of accuracy, and the results show that topological data analysis outperforms the others. Moreover, topological data analysis provides meaningful information based on the connectivity between the clusters.
(This article belongs to the Special Issue Information Geometry II)
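A minimal version of the mapper construction (filter, overlapping interval cover, per-bin clustering, nerve graph) fits in a few dozen lines; the filter choice, cover parameters, and threshold-based clustering below are simplified illustrative choices, not the authors' pipeline:

```python
import numpy as np
from itertools import combinations

def mapper(X, f, n_bins=6, overlap=0.3, eps=0.5):
    """Toy mapper: 1-D filter f, overlapping interval cover, clustering by
    eps-neighborhood connected components, nerve = clusters sharing points."""
    lo, hi = f.min(), f.max()
    width = (hi - lo) / n_bins
    nodes = []                                  # each node = set of point indices
    for b in range(n_bins):
        a = lo + b * width - overlap * width
        c = lo + (b + 1) * width + overlap * width
        remaining = set(np.where((f >= a) & (f <= c))[0])
        while remaining:                        # connected components in the bin
            comp, frontier = set(), {remaining.pop()}
            while frontier:
                p = frontier.pop(); comp.add(p)
                near = {q for q in remaining
                        if np.linalg.norm(X[p] - X[q]) <= eps}
                remaining -= near; frontier |= near
            nodes.append(comp)
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]            # shared points => nerve edge
    return nodes, edges

# Example: a noisy circle; the output graph is itself (roughly) a cycle.
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((400, 2))
nodes, edges = mapper(X, f=X[:, 0])
print(len(nodes), "clusters,", len(edges), "edges")
```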
Article
Non-Linear Stability Analysis of Real Signals from Nuclear Power Plants (Boiling Water Reactors) Based on Noise Assisted Empirical Mode Decomposition Variants and the Shannon Entropy
by Omar Alejandro Olvera-Guerrero, Alfonso Prieto-Guerrero and Gilberto Espinosa-Paredes
Entropy 2017, 19(7), 359; https://doi.org/10.3390/e19070359 - 14 Jul 2017
Cited by 8 | Viewed by 7232
Abstract
There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The current parameter used to assess BWR instability issues is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical systems that may even exhibit chaotic dynamics, which normally preclude the use of the DR when the BWR is working at a specific operating point during instability. In this work, a novel methodology based on an adaptive Shannon entropy estimator and on Noise Assisted Empirical Mode Decomposition variants is presented. This methodology was developed for the real-time implementation of a stability monitor. It was applied to a set of signals stemming from several NPP reactors (Ringhals, Sweden; Forsmark, Sweden; and Laguna Verde, Mexico) under commercial operating conditions that experienced instability events, each one of a different nature.
(This article belongs to the Special Issue Entropy in Signal Analysis)
Article
Cosmological Time, Entropy and Infinity
by Clémentine Hauret, Pierre Magain and Judith Biernaux
Entropy 2017, 19(7), 357; https://doi.org/10.3390/e19070357 - 14 Jul 2017
Cited by 9 | Viewed by 5984
Abstract
Time is a parameter playing a central role in our most fundamental modelling of natural laws. Relativity theory shows that the comparison of times measured by different clocks depends on their relative motion and on the strength of the gravitational field in which they are embedded. In standard cosmology, the time parameter is the one measured by fundamental clocks (i.e., clocks at rest with respect to the expanding space). This proper time is assumed to flow at a constant rate throughout the whole history of the universe. We make the alternative hypothesis that the rate at which the cosmological time flows depends on the dynamical state of the universe. In thermodynamics, the arrow of time is strongly related to the second law, which states that the entropy of an isolated system will always increase with time or, at best, stay constant. Hence, we assume that the time measured by fundamental clocks is proportional to the entropy of the region of the universe that is causally connected to them. Under that simple assumption, we find it possible to build toy cosmological models that present an acceleration of their expansion without any need for dark energy while being spatially closed and finite, avoiding the need to deal with infinite values.
(This article belongs to the Special Issue Entropy, Time and Evolution)
Article
Clausius Relation for Active Particles: What Can We Learn from Fluctuations
by Andrea Puglisi and Umberto Marini Bettolo Marconi
Entropy 2017, 19(7), 356; https://doi.org/10.3390/e19070356 - 13 Jul 2017
Cited by 44 | Viewed by 4575
Abstract
Many kinds of active particles, such as bacteria or active colloids, move in a thermostatted fluid by means of self-propulsion. Energy injected by such a non-equilibrium force is eventually dissipated as heat in the thermostat. Since thermal fluctuations are much faster and weaker than self-propulsion forces, they are often neglected, blurring the identification of dissipated heat in theoretical models. For the same reason, some freedom—or arbitrariness—appears when defining entropy production. Recently three different recipes to define heat and entropy production have been proposed for the same model where the role of self-propulsion is played by a Gaussian coloured noise. Here we compare and discuss the relation between such proposals and their physical meaning. One of these proposals takes into account the heat exchanged with a non-equilibrium active bath: such an “active heat” satisfies the original Clausius relation and can be experimentally verified.
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Article
Some Remarks on Classical and Classical-Quantum Sphere Packing Bounds: Rényi vs. Kullback–Leibler
by Marco Dalai
Entropy 2017, 19(7), 355; https://doi.org/10.3390/e19070355 - 12 Jul 2017
Cited by 8 | Viewed by 3449
Abstract
We review the use of binary hypothesis testing for the derivation of the sphere packing bound in channel coding, pointing out a key difference between the classical and the classical-quantum setting. In the first case, two ways of using binary hypothesis testing are known, which lead to the same bound written in different analytical expressions. The first method, historically, compares the output distributions induced by the codewords with an auxiliary fixed output distribution, and naturally leads to an expression using the Rényi divergence. The second method compares the given channel with an auxiliary one and leads to an expression using the Kullback–Leibler divergence. In the classical-quantum case, due to a fundamental difference in quantum binary hypothesis testing, these two approaches lead to two different bounds, the first being the “right” one. We discuss the details of this phenomenon, which suggests the question of whether auxiliary channels are used in the optimal way in the second approach, and whether recent results on the exact strong-converse exponent in classical-quantum channel coding might play a role in the considered problem.
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
Study on the Business Cycle Model with Fractional-Order Time Delay under Random Excitation
by Zifei Lin, Wei Xu, Jiaorui Li, Wantao Jia and Shuang Li
Entropy 2017, 19(7), 354; https://doi.org/10.3390/e19070354 - 12 Jul 2017
Cited by 4 | Viewed by 4330
Abstract
The time delay of economic policy and the memory property of a real economic system are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory property of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution, and numerical simulations are performed to verify the method. The effects of the fractional order, time delay, economic control, and random excitation on the amplitude of the economic system are investigated. The results show that time delay, fractional order, and the intensity of random excitation can all magnify the amplitude and increase the volatility of the economic system.
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Article
On Unsteady Three-Dimensional Axisymmetric MHD Nanofluid Flow with Entropy Generation and Thermo-Diffusion Effects on a Non-Linear Stretching Sheet
by Mohammed Almakki, Sharadia Dey, Sabyasachi Mondal and Precious Sibanda
Entropy 2017, 19(7), 168; https://doi.org/10.3390/e19070168 - 12 Jul 2017
Cited by 17 | Viewed by 5284
Abstract
The entropy generation in unsteady three-dimensional axisymmetric magnetohydrodynamic (MHD) nanofluid flow over a non-linearly stretching sheet is investigated. The flow is subject to thermal radiation and a chemical reaction. The conservation equations are solved using the spectral quasi-linearization method. The novelty of the work lies in the study of entropy generation in three-dimensional axisymmetric MHD nanofluid flow and in the choice of the spectral quasi-linearization method as the solution method. The effects of Brownian motion and thermophoresis are also taken into account. The nanofluid particle volume fraction on the boundary is passively controlled. The results show that as the Hartmann number increases, both the Nusselt number and the Sherwood number decrease, whereas the skin friction increases. It is further shown that an increase in the thermal radiation parameter corresponds to a decrease in the Nusselt number. Moreover, entropy generation increases with respect to some physical parameters.
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Article
Discrete Wigner Function Derivation of the Aaronson–Gottesman Tableau Algorithm
by Lucas Kocia, Yifei Huang and Peter Love
Entropy 2017, 19(7), 353; https://doi.org/10.3390/e19070353 - 11 Jul 2017
Cited by 9 | Viewed by 4414
Abstract
The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits with odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-$d$ qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd $d$ and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in $SU(d)$ for $d = 2$ and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.
(This article belongs to the Special Issue Quantum Information and Foundations)
Article
Chaos Synchronization of Nonlinear Fractional Discrete Dynamical Systems via Linear Control
by Baogui Xin, Li Liu, Guisheng Hou and Yuan Ma
Entropy 2017, 19(7), 351; https://doi.org/10.3390/e19070351 - 11 Jul 2017
Cited by 29 | Viewed by 5246
Abstract
Using a linear feedback control technique, we propose a chaos synchronization scheme for nonlinear fractional discrete dynamical systems. We then construct a novel 1-D fractional discrete income change system and a novel kind of 3-D fractional discrete system. By means of the stability principles of Caputo-like fractional discrete systems, we design a controller to achieve chaos synchronization, and present numerical simulations to illustrate and validate the synchronization scheme.
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)