Table of Contents

Entropy, Volume 20, Issue 2 (February 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Darwinian evolution is grounded in a dynamical selection process that involves diverse classes of [...]

Research

Open Access Article Information Theoretic-Based Interpretation of a Deep Neural Network Approach in Diagnosing Psychogenic Non-Epileptic Seizures
Entropy 2018, 20(2), 43; doi:10.3390/e20020043
Received: 28 November 2017 / Revised: 12 January 2018 / Accepted: 19 January 2018 / Published: 23 January 2018
PDF Full-text (2625 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The use of a deep neural network scheme is proposed to help clinicians solve a difficult diagnosis problem in neurology. The proposed multilayer architecture includes a feature engineering step (from time-frequency transformation), a double compressing stage trained by unsupervised learning, and a classification stage trained by supervised learning. After fine-tuning, the deep network is able to discriminate well between patients and controls, with around 90% sensitivity and specificity. This deep model gives better classification performance than some other standard discriminative learning algorithms. As clinical problems require explainable decisions, an effort has been made to qualitatively justify the classification results. The main novelty of this paper is indeed to give an entropic interpretation of how the deep scheme works and reaches its final decision. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)

Open Access Article Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems
Entropy 2018, 20(2), 51; doi:10.3390/e20020051
Received: 21 December 2017 / Revised: 17 January 2018 / Accepted: 18 January 2018 / Published: 23 January 2018
PDF Full-text (1430 KB) | HTML Full-text | XML Full-text
Abstract
The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
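
The two measures combined in this abstract are easy to illustrate with plug-in estimators for discrete data. The sketch below is a generic Python illustration, not the paper's DAG-scoring method; it assumes binary sequences and a history length of one, and all names in it are hypothetical.

```python
# Hedged sketch: plug-in estimates of KL divergence and transfer entropy for
# discrete data; a generic illustration only, not the paper's algorithm.
from collections import Counter
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) in bits for two discrete distributions."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))

def transfer_entropy(x, y):
    """Plug-in transfer entropy T_{X->Y} in bits, history length 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))      # (y_{t+1}, y_t, x_t)
    n = len(triples)
    c_abc = Counter(triples)
    c_ab = Counter((a, b) for a, b, _ in triples)   # (y_{t+1}, y_t)
    c_b = Counter(b for _, b, _ in triples)         # y_t
    c_bc = Counter((b, c) for _, b, c in triples)   # (y_t, x_t)
    return sum(m / n * np.log2(m * c_b[b] / (c_bc[(b, c)] * c_ab[(a, b)]))
               for (a, b, c), m in c_abc.items())

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                       # y lags x, so x_t determines y_{t+1}
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))   # ~0.737 bits
print(transfer_entropy(x, y))                  # ~1 bit
```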

Open Access Article A Micro-Scale Investigation on the Behaviors of Asphalt Mixtures under Freeze-Thaw Cycles Using Entropy Theory and a Computerized Tomography Scanning Technique
Entropy 2018, 20(2), 68; doi:10.3390/e20020068
Received: 30 October 2017 / Revised: 31 December 2017 / Accepted: 12 January 2018 / Published: 23 January 2018
PDF Full-text (4306 KB) | HTML Full-text | XML Full-text
Abstract
The thermodynamic behavior of asphalt mixtures is critical to engineers since it directly relates to damage in asphalt mixtures. However, most current research on the freeze-thaw damage of asphalt mixtures focuses on the bulk body at the macroscale and lacks a fundamental understanding of the thermodynamic behaviors of asphalt mixtures from the microscale perspective. In this paper, to identify the important thermodynamic behaviors of asphalt mixtures under freeze-thaw loading cycles, information entropy theory, an X-ray computerized tomography (CT) scanner and digital image processing technology are employed. The voids, the average size of the voids, the connected porosity, and the void number are extracted from the scanned images. Based on the experiments and the CT-scanned images, the information entropy evolution of the asphalt mixtures under different freeze-thaw cycles is calculated and the relationship between the change of information entropy and the pore structure characteristics is established. Then, the influences of different freezing and thawing conditions on the thermodynamic behaviors of asphalt mixtures are compared. The combination of information entropy theory and CT scanning technique proposed in this paper provides an innovative approach to investigating the thermodynamic behaviors of asphalt mixtures and a new way to analyze freeze-thaw damage in asphalt mixtures. Full article
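
A minimal sketch of the basic quantity tracked in this study: Shannon information entropy computed from the grey-level histogram of an image. The synthetic arrays below stand in for real CT slices, and the bin count is an illustrative assumption.

```python
# Hedged sketch: Shannon entropy of an image's grey-level distribution, the
# quantity the study tracks across freeze-thaw cycles. Synthetic data only.
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
slice_before = rng.normal(120, 10, (256, 256)).clip(0, 255)
slice_after = rng.normal(120, 30, (256, 256)).clip(0, 255)   # more heterogeneous
print(image_entropy(slice_before), image_entropy(slice_after))
```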

Open Access Article Entropy of Entanglement between Quantum Phases of a Three-Level Matter-Radiation Interaction Model
Entropy 2018, 20(2), 72; doi:10.3390/e20020072
Received: 21 November 2017 / Revised: 4 January 2018 / Accepted: 5 January 2018 / Published: 24 January 2018
PDF Full-text (13726 KB) | HTML Full-text | XML Full-text
Abstract
We show that the entropy of entanglement is sensitive to the coherent quantum phase transition between normal and super-radiant regions of a system of a finite number of three-level atoms interacting in a dipolar approximation with a one-mode electromagnetic field. The atoms are treated as semi-distinguishable using different cooperation numbers and representations of SU(3), variables which are relevant to the sensitivity of the entropy to the transition. The results are computed for all three possible configurations (Ξ, Λ and V) of the three-level atoms. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
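
The entropy of entanglement itself can be computed for any bipartite pure state from its Schmidt (singular) values. The following sketch assumes a maximally entangled two-qutrit state as a stand-in; the paper's SU(3) matter-radiation model is not reproduced.

```python
# Hedged sketch: entropy of entanglement for a generic bipartite pure state,
# from the singular values of the reshaped state vector. Illustration only.
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (bits) of the reduced state of subsystem A."""
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-12]                 # Schmidt coefficients
    return float(-(lam * np.log2(lam)).sum())

# Maximally entangled two-qutrit state: entropy = log2(3) ~ 1.585 bits
psi = np.zeros(9); psi[[0, 4, 8]] = 1 / np.sqrt(3)
print(entanglement_entropy(psi, 3, 3))
```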

Open Access Article A Novel Multivariate Sample Entropy Algorithm for Modeling Time Series Synchronization
Entropy 2018, 20(2), 82; doi:10.3390/e20020082
Received: 29 November 2017 / Revised: 15 January 2018 / Accepted: 19 January 2018 / Published: 24 January 2018
PDF Full-text (3297 KB) | HTML Full-text | XML Full-text
Abstract
Approximate and sample entropy (AE and SE) provide robust measures of the deterministic or stochastic content of a time series (regularity), as well as the degree of structural richness (complexity), through operations at multiple data scales. Despite the success of the univariate algorithms, multivariate sample entropy (mSE) algorithms are still in their infancy and have considerable shortcomings. Not only are existing mSE algorithms unable to analyse within- and cross-channel dynamics, they can counter-intuitively interpret increased correlation between variates as decreased regularity. To this end, we first revisit the embedding of multivariate delay vectors (DVs), critical to ensuring physically meaningful and accurate analysis. We next propose a novel mSE algorithm and demonstrate its improved performance over existing work, for synthetic data and for classifying wake and sleep states from real-world physiological data. It is furthermore revealed that, unlike other tools, such as the correlation of phase synchrony, synchronized regularity dynamics are uniquely identified via mSE analysis. In addition, a model for the operation of this novel algorithm in the presence of white Gaussian noise is presented, which, in contrast to the existing algorithms, reveals for the first time that increasing correlation between different variates reduces entropy. Full article
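
For reference, here is a minimal sketch of the univariate sample entropy that the proposed mSE generalizes, with the usual tolerance r expressed as a fraction of the standard deviation; the multivariate embedding discussed in the paper is not shown, and the test signals are illustrative.

```python
# Hedged sketch: standard univariate SampEn(m, r); not the paper's mSE.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn with tolerance r given as a fraction of the std. dev."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int((d <= tol).sum())
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(2)
print(sample_entropy(rng.standard_normal(1000)))           # irregular: higher
print(sample_entropy(np.sin(np.linspace(0, 60, 1000))))    # regular: lower
```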

Open Access Article On the Holographic Bound in Newtonian Cosmology
Entropy 2018, 20(2), 83; doi:10.3390/e20020083
Received: 11 December 2017 / Revised: 17 January 2018 / Accepted: 23 January 2018 / Published: 25 January 2018
PDF Full-text (251 KB) | HTML Full-text | XML Full-text
Abstract
The holographic principle sets an upper bound on the total (Boltzmann) entropy content of the Universe at around 10^123 k_B (k_B being Boltzmann’s constant). In this work we point out the existence of a remarkable duality between nonrelativistic quantum mechanics on the one hand, and Newtonian cosmology on the other. Specifically, nonrelativistic quantum mechanics has a quantum probability fluid that exactly mimics the behaviour of the cosmological fluid, the latter considered in the Newtonian approximation. One proves that the equations governing the cosmological fluid (the Euler equation and the continuity equation) become the very equations that govern the quantum probability fluid after applying the Madelung transformation to the Schroedinger wavefunction. Under the assumption that gravitational equipotential surfaces can be identified with isoentropic surfaces, this model allows for a simple computation of the gravitational entropy of a Newtonian Universe. In a first approximation, we model the cosmological fluid as the quantum probability fluid of free Schroedinger waves. We find that this model Universe saturates the holographic bound. As a second approximation, we include the Hubble expansion of the galaxies. The corresponding Schroedinger waves lead to a value of the entropy lying three orders of magnitude below the holographic bound. Current work on a fully relativistic extension of our present model can be expected to yield results in even better agreement with empirical estimates of the entropy of the Universe. Full article
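
The Madelung transformation invoked in this abstract is a standard result and can be sketched in a few lines: writing the wavefunction in polar form splits the Schroedinger equation into the continuity and Euler-like equations of the probability fluid.

```latex
% Polar (Madelung) form of the wavefunction; rho is the probability density,
% S the phase, v the velocity field of the probability fluid.
\psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad \mathbf{v} = \frac{\nabla S}{m}
% Imaginary part of the Schroedinger equation -> continuity equation:
\partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0
% Real part -> Euler-like equation with an additional quantum potential Q:
\partial_t \mathbf{v} + (\mathbf{v} \cdot \nabla)\mathbf{v}
  = -\frac{1}{m}\nabla (V + Q), \qquad
  Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}
```
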
Open Access Article Residual Entropy and Critical Behavior of Two Interacting Boson Species in a Double Well
Entropy 2018, 20(2), 84; doi:10.3390/e20020084
Received: 21 December 2017 / Revised: 18 January 2018 / Accepted: 23 January 2018 / Published: 25 January 2018
PDF Full-text (1517 KB) | HTML Full-text | XML Full-text
Abstract
Motivated by the importance of entanglement and correlation indicators in the analysis of quantum systems, we study the equilibrium and the bipartite residual entropy in a two-species Bose–Hubbard dimer when the spatial phase separation of the two species takes place. We consider both the zero and non-zero-temperature regime. We present different kinds of residual entropies (each one associated with a different way of partitioning the system), and we show that they strictly depend on the specific quantum phase characterizing the two species (supermixed, mixed or demixed) even at finite temperature. To provide a deeper physical insight into the zero-temperature scenario, we apply the fully-analytical variational approach based on su(2) coherent states and provide a considerably good approximation of the entanglement entropy. Finally, we show that the effectiveness of bipartite residual entropy as a critical indicator at non-zero temperature is unchanged when considering a restricted combination of energy eigenstates. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)

Open Access Article Is Natural Language a Perigraphic Process? The Theorem about Facts and Words Revisited
Entropy 2018, 20(2), 85; doi:10.3390/e20020085
Received: 4 January 2018 / Revised: 23 January 2018 / Accepted: 24 January 2018 / Published: 26 January 2018
PDF Full-text (341 KB) | HTML Full-text | XML Full-text
Abstract
As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of the word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)

Open Access Article A New Chaotic System with a Self-Excited Attractor: Entropy Measurement, Signal Encryption, and Parameter Estimation
Entropy 2018, 20(2), 86; doi:10.3390/e20020086
Received: 28 December 2017 / Revised: 19 January 2018 / Accepted: 21 January 2018 / Published: 27 January 2018
Cited by 2 | PDF Full-text (8699 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we introduce a new chaotic system that is used for an engineering application of signal encryption. It has some interesting features, and its successful implementation and manufacturing were performed via a real circuit as a random number generator. In addition, we provide a parameter estimation method to extract chaotic model parameters from the real data of the chaotic circuit. The parameter estimation method is based on attractor distribution modeling in the state space, which is compatible with the chaotic system characteristics. Here, a Gaussian mixture model (GMM) is used as the main part of cost function computations in the parameter estimation method. To optimize the cost function, we also apply two recent efficient optimization methods: the WOA (Whale Optimization Algorithm) and MVO (Multi-Verse Optimizer) algorithms. The results show the success of the parameter estimation procedure. Full article
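
The GMM-based cost idea can be sketched generically: fit a Gaussian mixture to observed attractor points, then score candidate parameters by the likelihood of their simulated trajectory under that mixture. The Lorenz system below stands in for the paper's circuit, and the component count, step size and candidate values are illustrative assumptions.

```python
# Hedged sketch: GMM-based cost for chaotic parameter estimation; the Lorenz
# system and all settings are illustrative stand-ins, not the paper's circuit.
import numpy as np
from sklearn.mixture import GaussianMixture

def lorenz_traj(sigma, rho, beta, n=4000, dt=0.01):
    xyz = np.array([1.0, 1.0, 1.0]); out = np.empty((n, 3))
    for i in range(n):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma*(y-x), x*(rho-z)-y, x*y-beta*z])
        out[i] = xyz
    return out

observed = lorenz_traj(10.0, 28.0, 8/3)                  # "measured" attractor
gmm = GaussianMixture(n_components=8, random_state=0).fit(observed)

def cost(params):
    """Negative mean log-likelihood of a candidate trajectory under the GMM."""
    return -gmm.score(lorenz_traj(*params))

print(cost((10.0, 28.0, 8/3)))   # true parameters: low cost
print(cost((10.0, 35.0, 8/3)))   # wrong rho: typically much higher cost
```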

Open Access Article Calculation of the Connected Dominating Set Considering Vertex Importance Metrics
Entropy 2018, 20(2), 87; doi:10.3390/e20020087
Received: 21 December 2017 / Revised: 17 January 2018 / Accepted: 25 January 2018 / Published: 28 January 2018
PDF Full-text (1351 KB) | HTML Full-text | XML Full-text
Abstract
The computation of a set constituted by few vertices to define a virtual backbone supporting information interchange is a problem that arises in many areas when analysing networks of different natures, like wireless, brain, or social networks. Recent papers propose obtaining such a set of vertices by computing the connected dominating set (CDS) of a graph. In recent works, the CDS has been obtained by considering that all vertices exhibit similar characteristics. However, that assumption is not valid for complex networks in which vertices can play different roles. Therefore, we propose finding the CDS by taking into account several metrics which measure the importance of each network vertex, e.g., error probability, entropy, or entropy variation (EV). Full article
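
A greedy importance-aware CDS can be sketched as follows; the selection rule (coverage gain weighted by importance) is an illustrative heuristic in the spirit of the abstract, not the paper's algorithm, and degree centrality merely stands in for the metrics listed above.

```python
# Hedged sketch: greedy connected dominating set that prefers "important"
# vertices; an illustrative heuristic, not the paper's method.
import networkx as nx

def greedy_weighted_cds(g, importance):
    start = max(g.nodes, key=importance.get)
    cds, dominated = {start}, {start} | set(g[start])
    while dominated != set(g.nodes):
        # only vertices adjacent to the current CDS keep it connected
        frontier = {v for u in cds for v in g[u]} - cds
        best = max(frontier,
                   key=lambda v: importance[v] * len(set(g[v]) - dominated))
        cds.add(best)
        dominated |= {best} | set(g[best])
    return cds

g = nx.karate_club_graph()
imp = nx.degree_centrality(g)          # one possible importance metric
cds = greedy_weighted_cds(g, imp)
print(sorted(cds), nx.is_connected(g.subgraph(cds)))
```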

Open Access Article Traffic Offloading in Unlicensed Spectrum for 5G Cellular Network: A Two-Layer Game Approach
Entropy 2018, 20(2), 88; doi:10.3390/e20020088
Received: 7 December 2017 / Revised: 21 January 2018 / Accepted: 25 January 2018 / Published: 28 January 2018
PDF Full-text (893 KB) | HTML Full-text | XML Full-text
Abstract
Licensed Assisted Access (LAA) is considered one of the latest groundbreaking innovations to provide high performance in future 5G. Coexistence schemes such as Listen Before Talk (LBT) and Carrier Sensing and Adaptive Transmission (CSAT) have been proven to be good methods to share spectrum, and they are WiFi friendly. In this paper, a modified LBT-based CSAT scheme is proposed which can effectively reduce collisions at the moment when Long Term Evolution (LTE) starts to transmit data in CSAT mode. To make full use of the valuable spectrum resources, the throughput of both LAA and WiFi systems should be improved. Thus, a two-layer Coalition-Auction Game-based Transaction (CAGT) mechanism is proposed in this paper to optimize the performance of the two systems. In the first layer, a coalition among Access Points (APs) is built to balance the WiFi stations and maximize the WiFi throughput. The main idea of the devised coalition forming is to merge light-loaded APs with heavy-loaded APs into a coalition; consequently, the data of the overloaded APs can be offloaded to the light-loaded APs. Next, an auction game between the LAA and WiFi systems is used to gain a win–win strategy, in which the LAA Base Station (BS) is the auctioneer and AP coalitions are bidders. Thus, the throughput of both systems is improved. Simulation results demonstrate that the proposed scheme can improve the performance of both systems effectively. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)

Open Access Article Comparative Evaluation of Integrated Waste Heat Utilization Systems for Coal-Fired Power Plants Based on In-Depth Boiler-Turbine Integration and Organic Rankine Cycle
Entropy 2018, 20(2), 89; doi:10.3390/e20020089
Received: 7 November 2017 / Revised: 30 December 2017 / Accepted: 20 January 2018 / Published: 29 January 2018
Cited by 1 | PDF Full-text (1998 KB) | HTML Full-text | XML Full-text
Abstract
To maximize the system-level heat integration, three retrofit concepts of waste heat recovery via organic Rankine cycle (ORC), in-depth boiler-turbine integration, and coupling of both are proposed, analyzed and comprehensively compared in terms of thermodynamic and economic performance. For the thermodynamic analysis, exergy analysis is employed, with grand composite curves illustrated to identify how the systems are fundamentally and quantitatively improved and to highlight key processes for system improvement. For the economic analysis, annual revenue and investment payback period are calculated based on the estimated capital investment of each component to identify the economic feasibility and competitiveness of each retrofit concept proposed. The results show that the in-depth boiler-turbine integration achieves a better temperature match of the heat flows involved for different fluids and multi-stage air preheating, and thus a significant improvement of power output (23.99 MW), which is much larger than that of the system with only the ORC (6.49 MW). This is mainly due to the limitation of the ultra-low temperature (from 135 to 75 °C) heat available from the flue gas for the ORC. The thermodynamic improvement is mostly contributed by the reduction of exergy destruction within the boiler subsystem, which is eventually converted to mechanical power, while the exergy destruction within the turbine system is almost unchanged for the three concepts. The selection of ORC working fluids is performed to maximize the power output. Due to the low-grade heat source, the cycle with R11 offers the largest additional net power generation but is not significantly better than the other preselected working fluids. Economically, the in-depth boiler-turbine integration is the most economically competitive solution, with a payback period of only 0.78 years. The ORC concept is less attractive as a sole application due to a long payback time (2.26 years). However, by coupling both concepts, a net power output of 26.51 MW and a payback time of almost one year are achieved, which may promote the large-scale production and deployment of ORC with a cost reduction and competitiveness enhancement. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article The Shannon Entropy Trend of a Fish System Estimated by a Machine Vision Approach Seems to Reflect the Molar Se:Hg Ratio of Its Feed
Entropy 2018, 20(2), 90; doi:10.3390/e20020090
Received: 20 November 2017 / Revised: 16 January 2018 / Accepted: 24 January 2018 / Published: 29 January 2018
PDF Full-text (783 KB) | HTML Full-text | XML Full-text
Abstract
The present study investigates the suitability of a machine vision-based method to detect deviations in the Shannon entropy (SE) of a European seabass (Dicentrarchus labrax) biological system fed with different selenium:mercury (Se:Hg) molar ratios. Four groups of fish were fed for 14 days with commercial feed (control) and with the same feed spiked with 0.5, 5 and 10 mg of MeHg per kg, giving Se:Hg molar ratios of 29.5 (control, C1) and 6.6, 0.8 and 0.4 (C2, C3 and C4). The basal SE of C1 and C2 (Se:Hg > 1) tended to increase during the experimental period, while that of C3 and C4 (Se:Hg < 1) tended to decrease. In addition, the differences in the SE of the four systems in response to a stochastic event minus that of the respective basal states were less pronounced in the systems fed with Se:Hg molar ratios lower than one (C3 and C4). These results indicate that the SE may be a suitable indicator for the prediction of seafood safety and fish health (i.e., the Se:Hg molar ratio and not the Hg concentration alone) prior to the display of pathological symptoms. We hope that this work can serve as a first step for further investigations to confirm and validate the present results prior to their potential implementation in practical settings. Full article
(This article belongs to the Special Issue Selected Papers from IWOBI—Entropy-Based Applied Signal Processing)

Open Access Feature Paper Article Strong- and Weak-Universal Critical Behaviour of a Mixed-Spin Ising Model with Triplet Interactions on the Union Jack (Centered Square) Lattice
Entropy 2018, 20(2), 91; doi:10.3390/e20020091
Received: 30 December 2017 / Revised: 26 January 2018 / Accepted: 26 January 2018 / Published: 29 January 2018
PDF Full-text (625 KB) | HTML Full-text | XML Full-text
Abstract
The mixed spin-1/2 and spin-S Ising model on the Union Jack (centered square) lattice with four different three-spin (triplet) interactions and the uniaxial single-ion anisotropy is exactly solved by establishing a rigorous mapping equivalence with the corresponding zero-field (symmetric) eight-vertex model on a dual square lattice. A rigorous proof of the aforementioned exact mapping equivalence is provided by two independent approaches exploiting either a graph-theoretical or spin representation of the zero-field eight-vertex model. An influence of the interaction anisotropy as well as the uniaxial single-ion anisotropy on phase transitions and critical phenomena is examined in particular. It is shown that the considered model exhibits a strong-universal critical behaviour with constant critical exponents when considering the isotropic model with four equal triplet interactions or the anisotropic model with one triplet interaction differing from the other three. The anisotropic models with two different triplet interactions, which are pairwise equal to each other, contrarily exhibit a weak-universal critical behaviour with critical exponents continuously varying with a relative strength of the triplet interactions as well as the uniaxial single-ion anisotropy. It is evidenced that the variations of critical exponents of the mixed-spin Ising models with the integer-valued spins S differ basically from their counterparts with the half-odd-integer spins S. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)

Open Access Article Spectral and Energy Efficient Low-Overhead Uplink and Downlink Channel Estimation for 5G Massive MIMO Systems
Entropy 2018, 20(2), 92; doi:10.3390/e20020092
Received: 2 December 2017 / Revised: 17 January 2018 / Accepted: 20 January 2018 / Published: 30 January 2018
Cited by 1 | PDF Full-text (8544 KB) | HTML Full-text | XML Full-text
Abstract
Uplink and downlink channel estimation in massive Multiple Input Multiple Output (MIMO) systems is an intricate issue because of the increasing channel matrix dimensions. The channel feedback overhead using traditional codebook schemes is very large, which consumes more bandwidth and decreases the overall system efficiency. The purpose of this paper is to decrease the channel estimation overhead by taking advantage of sparse attributes and also to optimize the Energy Efficiency (EE) of the system. To cope with this issue, we propose a novel approach using Compressed-Sensing (CS), Block Iterative-Support-Detection (Block-ISD), Angle-of-Departure (AoD) and Structured Compressive Sampling Matching Pursuit (S-CoSaMP) algorithms to reduce the channel estimation overhead and compare them with traditional algorithms. The CS uses the temporal correlation of time-varying channels to produce a Differential Channel Impulse Response (DCIR) between two CIRs that are adjacent in time-slots. The DCIR has greater sparsity than the conventional CIRs, so it can be easily compressed. The Block-ISD uses the spatial correlation of the channels to obtain block-sparsity, which results in lower pilot overhead. AoD quantizes the channels whose path-AoD variation is slower than that of the path-gains, and such information is utilized to reduce the overhead. S-CoSaMP deploys structured sparsity to obtain reliable Channel State Information (CSI). MATLAB simulation results show that the proposed CS-based algorithms reduce the feedback and pilot overhead by a significant percentage and also improve the system capacity compared with the traditional algorithms. Moreover, the EE level increases with increasing Base Station (BS) density and UE density and with a lower hardware impairment level. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
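
The differential-CIR idea lends itself to a tiny demonstration: the change between channel impulse responses in adjacent time slots is far sparser than either CIR, so it compresses better. Channel dimensions and the amount of temporal variation below are illustrative assumptions.

```python
# Hedged sketch: the DCIR between adjacent time slots is sparser than the
# CIRs themselves. Tap counts and variation model are illustrative only.
import numpy as np

rng = np.random.default_rng(10)
taps, active = 256, 12
support = rng.choice(taps, active, replace=False)
h1 = np.zeros(taps, complex)
h1[support] = rng.standard_normal(active) + 1j * rng.standard_normal(active)
h2 = h1.copy()
moved = support[:2]                      # only a couple of paths change
h2[moved] += 0.3 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

dcir = h2 - h1
print("nonzeros:", np.count_nonzero(h1), np.count_nonzero(dcir))   # 12 vs 2
```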

Open Access Article Prediction of the ORC Working Fluid’s Temperature-Entropy Saturation Boundary Using Redlich-Kwong Equation of State
Entropy 2018, 20(2), 93; doi:10.3390/e20020093
Received: 17 January 2018 / Revised: 26 January 2018 / Accepted: 27 January 2018 / Published: 30 January 2018
PDF Full-text (1076 KB) | HTML Full-text | XML Full-text
Abstract
The shape of the working fluid’s temperature-entropy saturation boundary has a strong influence, not only on the process parameters and efficiency of the Organic Rankine Cycle, but also on the design (the layout) of the equipment. In this paper, working fluids are modelled by the Redlich-Kwong equation of state. It is demonstrated that a limiting isochoric heat capacity might exist between dry and wet fluids. With the Redlich-Kwong equation of state, this limit can be predicted with good accuracy for several fluids, including alkanes. Full article
(This article belongs to the Section Thermodynamics)
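
A minimal sketch of the Redlich-Kwong equation of state itself, with the standard critical-property parameterization; the critical constants for n-pentane are approximate textbook values, and the state point is arbitrary.

```python
# Hedged sketch: Redlich-Kwong pressure with standard a, b from Tc and Pc.
import numpy as np

R = 8.314462  # J/(mol K)

def rk_pressure(T, Vm, Tc, Pc):
    """Redlich-Kwong pressure [Pa] at temperature T [K], molar volume Vm [m^3/mol]."""
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    return R * T / (Vm - b) - a / (np.sqrt(T) * Vm * (Vm + b))

# n-pentane (approximate critical data): Tc = 469.7 K, Pc = 3.37 MPa
print(rk_pressure(400.0, 1.0e-3, 469.7, 3.37e6))
```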

Open Access Article Heartbeats Do Not Make Good Pseudo-Random Number Generators: An Analysis of the Randomness of Inter-Pulse Intervals
Entropy 2018, 20(2), 94; doi:10.3390/e20020094
Received: 21 December 2017 / Revised: 24 January 2018 / Accepted: 26 January 2018 / Published: 30 January 2018
PDF Full-text (414 KB) | HTML Full-text | XML Full-text
Abstract
The proliferation of wearable and implantable medical devices has given rise to an interest in developing security schemes suitable for these systems and the environment in which they operate. One area that has received much attention lately is the use of (human) biological signals as the basis for biometric authentication, identification and the generation of cryptographic keys. The heart signal (e.g., as recorded in an electrocardiogram) has been used by several researchers in the last few years. Specifically, the so-called Inter-Pulse Intervals (IPIs), the times between two consecutive heartbeats, have been repeatedly pointed out as a potentially good source of entropy and are at the core of various recent authentication protocols. In this work, we report the results of a large-scale statistical study to determine whether such an assumption is (or is not) upheld. For this, we have analyzed 19 public datasets of heart signals from the Physionet repository, spanning electrocardiograms from 1353 subjects sampled at different frequencies and with lengths that vary between a few minutes and several hours. We believe this is the largest dataset on this topic analyzed in the literature. We have then applied a standard battery of randomness tests to the extracted IPIs. Under the algorithms described in this paper and after analyzing these 19 public ECG datasets, our results raise doubts about the use of IPI values as a good source of randomness for cryptographic purposes. This has repercussions both for the security of some of the protocols proposed up to now and for the design of future IPI-based schemes. Full article
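
The kind of check applied in the paper can be sketched as follows: derive bits from inter-pulse intervals (here, the four least-significant bits of each IPI in milliseconds, a common choice in this literature though not necessarily the paper's exact extractor) and run the NIST monobit frequency test. The synthetic IPIs merely stand in for real ECG-derived intervals.

```python
# Hedged sketch: IPI bit extraction plus the NIST monobit frequency test.
# Synthetic intervals only; the extractor choice is an assumption.
import numpy as np
from math import erfc, sqrt

def ipi_bits(ipis_ms, n_lsb=4):
    return np.array([(int(v) >> k) & 1 for v in ipis_ms for k in range(n_lsb)])

def monobit_pvalue(bits):
    s = abs(2 * int(bits.sum()) - len(bits)) / sqrt(len(bits))
    return erfc(s / sqrt(2))     # p < 0.01 => reject the randomness hypothesis

rng = np.random.default_rng(3)
ipis = 800 + 50 * rng.standard_normal(2000)    # pseudo "heartbeat" intervals
print(f"monobit p-value: {monobit_pvalue(ipi_bits(ipis)):.3f}")
```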

Open Access Feature Paper Article Scaling-Laws of Flow Entropy with Topological Metrics of Water Distribution Networks
Entropy 2018, 20(2), 95; doi:10.3390/e20020095
Received: 28 December 2017 / Revised: 24 January 2018 / Accepted: 26 January 2018 / Published: 30 January 2018
Cited by 1 | PDF Full-text (1627 KB) | HTML Full-text | XML Full-text
Abstract
Robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon’s informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of the flow entropy of water distribution networks with their size and other topological metrics are studied. To this aim, flow entropy, maximum flow entropy, link density and average path length have been evaluated for a set of 22 networks, both real and synthetic, with different size and topology. The obtained results led to the identification of suitable scaling laws of flow entropy and maximum flow entropy with water distribution network size, in the form of power laws. The obtained relationships allow comparison of the flow entropy of water distribution networks with different size, and provide an easy tool to define the maximum achievable entropy of a specific water distribution network. An example of application of the obtained relationships to the design of a water distribution network is provided, showing how, with a constrained multi-objective optimization procedure, a tradeoff between network cost and robustness is easily identified. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
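
Scaling laws of the form S = c·n^β are typically identified by linear regression in log-log space; a minimal sketch follows, with synthetic (n, S) pairs standing in for the paper's 22 networks.

```python
# Hedged sketch: power-law fit S = c * n^beta via log-log regression.
# The (n, S) pairs are synthetic placeholders, not the paper's data.
import numpy as np

n = np.array([20, 50, 100, 200, 500, 1000])            # network size
noise = np.exp(0.03 * np.random.default_rng(4).standard_normal(6))
S = 1.8 * n**0.45 * noise                              # synthetic entropies

beta, log_c = np.polyfit(np.log(n), np.log(S), 1)
print(f"S ~ {np.exp(log_c):.2f} * n^{beta:.2f}")
```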

Open Access Article Particle Swarm Optimization and Uncertainty Assessment in Inverse Problems
Entropy 2018, 20(2), 96; doi:10.3390/e20020096
Received: 29 November 2017 / Revised: 9 January 2018 / Accepted: 25 January 2018 / Published: 30 January 2018
Cited by 1 | PDF Full-text (1896 KB) | HTML Full-text | XML Full-text
Abstract
Most inverse problems in industry (and particularly in geophysical exploration) are highly underdetermined, because the number of model parameters is too high to achieve accurate data predictions and because the sampling of the data space is scarce and incomplete and always affected by different kinds of noise. Additionally, the physics of the forward problem is a simplification of reality. All these facts mean that the inverse problem solution is not unique; that is, there are different inverse solutions (called equivalent), compatible with the prior information, that fit the observed data within similar error bounds. In the case of nonlinear inverse problems, these equivalent models are located in disconnected flat curvilinear valleys of the cost-function topography. The uncertainty analysis consists of obtaining a representation of this complex topography via different sampling methodologies. In this paper, we focus on the use of a particle swarm optimization (PSO) algorithm to sample the region of equivalence in nonlinear inverse problems. Although this methodology has a general purpose, we show its application for the uncertainty assessment of the solution of a geophysical problem concerning gravity inversion in sedimentary basins, showing that it is possible to efficiently perform this task in a sampling-while-optimizing mode. In particular, we explain how to use and analyze the geophysical models sampled by exploratory PSO family members to infer different descriptors of nonlinear uncertainty. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
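
The "sampling-while-optimizing" idea can be sketched with a minimal particle swarm that, besides optimizing, records every sampled model whose misfit falls below a tolerance. The quadratic misfit and all PSO constants below are illustrative assumptions.

```python
# Hedged sketch: PSO that also collects "equivalent" models (misfit < tol).
# Toy cost and constants; not the paper's gravity-inversion setup.
import numpy as np

rng = np.random.default_rng(5)
misfit = lambda m: ((m - np.array([1.0, -2.0]))**2).sum()   # toy cost

n, w, c1, c2, tol = 30, 0.72, 1.5, 1.5, 0.05
x = rng.uniform(-5, 5, (n, 2)); v = np.zeros((n, 2))
pbest, pcost = x.copy(), np.apply_along_axis(misfit, 1, x)
equivalent = []                           # region-of-equivalence samples

for _ in range(200):
    g = pbest[pcost.argmin()]
    v = w*v + c1*rng.random((n, 1))*(pbest - x) + c2*rng.random((n, 1))*(g - x)
    x = x + v
    cost = np.apply_along_axis(misfit, 1, x)
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]
    equivalent.extend(x[cost < tol])      # sample while optimizing

print(pbest[pcost.argmin()], len(equivalent), "equivalent models kept")
```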

Open Access Article Feature Selection Based on the Local Lift Dependence Scale
Entropy 2018, 20(2), 97; doi:10.3390/e20020097
Received: 11 November 2017 / Revised: 19 January 2018 / Accepted: 25 January 2018 / Published: 30 January 2018
PDF Full-text (428 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper uses a classical approach to feature selection: minimization of a cost function applied on estimated joint distributions. However, in this new formulation, the optimization search space is extended. The original search space is the Boolean lattice of feature sets (BLFS), while the extended one is a collection of Boolean lattices of ordered pairs (CBLOP), that is (features, associated value), indexed by the elements of the BLFS. In this approach, we may not only select the features that are most related to a variable Y, but also select the values of the features that most influence the variable or that are most prone to have a specific value of Y. A local formulation of Shannon’s mutual information, which generalizes Shannon’s original definition, is applied on a CBLOP to generate a multiple resolution scale for characterizing variable dependence, the Local Lift Dependence Scale (LLDS). The main contribution of this paper is to define and apply the LLDS to analyse local properties of joint distributions that are neglected by the classical Shannon’s global measure in order to select features. This approach is applied to select features based on the dependence between: (i) the performance of students on university entrance exams and on courses of their first semester in the university; (ii) the party of a congressional representative and their votes on different matters; (iii) the cover type of terrains and several terrain properties. Full article
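
The local view of dependence behind the LLDS can be illustrated directly: the lift P(x, y)/(P(x)P(y)) per cell of a joint distribution, whose logarithm is the pointwise mutual information that Shannon's global measure averages out. The joint table below is an illustrative assumption.

```python
# Hedged sketch: per-cell lift and pointwise mutual information, the local
# quantities that global mutual information averages. Toy joint table.
import numpy as np

joint = np.array([[0.30, 0.05],        # P(X=x, Y=y)
                  [0.10, 0.55]])
px = joint.sum(axis=1, keepdims=True)  # marginal P(X)
py = joint.sum(axis=0, keepdims=True)  # marginal P(Y)

lift = joint / (px @ py)               # > 1: locally positive dependence
pmi = np.log2(lift)                    # pointwise mutual information (bits)
mi = float((joint * pmi).sum())        # Shannon's global mutual information

print(lift, f"MI = {mi:.3f} bits", sep="\n")
```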

Open Access Feature Paper Article Nonequilibrium Entropic Bounds for Darwinian Replicators
Entropy 2018, 20(2), 98; doi:10.3390/e20020098
Received: 25 November 2017 / Revised: 15 January 2018 / Accepted: 26 January 2018 / Published: 31 January 2018
PDF Full-text (1213 KB) | HTML Full-text | XML Full-text
Abstract
Life evolved on our planet by means of a combination of Darwinian selection and innovations leading to higher levels of complexity. The emergence and selection of replicating entities is a central problem in prebiotic evolution. Theoretical models have shown how populations of different types of replicating entities exclude or coexist with other classes of replicators. Models are typically kinetic, based on standard replicator equations. On the other hand, the presence of thermodynamical constraints for these systems remains an open question. This is largely due to the lack of a general theory of statistical methods for systems far from equilibrium. Nonetheless, a first approach to this problem has been put forward in a series of novel developments falling under the rubric of the extended second law of thermodynamics. The work presented here is twofold: firstly, we review this theoretical framework and provide a brief description of the three fundamental replicator types in prebiotic evolution: parabolic, Malthusian and hyperbolic. Secondly, we employ these previously mentioned techniques to explore how replicators are constrained by thermodynamics. Finally, we discuss where further research should be focused. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Open Access Article On the Use of Normalized Compression Distances for Image Similarity Detection
Entropy 2018, 20(2), 99; doi:10.3390/e20020099
Received: 2 December 2017 / Revised: 21 January 2018 / Accepted: 22 January 2018 / Published: 31 January 2018
PDF Full-text (3872 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions. Feature vectors for simple transforms (circular translations on horizontal, vertical, diagonal directions and rotations around image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subject to some common image processing. While the direct computation of NCD fails to detect image similarity even in the case of simple median and moving average filtering in 3 × 3 windows, for certain transforms and compressors, the proposed approach appears to provide robustness at similarity detection against smoothing, lossy compression, contrast enhancement, noise addition and some robustness against geometrical transforms (scaling, cropping and rotation). Full article
(This article belongs to the Section Information Theory)
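
The NCD and the paper's feature-vector idea admit a compact sketch: compute the NCD between an image and shifted versions of itself. Here zlib stands in for the paper's set of compressors, and the random "image" is a placeholder.

```python
# Hedged sketch: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# and an NCD feature vector over circular translations. zlib as compressor.
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    cx = len(zlib.compress(x)); cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def ncd_feature_vector(img: np.ndarray, shifts=range(1, 9)):
    """NCD between an image and its circular horizontal translations."""
    base = img.tobytes()
    return [ncd(base, np.roll(img, s, axis=1).tobytes()) for s in shifts]

rng = np.random.default_rng(6)
img = (rng.random((64, 64)) * 255).astype(np.uint8)
print(np.round(ncd_feature_vector(img), 3))
```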

Open Access Feature Paper Article Fully Adaptive Particle Filtering Algorithm for Damage Diagnosis and Prognosis
Entropy 2018, 20(2), 100; doi:10.3390/e20020100
Received: 14 November 2017 / Revised: 18 January 2018 / Accepted: 24 January 2018 / Published: 31 January 2018
PDF Full-text (2060 KB) | HTML Full-text | XML Full-text
Abstract
A fully adaptive particle filtering algorithm is proposed in this paper which is capable of updating both state process models and measurement models separately and simultaneously. The approach is a significant step toward more realistic online monitoring or tracking of damage. The majority of the existing methods for Bayes filtering are based on predefined and fixed state process and measurement models. Simultaneous estimation of both state and model parameters has gained attention in recent literature. Some work has been done on updating the state process model. However, not many studies exist regarding updating the measurement model. In most real-world applications, the correlation between measurements and the hidden state of damage is not defined in advance and, therefore, presuming an offline fixed measurement model is not promising. The proposed approach is based on optimizing relative entropy or Kullback–Leibler divergence through a particle filtering algorithm. The proposed algorithm is successfully applied to a case study of online fatigue damage estimation in composite materials. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
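
For orientation, a plain bootstrap particle filter for scalar damage-state tracking is sketched below; it is the baseline that the fully adaptive scheme extends, and the adaptive model updates themselves are not reproduced. The model forms, noise levels and measurements are illustrative assumptions.

```python
# Hedged sketch: bootstrap particle filter (predict, weight, resample) for a
# scalar damage state. All models and numbers are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n_p = 1000
f = lambda s: 1.05 * s                    # assumed growth-type process model
h = lambda s: s                           # assumed identity measurement model

particles = rng.uniform(0.1, 1.0, n_p)
for z in [0.22, 0.26, 0.31, 0.37]:        # synthetic measurements
    particles = f(particles) + 0.01 * rng.standard_normal(n_p)   # predict
    w = np.exp(-0.5 * ((z - h(particles)) / 0.02)**2)            # weight
    w /= w.sum()
    particles = particles[rng.choice(n_p, size=n_p, p=w)]        # resample
    print(f"z={z:.2f}  estimate={particles.mean():.3f}")
```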

Open Access Article The Complex Neutrosophic Soft Expert Relation and Its Multiple Attribute Decision-Making Method
Entropy 2018, 20(2), 101; doi:10.3390/e20020101
Received: 24 December 2017 / Revised: 15 January 2018 / Accepted: 30 January 2018 / Published: 31 January 2018
Cited by 1 | PDF Full-text (805 KB) | HTML Full-text | XML Full-text
Abstract
This paper introduces a novel soft computing technique, called the complex neutrosophic soft expert relation (CNSER), to evaluate the degree of interaction between two hybrid models called complex neutrosophic soft expert sets (CNSESs). CNSESs are used to represent two-dimensional data that are imprecise, uncertain, incomplete and indeterminate. Moreover, it has a mechanism to incorporate the parameter set and the opinions of all experts in one model, thus making it highly suitable for use in decision-making problems where the time factor plays a key role in determining the final decision. The complex neutrosophic soft expert set and complex neutrosophic soft expert relation are both defined. Utilizing the properties of the CNSER introduced, an empirical study is conducted on the relationship between the variability of the currency exchange rate and Malaysian exports and the time frame (phase) of the interaction between these two variables. This study is supported further by an algorithm to determine the type and the degree of this relationship. A comparison between different existing relations and the CNSER is provided to show the advantages of our proposed CNSER. Then, the notion of the inverse, complement and composition of CNSERs, along with some related theorems and properties, is introduced. Finally, we define the symmetry, transitivity and reflexivity of CNSERs, as well as the equivalence relation and equivalence classes on CNSESs. Some interesting properties are also obtained. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open Access Article Mutual Information and Information Gating in Synfire Chains
Entropy 2018, 20(2), 102; doi:10.3390/e20020102
Received: 22 December 2017 / Revised: 29 January 2018 / Accepted: 30 January 2018 / Published: 1 February 2018
PDF Full-text (1079 KB) | HTML Full-text | XML Full-text
Abstract
Coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains—SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)

Open Access Feature Paper Article Glass Transition, Crystallization of Glass-Forming Melts, and Entropy
Entropy 2018, 20(2), 103; doi:10.3390/e20020103
Received: 21 December 2017 / Revised: 22 January 2018 / Accepted: 26 January 2018 / Published: 1 February 2018
Cited by 1 | PDF Full-text (665 KB) | HTML Full-text | XML Full-text
Abstract
A critical analysis of possible (including some newly proposed) definitions of the vitreous state and the glass transition is performed and an overview of kinetic criteria of vitrification is presented. On the basis of these results, recent controversial discussions on the possible values of the residual entropy of glasses are reviewed. Our conclusion is that the treatment of vitrification as a process of continuously breaking ergodicity with entropy loss and a residual entropy tending to zero in the limit of zero absolute temperature is in disagreement with the absolute majority of experimental and theoretical investigations of this process and the nature of the vitreous state. This conclusion is illustrated by model computations. In addition to the main conclusion derived from these computations, they are employed as a test for several suggestions concerning the behavior of thermodynamic coefficients in the glass transition range. Further, a brief review is given on possible ways of resolving the Kauzmann paradox and its implications with respect to the validity of the third law of thermodynamics. It is shown that neither in its primary formulations nor in its consequences does the Kauzmann paradox result in contradictions with any basic laws of nature. Such contradictions are excluded by either crystallization (not associated with a pseudospinodal as suggested by Kauzmann) or a conventional (and not an ideal) glass transition. Some further so far widely unexplored directions of research on the interplay between crystallization and glass transition are anticipated, in which entropy may play—beyond the topics widely discussed and reviewed here—a major role. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)

Open Access Article Patent Keyword Extraction Algorithm Based on Distributed Representation for Patent Classification
Entropy 2018, 20(2), 104; doi:10.3390/e20020104
Received: 8 November 2017 / Revised: 24 January 2018 / Accepted: 30 January 2018 / Published: 2 February 2018
PDF Full-text (1713 KB) | HTML Full-text | XML Full-text
Abstract
Many text mining tasks such as text retrieval, text summarization, and text comparisons depend on the extraction of representative keywords from the main text. Most existing keyword extraction algorithms are based on discrete bag-of-words type of word representation of the text. In this paper, we propose a patent keyword extraction algorithm (PKEA) based on the distributed Skip-gram model for patent classification. We also develop a set of quantitative performance measures for keyword extraction evaluation based on information gain and cross-validation, based on Support Vector Machine (SVM) classification, which are valuable when human-annotated keywords are not available. We used a standard benchmark dataset and a homemade patent dataset to evaluate the performance of PKEA. Our patent dataset includes 2500 patents from five distinct technological fields related to autonomous cars (GPS systems, lidar systems, object recognition systems, radar systems, and vehicle control systems). We compared our method with Frequency, Term Frequency-Inverse Document Frequency (TF-IDF), TextRank and Rapid Automatic Keyword Extraction (RAKE). The experimental results show that our proposed algorithm provides a promising way to extract keywords from patent texts for patent classification. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
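
One of the baselines named in the abstract, TF-IDF keyword extraction, fits in a few lines; the tiny corpus below is a placeholder, not the patent dataset.

```python
# Hedged sketch: the TF-IDF baseline the paper compares against -- rank the
# terms of one document by TF-IDF weight. Toy corpus, scikit-learn >= 1.0.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "lidar sensor emits laser pulses to measure distance to objects",
    "radar system transmits radio waves and processes reflected signals",
    "gps receiver estimates vehicle position from satellite signals",
]
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(corpus)

doc = 0                                       # extract keywords of doc 0
row = tfidf[doc].toarray().ravel()
terms = np.array(vec.get_feature_names_out())
print(terms[row.argsort()[::-1][:5]])         # top-5 TF-IDF keywords
```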

Open Access Article Why Bohmian Mechanics? One- and Two-Time Position Measurements, Bell Inequalities, Philosophy, and Physics
Entropy 2018, 20(2), 105; doi:10.3390/e20020105
Received: 21 December 2017 / Revised: 21 January 2018 / Accepted: 31 January 2018 / Published: 2 February 2018
Cited by 2 | PDF Full-text (533 KB) | HTML Full-text | XML Full-text
Abstract
In Bohmian mechanics, particles follow continuous trajectories, so two-time position correlations are well defined. However, Bohmian mechanics predicts the violation of Bell inequalities. Motivated by this fact, we investigate position measurements in Bohmian mechanics by coupling the particles to macroscopic pointers. This explains how the violation of Bell inequalities is compatible with well-defined two-time position correlations. We relate this fact to so-called surrealistic trajectories that, in our model, correspond to slowly moving pointers. Next, we emphasize that Bohmian mechanics, which does not distinguish between microscopic and macroscopic systems, implies that the weirdness of quantum physics also shows up at the macro-scale. Finally, we discuss the fact that Bohmian mechanics is attractive to philosophers but not so much to physicists and argue that the Bohmian community is responsible for the latter. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Point Divergence Gain and Multidimensional Data Sequences Analysis
Entropy 2018, 20(2), 106; doi:10.3390/e20020106
Received: 27 December 2017 / Revised: 28 January 2018 / Accepted: 30 January 2018 / Published: 3 February 2018
PDF Full-text (31266 KB) | HTML Full-text | XML Full-text
Abstract
We introduce novel information-entropic variables—a Point Divergence Gain (Ω_α^(l→m)), a Point Divergence Gain Entropy (I_α), and a Point Divergence Gain Entropy Density (P_α)—which are derived from the Rényi entropy
[...] Read more.
We introduce novel information-entropic variables—a Point Divergence Gain (Ω_α^(l→m)), a Point Divergence Gain Entropy (I_α), and a Point Divergence Gain Entropy Density (P_α)—which are derived from the Rényi entropy and describe spatio-temporal changes between two consecutive discrete multidimensional distributions. The behavior of Ω_α^(l→m) is simulated for typical distributions and, together with I_α and P_α, applied in the analysis and characterization of series of multidimensional datasets of computer-based and real images. Full article
(This article belongs to the Section Information Theory)
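For orientation, the Rényi entropy underlying these variables can be sketched in a few lines; the single-pixel-change experiment below is only an illustrative stand-in for the Ω-type quantities (the exact definitions are given in the paper).
```python
# Illustrative sketch: Rényi entropy of an image histogram and its change
# when a single pixel value flips between two consecutive frames.
import numpy as np

def renyi_entropy(counts, alpha):
    p = counts[counts > 0] / counts.sum()
    if np.isclose(alpha, 1.0):                 # Shannon limit as alpha -> 1
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(0)
frame1 = rng.integers(0, 16, size=(64, 64))    # toy 4-bit image
frame2 = frame1.copy()
frame2[10, 10] = (frame2[10, 10] + 7) % 16     # one pixel changes value l -> m

h1 = np.bincount(frame1.ravel(), minlength=16).astype(float)
h2 = np.bincount(frame2.ravel(), minlength=16).astype(float)
for alpha in (0.5, 1.0, 2.0):
    print(alpha, renyi_entropy(h2, alpha) - renyi_entropy(h1, alpha))
```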
Open AccessArticle Detection of Parameter Change in Random Coefficient Integer-Valued Autoregressive Models
Entropy 2018, 20(2), 107; doi:10.3390/e20020107
Received: 5 December 2017 / Revised: 2 February 2018 / Accepted: 3 February 2018 / Published: 6 February 2018
PDF Full-text (381 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers the problem of testing for parameter change in random coefficient integer-valued autoregressive models. To overcome some size distortions of the existing estimate-based cumulative sum (CUSUM) test, we suggest an estimating-function-based test and a residual-based CUSUM test. More specifically, we employ the
[...] Read more.
This paper considers the problem of testing for parameter change in random coefficient integer-valued autoregressive models. To overcome some size distortions of the existing estimate-based cumulative sum (CUSUM) test, we suggest an estimating-function-based test and a residual-based CUSUM test. More specifically, we employ the estimating function of the conditional least squares estimator. Under the regularity conditions and the null hypothesis, we derive their limiting distributions. Simulation results demonstrate the validity of the proposed tests. A real data analysis is performed on polio incidence data. Full article
(This article belongs to the Section Statistical Mechanics)
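A residual-based CUSUM statistic is easy to sketch; the toy below uses a crude conditional-least-squares fit of a first-order count model and flags a change when the normalized partial sums of residuals are large. This is a simplified caricature under assumed models, not the paper's exact test or its limiting theory.
```python
# Simplified residual-based CUSUM sketch: fit a first-order autoregression
# to a count series by least squares, form residuals, scan their partial sums.
import numpy as np

def cusum_statistic(y):
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([y[:-1], np.ones(n - 1)])     # crude CLS fit
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    res = y[1:] - (a * y[:-1] + b)
    s = np.cumsum(res - res.mean())
    return np.max(np.abs(s)) / (res.std(ddof=1) * np.sqrt(n - 1))

rng = np.random.default_rng(1)
stable = rng.poisson(2.0, 300)
broken = np.concatenate([rng.poisson(2.0, 150), rng.poisson(4.0, 150)])
print(cusum_statistic(stable))   # small: no change
print(cusum_statistic(broken))   # large: parameter change detected
```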
Open AccessArticle Sequential Change-Point Detection via Online Convex Optimization
Entropy 2018, 20(2), 108; doi:10.3390/e20020108
Received: 1 September 2017 / Revised: 2 December 2017 / Accepted: 5 February 2018 / Published: 7 February 2018
PDF Full-text (400 KB) | HTML Full-text | XML Full-text
Abstract
Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex
[...] Read more.
Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real data examples validate our theory. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
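The core idea, accumulating likelihood ratios in which the post-change parameter is estimated only from past data, can be sketched for a Gaussian mean shift; here a running mean stands in for the online mirror descent update, and the threshold and all parameters are illustrative assumptions.
```python
# Sketch of a sequential likelihood-ratio detector with non-anticipating
# estimates (running mean for a Gaussian mean shift; the paper uses online
# mirror descent for general exponential families). Illustrative only.
import numpy as np

def detect(x, threshold, mu0=0.0):
    stats = {}                    # candidate change time -> (n, mean, loglik)
    for t, xt in enumerate(x):
        stats[t] = (0, mu0, 0.0)
        best = -np.inf
        for k, (n, mu, ll) in list(stats.items()):
            # log-likelihood ratio of N(mu,1) vs N(mu0,1) on observation xt,
            # with mu estimated from data strictly before t (non-anticipating)
            ll += (mu - mu0) * (xt - (mu + mu0) / 2.0)
            mu = (n * mu + xt) / (n + 1)       # update estimate after use
            stats[k] = (n + 1, mu, ll)
            best = max(best, ll)
        if best > threshold:
            return t               # alarm time
    return None

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 100)])
print(detect(x, threshold=10.0))   # alarms shortly after t = 200
```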
Open AccessArticle Tsallis Extended Thermodynamics Applied to 2-d Turbulence: Lévy Statistics and q-Fractional Generalized Kraichnanian Energy and Enstrophy Spectra
Entropy 2018, 20(2), 109; doi:10.3390/e20020109
Received: 2 January 2018 / Revised: 24 January 2018 / Accepted: 1 February 2018 / Published: 7 February 2018
PDF Full-text (2461 KB) | HTML Full-text | XML Full-text
Abstract
The extended thermodynamics of Tsallis is reviewed in detail and applied to turbulence. It is based on a generalization of the exponential and logarithmic functions with a parameter q. By applying this nonequilibrium thermodynamics, the Boltzmann-Gibbs thermodynamic approach of Kraichnan to 2-d
[...] Read more.
The extended thermodynamics of Tsallis is reviewed in detail and applied to turbulence. It is based on a generalization of the exponential and logarithmic functions with a parameter q. By applying this nonequilibrium thermodynamics, the Boltzmann-Gibbs thermodynamic approach of Kraichnan to 2-d turbulence is generalized. This physical modeling implies fractional calculus methods, obeying anomalous diffusion, described by Lévy statistics with q < 5/3 (subdiffusion), q = 5/3 (normal or Brownian diffusion) and q > 5/3 (superdiffusion). The generalized energy spectrum of Kraichnan, occurring at small wave numbers k, now reveals the more general and precise result k^−q. For q = 5/3 this corresponds to the Kolmogorov-Oboukov energy spectrum, and for q > 5/3 to turbulence with intermittency. The enstrophy spectrum, occurring at large wave numbers k, leads to a k^−3q power law, suggesting that large wave-number eddies are in thermodynamic equilibrium, which is characterized by q = 1, finally resulting in Kraichnan’s correct k^−3 enstrophy spectrum. The theory reveals in a natural manner a generalized temperature of turbulence, which in the non-equilibrium energy transfer domain decreases with wave number and shows an energy equipartition law with a constant generalized temperature in the equilibrium enstrophy transfer domain. The article contains numerous new results; some are stated in the form of eight new (proven) propositions. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
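The q-deformed functions at the heart of this formalism follow standard definitions and reduce to the ordinary exp and log as q approaches 1; a small sketch:
```python
# Standard Tsallis q-logarithm and q-exponential (an inverse pair on x > 0);
# they reduce to the ordinary log and exp in the limit q -> 1.
import numpy as np

def ln_q(x, q):
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # Tsallis cutoff condition
    return base ** (1.0 / (1.0 - q))

x = np.linspace(0.1, 3.0, 5)
for q in (0.5, 1.0, 5.0 / 3.0):
    assert np.allclose(exp_q(ln_q(x, q), q), x)   # verify the inverse pair
print(ln_q(x, 5.0 / 3.0))
```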
Open AccessFeature PaperArticle An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
Entropy 2018, 20(2), 110; doi:10.3390/e20020110
Received: 4 December 2017 / Revised: 16 January 2018 / Accepted: 30 January 2018 / Published: 7 February 2018
PDF Full-text (2860 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples
[...] Read more.
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems—namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of a signal corrupted by mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous correlations. Thus, the computational cost of each iteration of the Gibbs sampler is significantly reduced while ensuring good mixing properties. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
Open AccessArticle A Coding Theorem for f-Separable Distortion Measures
Entropy 2018, 20(2), 111; doi:10.3390/e20020111
Received: 10 December 2017 / Revised: 29 January 2018 / Accepted: 2 February 2018 / Published: 8 February 2018
PDF Full-text (377 KB) | HTML Full-text | XML Full-text
Abstract
In this work we relax the usual separability assumption made in rate-distortion literature and propose f-separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f-separable distortion measures is to define an n-letter distortion measure
[...] Read more.
In this work we relax the usual separability assumption made in the rate-distortion literature and propose f-separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f-separable distortion measures is to define an n-letter distortion measure as an f-mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f-separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f-separable distortion measures and the subadditive distortion measure previously proposed in the literature. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
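A worked toy may help: with Hamming single-letter distortion, the f-mean with f = exp penalizes the worst letters more than the plain average, while f = identity recovers the usual separable distortion. The choice of f and the sequences below are assumptions for the example.
```python
# f-separable n-letter distortion as the f-mean of single-letter distortions:
# d_f(x, y) = f^{-1}( (1/n) * sum_i f(d(x_i, y_i)) ), with Hamming d here.
import numpy as np

def f_separable_distortion(x, y, f, f_inv, d=lambda a, b: float(a != b)):
    per_letter = np.array([d(a, b) for a, b in zip(x, y)])
    return f_inv(np.mean(f(per_letter)))

x = [0, 1, 1, 0, 1, 0]
y = [0, 1, 0, 0, 1, 1]

print(f_separable_distortion(x, y, lambda t: t, lambda t: t))  # plain mean: 1/3
print(f_separable_distortion(x, y, np.exp, np.log))            # exp-mean > 1/3
```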
Open AccessArticle Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion
Entropy 2018, 20(2), 112; doi:10.3390/e20020112
Received: 15 November 2017 / Revised: 11 January 2018 / Accepted: 5 February 2018 / Published: 8 February 2018
PDF Full-text (1068 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, with the deepening of China’s electricity sales side reform and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and make
[...] Read more.
In recent years, with the deepening of China’s electricity sales side reform and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically remain key research topics. In this paper, we propose a novel prediction scheme based on the least-square support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast the electricity consumption (EC). Firstly, the electricity characteristics of various industries are analyzed to determine the factors that mainly affect changes in electricity consumption, such as the gross domestic product (GDP), temperature, and so on. Secondly, according to the statistics of the status quo of the small sample data, the LSSVM model is employed as the prediction model. In order to optimize the parameters of the LSSVM model, we further use the local similarity function MCC as the evaluation criterion. Thirdly, we employ the K-fold cross-validation and grid searching methods to improve the learning ability. In the experiments, we have used the EC data of Shaanxi Province in China to evaluate the proposed prediction scheme, and the results show that the proposed scheme outperforms the method based on the traditional LSSVM model. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
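The MCC score itself is compact; the sketch below (illustrative kernel width and data) shows why it can be preferred over mean squared error: a single gross outlier barely moves the correntropy score.
```python
# Maximum correntropy criterion sketch: a Gaussian-kernel similarity between
# targets and predictions that downweights outliers, unlike squared error.
import numpy as np

def correntropy(y_true, y_pred, sigma=1.0):
    e = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2)))

y_true  = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
good    = y_true + 0.1
outlier = good.copy()
outlier[2] += 50.0                        # one gross error

print(correntropy(y_true, good))          # close to 1: good fit
print(correntropy(y_true, outlier))       # drops by only one sample's worth
# In the paper's scheme, such a score is the evaluation criterion when
# grid-searching LSSVM hyperparameters under k-fold cross-validation.
```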
Open AccessFeature PaperArticle Energy from Negentropy of Non-Chaotic Systems
Entropy 2018, 20(2), 113; doi:10.3390/e20020113
Received: 15 January 2018 / Revised: 6 February 2018 / Accepted: 7 February 2018 / Published: 9 February 2018
PDF Full-text (270 KB) | HTML Full-text | XML Full-text
Abstract
The negative contribution of entropy (negentropy) of a non-chaotic system, representing its potential for work, is a source of energy that can be transferred to an internal or inserted subsystem. In this case, the system loses order and its entropy increases. The subsystem increases
[...] Read more.
The negative contribution of entropy (negentropy) of a non-chaotic system, representing its potential for work, is a source of energy that can be transferred to an internal or inserted subsystem. In this case, the system loses order and its entropy increases. The subsystem increases its energy and can perform processes that otherwise would not happen, such as, for instance, the nuclear fusion of inserted deuterons in a liquid metal matrix, among many others. The roles of positive and negative contributions of free energy and entropy are explored, along with their constraints. The energy available to an inserted subsystem during a transition from a non-equilibrium to the equilibrium chaotic state, when the interaction between particles (the elements of the system) is switched off, is evaluated. A few examples are given concerning some non-ideal systems, and a possible application to the nuclear reaction screening problem is mentioned. Full article
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)
Open AccessArticle Adaptive Waveform Design for Cognitive Radar in Multiple Targets Situation
Entropy 2018, 20(2), 114; doi:10.3390/e20020114
Received: 18 December 2017 / Revised: 22 January 2018 / Accepted: 5 February 2018 / Published: 9 February 2018
PDF Full-text (6308 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the problem of cognitive radar (CR) waveform optimization design for target detection and estimation in situations with multiple extended targets is investigated. This problem is analyzed in signal-dependent interference, as well as additive channel noise for extended targets with unknown target
[...] Read more.
In this paper, the problem of cognitive radar (CR) waveform optimization design for target detection and estimation in situations with multiple extended targets is investigated. This problem is analyzed in signal-dependent interference, as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address this problem, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo on the premise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of the TIR and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, the information-theoretic approach is also considered. In addition, the relationship between the waveforms designed under the two criteria is discussed. Unlike most existing works that only consider a single target with temporally correlated characteristics, waveform design for multiple extended targets is considered in this method. Simulation results demonstrate that, compared with the linear frequency modulated (LFM) signal, waveforms designed based on the maximum detection probability and maximum mutual information (MI) criteria can make radar echoes contain more multiple-target information and thus improve radar performance. Full article
(This article belongs to the Special Issue Radar and Information Theory)
Open AccessArticle Entropy Affects the Competition of Ordered Phases
Entropy 2018, 20(2), 115; doi:10.3390/e20020115
Received: 5 January 2018 / Revised: 2 February 2018 / Accepted: 8 February 2018 / Published: 10 February 2018
PDF Full-text (309 KB) | HTML Full-text | XML Full-text
Abstract
The effect of entropy at low noise levels is investigated in five-strategy logit-rule-driven spatial evolutionary potential games exhibiting two-fold or three-fold degenerate ground states. The non-zero elements of the payoff matrix define two subsystems which are equivalent to an Ising or a three-state Potts
[...] Read more.
The effect of entropy at low noise levels is investigated in five-strategy logit-rule-driven spatial evolutionary potential games exhibiting two-fold or three-fold degenerate ground states. The non-zero elements of the payoff matrix define two subsystems which are equivalent to an Ising or a three-state Potts model depending on whether the players are constrained to use only the first two or the last three strategies. Due to the equivalence of these models to spin systems, we can use the concepts and methods of statistical physics when studying the phase transitions. We argue that the greater entropy content of the Ising phase plays an important role in its stabilization when the magnitude of the Potts component is equal to or slightly greater than the strength of the Ising component. If the noise is increased in these systems, then the presence of the higher-entropy state can cause a kind of social dilemma in which the players’ average income is reduced in the stable Ising phase following a first-order phase transition. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
Open AccessArticle Kullback–Leibler Divergence Based Distributed Cubature Kalman Filter and Its Application in Cooperative Space Object Tracking
Entropy 2018, 20(2), 116; doi:10.3390/e20020116
Received: 17 December 2017 / Revised: 23 January 2018 / Accepted: 8 February 2018 / Published: 10 February 2018
PDF Full-text (548 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a distributed Bayesian filter design was studied for nonlinear dynamics and measurement mapping based on Kullback–Leibler divergence. In a distributed structure, the nonlinear filter becomes a challenging problem, since each sensor cannot access the global measurement likelihood function over the
[...] Read more.
In this paper, a distributed Bayesian filter design was studied for nonlinear dynamics and measurement mapping based on the Kullback–Leibler divergence. In a distributed structure, the nonlinear filter becomes a challenging problem, since each sensor cannot access the global measurement likelihood function over the whole network, and some sensors have weak observability of the state. To solve the problem in a sensor network, the distributed Bayesian filter problem was converted into an optimization problem via the maximum a posteriori method. The global cost function over the whole network was decomposed into a sum of local cost functions, where each local cost function can be solved by the corresponding sensor. With the help of the Kullback–Leibler divergence, the global estimate was approximated in each sensor by communicating with its neighbors. Based on the proposed distributed Bayesian filter structure, a distributed cubature Kalman filter (DCKF) was proposed. Finally, a cooperative space object tracking problem was studied for illustration. The simulation results demonstrated that the proposed algorithm can handle varying communication topologies and the weak observability of some sensors. Full article
(This article belongs to the Special Issue Radar and Information Theory)
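For reference, the Gaussian case of the divergence used in such fusion steps has a closed form; the helper below is a generic sketch of that standard formula, not the paper's full consensus algorithm.
```python
# KL divergence between multivariate Gaussians, KL( N(mu0,S0) || N(mu1,S1) ),
# in its standard closed form.
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    dm = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + dm @ S1_inv @ dm - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_a, S_a = np.array([0.0, 0.0]), np.eye(2)
mu_b, S_b = np.array([1.0, 0.0]), 2.0 * np.eye(2)
print(kl_gaussian(mu_a, S_a, mu_b, S_b))   # ~0.443 nats
```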
Open AccessArticle Bayesian Technique for the Selection of Probability Distributions for Frequency Analyses of Hydrometeorological Extremes
Entropy 2018, 20(2), 117; doi:10.3390/e20020117
Received: 13 November 2017 / Revised: 11 January 2018 / Accepted: 16 January 2018 / Published: 11 February 2018
Cited by 1 | PDF Full-text (1761 KB) | HTML Full-text | XML Full-text
Abstract
Frequency analysis of hydrometeorological extremes plays an important role in the design of hydraulic structures. A multitude of distributions have been employed for hydrological frequency analysis, and more than one distribution is often found to be adequate for frequency analysis. The current method
[...] Read more.
Frequency analysis of hydrometeorological extremes plays an important role in the design of hydraulic structures. A multitude of distributions have been employed for hydrological frequency analysis, and more than one distribution is often found to be adequate. Current methods for selecting the best-fitted distribution are not fully objective. Using different kinds of constraints, entropy theory was employed in this study to derive five generalized distributions for frequency analysis: the generalized gamma (GG) distribution, the generalized beta distribution of the second kind (GB2), the Halphen type A distribution (Hal-A), the Halphen type B distribution (Hal-B), and the Halphen type inverse B (Hal-IB) distribution. The Bayesian technique was employed to objectively select the optimal distribution. The method of selection was tested using simulation as well as extreme daily and hourly rainfall data from Mississippi. The results showed that the Bayesian technique was able to select the best-fitted distribution, thus providing a new way for model selection in frequency analysis of hydrometeorological extremes. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
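As a rough flavor of objective distribution selection (not the paper's Bayesian machinery), one can fit candidate families by maximum likelihood and compare BIC values as a crude approximation to the model evidence; the candidate families and sample below are arbitrary assumptions.
```python
# Illustrative model selection only: MLE fits via scipy plus BIC comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = stats.gamma(a=2.0, scale=10.0).rvs(size=200, random_state=rng)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "gumbel_r": stats.gumbel_r}

for name, dist in candidates.items():
    params = dist.fit(sample)                       # maximum likelihood fit
    loglik = np.sum(dist.logpdf(sample, *params))
    bic = len(params) * np.log(len(sample)) - 2.0 * loglik
    print(f"{name:9s} BIC = {bic:.1f}")             # smallest BIC wins
```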
Open AccessArticle Performance of Segmented Thermoelectric Cooler Micro-Elements with Different Geometric Shapes and Temperature-Dependent Properties
Entropy 2018, 20(2), 118; doi:10.3390/e20020118
Received: 31 December 2017 / Revised: 7 February 2018 / Accepted: 8 February 2018 / Published: 11 February 2018
PDF Full-text (2222 KB) | HTML Full-text | XML Full-text
Abstract
In this work, the influences of the Thomson effect and the geometry of the p-type segmented leg on the performance of a segmented thermoelectric microcooler (STEMC) were examined. The effects of geometry and the material configuration of the p-type segmented leg on the
[...] Read more.
In this work, the influences of the Thomson effect and the geometry of the p-type segmented leg on the performance of a segmented thermoelectric microcooler (STEMC) were examined. The effects of geometry and the material configuration of the p-type segmented leg on the cooling power (Q_c) and coefficient of performance (COP) were investigated. The influence of the cross-sectional area ratio of the two joined segments on the device performance was also evaluated. We analyzed a one-dimensional p-type segmented leg model composed of two different semiconductor materials, Bi2Te3 and (Bi0.5Sb0.5)2Te3. Considering the three most common p-type leg geometries, we studied both single-material systems (using the same material for both segments) and segmented systems (using different materials for each segment). The COP, Q_c and temperature profile were evaluated for each of the modeled geometric configurations under a fixed temperature difference of ΔT = 30 K. The performance of the STEMC was evaluated using two models, namely the constant-properties material (CPM) and temperature-dependent properties material (TDPM) models, considering the thermal conductivity (κ(T)), electrical conductivity (σ(T)) and Seebeck coefficient (α(T)). We considered the influence of the Thomson effect on the COP and Q_c using the TDPM model. The results revealed the optimal material configurations for use in each segment of the p-type leg. According to the proposed geometric models, the optimal leg geometry and electrical current for maximum performance were determined. After consideration of the Thomson effect, the STEMC system was found to deliver a maximum cooling power 5.10% higher than that of the single-material system. The results showed that the inverse system (where the material with the higher Seebeck coefficient is used for the first segment) delivered a higher performance than the direct system, with improvements in the COP and Q_c of 6.67% and 29.25%, respectively. Finally, analysis of the relationship between the areas of the STEMC segments demonstrated that increasing the cross-sectional area of the second segment led to improvements in the COP and Q_c of 16.67% and 8.03%, respectively. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
Open AccessArticle Exergy Analysis of the Musculoskeletal System Efficiency during Aerobic and Anaerobic Activities
Entropy 2018, 20(2), 119; doi:10.3390/e20020119
Received: 19 December 2017 / Revised: 31 January 2018 / Accepted: 9 February 2018 / Published: 11 February 2018
Cited by 1 | PDF Full-text (926 KB) | HTML Full-text | XML Full-text
Abstract
The first and second laws of thermodynamics were applied to the human body in order to evaluate the quality of the energy conversion during muscle activity. Such an implementation represents an important issue in the exergy analysis of the body, because there is
[...] Read more.
The first and second laws of thermodynamics were applied to the human body in order to evaluate the quality of the energy conversion during muscle activity. Such an implementation represents an important issue in the exergy analysis of the body, because evaluating the power performed in some activities is a recognized difficulty in the literature. Hence, to have the performed work as an input to the exergy model, two types of exercise were evaluated: weight lifting and aerobic exercise on a stationary bicycle. To this aim, we studied the aerobic and anaerobic reactions in the muscle cells, aiming at predicting the metabolic efficiency and muscle efficiency during exercise. Physiological data such as oxygen consumption, carbon dioxide production, skin and internal temperatures and performed power were measured. Results indicated that the exergy efficiency was around 4% in weight lifting, whereas it could reach values as high as 30% for aerobic exercise. It was shown that the stationary bicycle is a more adequate test for first correlations between exergy and performance indices. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Performance Evaluations on Using Entropy of Ultrasound Log-Compressed Envelope Images for Hepatic Steatosis Assessment: An In Vivo Animal Study
Entropy 2018, 20(2), 120; doi:10.3390/e20020120
Received: 8 January 2018 / Revised: 5 February 2018 / Accepted: 9 February 2018 / Published: 11 February 2018
PDF Full-text (3297 KB) | HTML Full-text | XML Full-text
Abstract
Ultrasound B-mode imaging based on log-compressed envelope data has been widely applied to examine hepatic steatosis. Modeling raw backscattered signals returned from the liver parenchyma by using statistical distributions can provide additional information to assist in hepatic steatosis diagnosis. Since raw data are
[...] Read more.
Ultrasound B-mode imaging based on log-compressed envelope data has been widely applied to examine hepatic steatosis. Modeling the raw backscattered signals returned from the liver parenchyma using statistical distributions can provide additional information to assist in hepatic steatosis diagnosis. Since raw data are not always available in modern ultrasound systems, information entropy, a widely known non-model-based approach, may allow ultrasound backscattering analysis using B-scan images for assessing hepatic steatosis. In this study, we explored the feasibility of using ultrasound entropy imaging constructed from log-compressed backscattered envelopes for assessing hepatic steatosis. Different stages of hepatic steatosis were induced in male Wistar rats fed a methionine- and choline-deficient diet for 0 (i.e., normal control), 1, 1.5, and 2 weeks (n = 48; 12 rats in each group). In vivo scanning of rat livers was performed using a commercial ultrasound machine (Model 3000, Terason, Burlington, MA, USA) equipped with a 7-MHz linear array transducer (Model 10L5, Terason) for ultrasound B-mode and entropy imaging based on uncompressed (HE image) and log-compressed envelopes (HB image), which were subsequently compared with histopathological examinations. Receiver operating characteristic (ROC) curve analysis and areas under the ROC curves (AUC) were used to assess diagnostic performance. The results showed that ultrasound entropy imaging can be used to assess hepatic steatosis. The AUCs obtained from HE imaging for diagnosing different steatosis stages were 0.93 (≥mild), 0.89 (≥moderate), and 0.89 (≥severe). HB imaging produced AUCs ranging from 0.74 (≥mild) to 0.84 (≥severe), provided that a sufficiently large number of bins was used to reconstruct the signal histogram for estimating entropy. The results indicated that entropy makes ultrasound parametric imaging based on log-compressed envelope signals possible, with great potential for diagnosing hepatic steatosis. Full article
(This article belongs to the Section Information Theory)
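The entropy-imaging step can be sketched as a sliding-window histogram entropy over a log-compressed envelope image; the window size, bin count, and Rayleigh-speckle toy image below are assumptions (the abstract notes that the bin count matters for HB imaging).
```python
# Sliding-window Shannon entropy over a log-compressed (B-scan-like) image.
import numpy as np

def entropy_image(img, win=16, bins=32):
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            counts, _ = np.histogram(img[i:i + win, j:j + win], bins=bins)
            p = counts[counts > 0] / counts.sum()
            out[i, j] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(4)
envelope = rng.rayleigh(1.0, size=(64, 64))      # toy speckle envelope
b_mode = 20.0 * np.log10(envelope + 1e-6)        # log compression
print(entropy_image(b_mode).mean())
```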
Open AccessFeature PaperArticle Stochastic Proximal Gradient Algorithms for Multi-Source Quantitative Photoacoustic Tomography
Entropy 2018, 20(2), 121; doi:10.3390/e20020121
Received: 13 December 2017 / Revised: 22 January 2018 / Accepted: 4 February 2018 / Published: 11 February 2018
PDF Full-text (821 KB) | HTML Full-text | XML Full-text
Abstract
The development of accurate and efficient image reconstruction algorithms is a central aspect of quantitative photoacoustic tomography (QPAT). In this paper, we address this issue for multi-source QPAT using the radiative transfer equation (RTE) as an accurate model for light transport. The tissue parameters
[...] Read more.
The development of accurate and efficient image reconstruction algorithms is a central aspect of quantitative photoacoustic tomography (QPAT). In this paper, we address this issue for multi-source QPAT using the radiative transfer equation (RTE) as an accurate model for light transport. The tissue parameters are jointly reconstructed from the acoustical data measured for each of the applied sources. We develop stochastic proximal gradient methods for multi-source QPAT, which are more efficient than standard proximal gradient methods, in which a single iterative update has complexity proportional to the number of applied sources. Additionally, we introduce a completely new formulation of QPAT as a multilinear (MULL) inverse problem, which avoids explicitly solving the RTE. The MULL formulation of QPAT is again addressed with stochastic proximal gradient methods. Numerical results for both approaches are presented. Besides the introduction of stochastic proximal gradient algorithms to QPAT, we consider the new MULL formulation of QPAT as the main contribution of this paper. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
Open AccessFeature PaperArticle Attraction Controls the Entropy of Fluctuations in Isosceles Triangular Networks
Entropy 2018, 20(2), 122; doi:10.3390/e20020122
Received: 26 January 2018 / Revised: 8 February 2018 / Accepted: 10 February 2018 / Published: 12 February 2018
PDF Full-text (1257 KB) | HTML Full-text | XML Full-text
Abstract
We study two-dimensional triangular-network models, which have degenerate ground states composed of straight or randomly-zigzagging stripes and thus sub-extensive residual entropy. We show that attraction is responsible for the inversion of the stable phase by changing the entropy of fluctuations around the ground-state
[...] Read more.
We study two-dimensional triangular-network models, which have degenerate ground states composed of straight or randomly-zigzagging stripes and thus sub-extensive residual entropy. We show that attraction is responsible for the inversion of the stable phase by changing the entropy of fluctuations around the ground-state configurations. By using a real-space shell-expansion method, we compute the exact expression of the entropy for harmonic interactions, while for repulsive harmonic interactions we obtain the entropy arising from a limited subset of the system by numerical integration. We compare these results with a three-dimensional triangular-network model, which shows the same attraction-mediated selection mechanism of the stable phase, and conclude that this effect is general with respect to the dimensionality of the system. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
Open AccessFeature PaperArticle Kinetic Energy of a Free Quantum Brownian Particle
Entropy 2018, 20(2), 123; doi:10.3390/e20020123
Received: 29 December 2017 / Revised: 21 January 2018 / Accepted: 9 February 2018 / Published: 12 February 2018
PDF Full-text (482 KB) | HTML Full-text | XML Full-text
Abstract
We consider a paradigmatic model of a quantum Brownian particle coupled to a thermostat consisting of harmonic oscillators. In the framework of a generalized Langevin equation, the memory (damping) kernel is assumed to be in the form of exponentially-decaying oscillations. We discuss a
[...] Read more.
We consider a paradigmatic model of a quantum Brownian particle coupled to a thermostat consisting of harmonic oscillators. In the framework of a generalized Langevin equation, the memory (damping) kernel is assumed to be in the form of exponentially-decaying oscillations. We discuss a quantum counterpart of the equipartition energy theorem for a free Brownian particle in a thermal equilibrium state. We conclude that the average kinetic energy of the Brownian particle is equal to the thermally-averaged kinetic energy per degree of freedom of the oscillators of the environment, additionally averaged over all possible oscillator frequencies distributed according to a probability density in which details of the particle-environment interaction are present via the parameters of the damping kernel. Full article
(This article belongs to the Section Statistical Mechanics)
Open AccessArticle Adaptive Synchronization of Fractional-Order Complex-Valued Neural Networks with Discrete and Distributed Delays
Entropy 2018, 20(2), 124; doi:10.3390/e20020124
Received: 25 January 2018 / Revised: 10 February 2018 / Accepted: 11 February 2018 / Published: 13 February 2018
Cited by 1 | PDF Full-text (955 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the synchronization problem of fractional-order complex-valued neural networks with discrete and distributed delays is investigated. Based on the adaptive control and Lyapunov function theory, some sufficient conditions are derived to ensure the states of two fractional-order complex-valued neural networks with
[...] Read more.
In this paper, the synchronization problem of fractional-order complex-valued neural networks with discrete and distributed delays is investigated. Based on the adaptive control and Lyapunov function theory, some sufficient conditions are derived to ensure the states of two fractional-order complex-valued neural networks with discrete and distributed delays achieve complete synchronization rapidly. Finally, numerical simulations are given to illustrate the effectiveness and feasibility of the theoretical results. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessArticle Dissolution or Growth of a Liquid Drop via Phase-Field Ternary Mixture Model Based on the Non-Random, Two-Liquid Equation
Entropy 2018, 20(2), 125; doi:10.3390/e20020125
Received: 8 January 2018 / Revised: 6 February 2018 / Accepted: 11 February 2018 / Published: 14 February 2018
PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
We simulate the diffusion-driven dissolution or growth of a single-component liquid drop embedded in a continuous phase of a binary liquid. Our theoretical approach follows a diffuse-interface model of partially miscible ternary liquid mixtures that incorporates the non-random, two-liquid (NRTL) equation as a
[...] Read more.
We simulate the diffusion-driven dissolution or growth of a single-component liquid drop embedded in a continuous phase of a binary liquid. Our theoretical approach follows a diffuse-interface model of partially miscible ternary liquid mixtures that incorporates the non-random, two-liquid (NRTL) equation as a submodel for the enthalpic (so-called excess) component of the Gibbs energy of mixing, while its nonlocal part is represented based on a square-gradient (Cahn-Hilliard-type modeling) assumption. The governing equations for this phase-field ternary mixture model are simulated in 2D, showing that, for a single-component drop embedded in a continuous phase of a binary liquid (which is highly miscible with one component of the continuous phase but essentially immiscible with the other), the size of the drop can either shrink to zero or reach a stationary value, depending on whether the global composition of the mixture is within the one-phase region or the unstable range of the phase diagram. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics of Interfaces)
Open AccessArticle Mesoscopic Moment Equations for Heat Conduction: Characteristic Features and Slow–Fast Mode Decomposition
Entropy 2018, 20(2), 126; doi:10.3390/e20020126
Received: 21 December 2017 / Revised: 30 January 2018 / Accepted: 12 February 2018 / Published: 15 February 2018
PDF Full-text (946 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we derive different systems of mesoscopic moment equations for the heat-conduction problem and analyze the basic features that they must satisfy. We discuss two- and three-equation systems, showing that the resulting mesoscopic equation from two-equation systems is of the telegraphist’s
[...] Read more.
In this work, we derive different systems of mesoscopic moment equations for the heat-conduction problem and analyze the basic features that they must satisfy. We discuss two- and three-equation systems, showing that the resulting mesoscopic equation from two-equation systems is of the telegraphist’s type and complies with the Cattaneo equation in the Extended Irreversible Thermodynamics framework. The solution of the proposed systems is analyzed, and it is shown that it accounts for two modes: a slow diffusive mode and a fast advective mode. This latter, additional mode makes them suitable for heat transfer phenomena on fast time-scales, such as high-frequency pulses and heat transfer in small-scale devices. We finally show that, if proper initial conditions are provided, the advective mode disappears, and the solution of the system tends asymptotically to the transient solution of the classical parabolic heat-conduction equation. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Application of Multiscale Entropy in Assessing Plantar Skin Blood Flow Dynamics in Diabetics with Peripheral Neuropathy
Entropy 2018, 20(2), 127; doi:10.3390/e20020127
Received: 22 January 2018 / Revised: 10 February 2018 / Accepted: 12 February 2018 / Published: 15 February 2018
PDF Full-text (5452 KB) | HTML Full-text | XML Full-text
Abstract
Diabetic foot ulcer (DFU) is a common complication of diabetes mellitus, while tissue ischemia caused by impaired vasodilatory response to plantar pressure is thought to be a major factor of the development of DFUs, which has been assessed using various measures of skin
[...] Read more.
Diabetic foot ulcer (DFU) is a common complication of diabetes mellitus, and tissue ischemia caused by an impaired vasodilatory response to plantar pressure is thought to be a major factor in the development of DFUs; it has been assessed using various measures of skin blood flow (SBF) in the time or frequency domain. These measures, however, are incapable of characterizing the nonlinear dynamics of SBF, which is an indicator of pathologic alterations of microcirculation in the diabetic foot. This study recruited 18 type 2 diabetics with peripheral neuropathy and eight healthy controls. SBF at the first metatarsal head in response to locally applied pressure and heating was measured using laser Doppler flowmetry. A multiscale entropy algorithm was utilized to quantify the degree of regularity of the SBF responses. The results showed that during reactive hyperemia and the thermally induced biphasic response, the degree of regularity of SBF in diabetics underwent only small changes compared to baseline and significantly differed from that in controls at multiple scales (p < 0.05). Moreover, the transition of the degree of regularity of SBF in diabetics distinctively differed from that in controls (p < 0.05). These findings indicate that multiscale entropy can provide a more comprehensive assessment of impaired microvascular reactivity in the diabetic foot than entropy measures based on only a single scale, which strengthens the use of plantar SBF dynamics to assess the risk for DFU. Full article
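A multiscale entropy computation has two steps, coarse-graining and sample entropy; the sketch below uses common default parameters (m = 2, tolerance r = 0.15 times the standard deviation), which are typical choices rather than the study's exact settings.
```python
# Minimal multiscale entropy: coarse-grain the series at each scale, then
# compute a (simplified) sample entropy of the coarse-grained series.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)
            c += np.sum(d <= tol) - 1            # exclude the self-match
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(5)
print(multiscale_entropy(rng.normal(size=1000)))  # white noise: decreasing curve
```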
Open AccessArticle On Points Focusing Entropy
Entropy 2018, 20(2), 128; doi:10.3390/e20020128
Received: 22 January 2018 / Revised: 12 February 2018 / Accepted: 13 February 2018 / Published: 16 February 2018
PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
In the paper, we consider local aspects of the entropy of nonautonomous dynamical systems. For this purpose, we introduce the notion of an (asymptotic) focal entropy point. The notion of entropy appeared as a result of practical needs concerning thermodynamics and the problem
[...] Read more.
In the paper, we consider local aspects of the entropy of nonautonomous dynamical systems. For this purpose, we introduce the notion of an (asymptotic) focal entropy point. The notion of entropy appeared as a result of practical needs concerning thermodynamics and the problem of information flow, and it is connected with the complexity of a system. The definition adopted in the paper specifies the notions that express the complexity of a system around certain points (the complexity of the system is the same as its complexity around these points); moreover, the complexity of a system around such points does not depend on the behavior of the system in other parts of its domain. Any periodic system “acting” in the closed unit interval has an asymptotic focal entropy point, which justifies wide interest in these issues. In the paper, we examine the problems of the distortions of a system and the approximation of an autonomous system by a nonautonomous one, in the context of having an (asymptotic) focal entropy point. It is shown that even a slight modification of a system may give rise to the respective focal entropy points. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)
Open AccessArticle Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras
Entropy 2018, 20(2), 129; doi:10.3390/e20020129
Received: 25 January 2018 / Revised: 6 February 2018 / Accepted: 9 February 2018 / Published: 16 February 2018
Cited by 1 | PDF Full-text (295 KB) | HTML Full-text | XML Full-text
Abstract
In the paper we propose, using the logical entropy function, a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. Fundamental characteristics of these quantities have been shown and subsequently, the results regarding the logical entropy have
[...] Read more.
In this paper we propose, using the logical entropy function, a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. Fundamental characteristics of these quantities are shown, and subsequently the results regarding the logical entropy are used to define the logical mutual information of experiments in the studied case. In addition, we define the logical cross entropy and logical divergence for the examined situation and prove basic properties of the suggested quantities. To illustrate the results, we provide several numerical examples. Full article
(This article belongs to the Section Information Theory)
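In its classical form, which the paper generalizes to product MV-algebras, the logical entropy of a partition with block probabilities p_i is h(P) = 1 − Σ p_i², i.e., the probability that two independent draws fall in different blocks:
```python
# Classical logical entropy of a partition: 1 - sum(p_i^2).
def logical_entropy(p):
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return 1.0 - sum(pi * pi for pi in p)

print(logical_entropy([0.5, 0.5]))     # 0.5
print(logical_entropy([0.25] * 4))     # 0.75
print(logical_entropy([1.0]))          # 0.0, no uncertainty
```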
Open AccessArticle Exergoeconomic Assessment of Solar Absorption and Absorption–Compression Hybrid Refrigeration in Building Cooling
Entropy 2018, 20(2), 130; doi:10.3390/e20020130
Received: 19 January 2018 / Revised: 12 February 2018 / Accepted: 14 February 2018 / Published: 17 February 2018
PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
The paper deals with matching solar refrigeration systems, i.e., the solar/natural gas-driven absorption chiller (SNGDAC), the solar vapor compression–absorption integrated refrigeration system with parallel configuration (SVCAIRSPC), and the solar absorption-subcooled compression hybrid cooling system (SASCHCS), to building cooling on the basis of exergoeconomics. Three types
[...] Read more.
The paper deals with matching solar refrigeration systems, i.e., the solar/natural gas-driven absorption chiller (SNGDAC), the solar vapor compression–absorption integrated refrigeration system with parallel configuration (SVCAIRSPC), and the solar absorption-subcooled compression hybrid cooling system (SASCHCS), to building cooling on the basis of exergoeconomics. Three types of building cooling are considered: type 1 is the single-story building, type 2 includes the two-story and three-story buildings, and type 3 is the multi-story building. Besides this, two Chinese cities, Guangzhou and Turpan, are taken into account as well. The product cost flow rate is employed as the primary decision variable. The results show that SNGDAC is a suitable solution for type 1 buildings in Turpan, owing to its negligible natural gas consumption and lowest product cost flow rate. SVCAIRSPC is more applicable for type 2 buildings in Turpan because of the higher actual cooling capacity of its absorption subsystem and its lower fuel and product cost flow rates. Additionally, SASCHCS shows the most extensive cost-effectiveness: its exergy destruction and product cost flow rate are both the lowest when it is used in all types of buildings in Guangzhou or in type 3 buildings in Turpan. This paper is helpful for promoting the application of solar cooling. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle On a Dynamical Approach to Some Prime Number Sequences
Entropy 2018, 20(2), 131; doi:10.3390/e20020131
Received: 8 December 2017 / Revised: 30 January 2018 / Accepted: 18 February 2018 / Published: 19 February 2018
PDF Full-text (2900 KB) | HTML Full-text | XML Full-text
Abstract
We show how the cross-disciplinary transfer of techniques from dynamical systems theory to number theory can be a fruitful avenue for research. We illustrate this idea by exploring from a nonlinear and symbolic dynamics viewpoint certain patterns emerging in some residue sequences generated
[...] Read more.
We show how the cross-disciplinary transfer of techniques from dynamical systems theory to number theory can be a fruitful avenue for research. We illustrate this idea by exploring, from a nonlinear and symbolic dynamics viewpoint, certain patterns emerging in some residue sequences generated from the prime number sequence. We show that the sequence formed by the residues of the primes modulo k is maximally chaotic and, while lacking forbidden patterns, unexpectedly displays a non-trivial spectrum of Rényi entropies which suggests that every block of size m > 1, while admissible, occurs with different probability. This non-uniform distribution of blocks for m > 1 contrasts with Dirichlet’s theorem, which guarantees equiprobability for m = 1. We then explore in a similar fashion the sequence of prime gap residues. We numerically find that this sequence is again chaotic (positivity of the Kolmogorov–Sinai entropy); however, chaos is weaker, as forbidden patterns emerge for every block of size m > 1. We relate the onset of these forbidden patterns to the divisibility properties of integers and estimate the densities of gap block residues via the Hardy–Littlewood k-tuple conjecture. We use this estimation to argue that the amount of admissible blocks is non-uniformly distributed, which supports the fact that the spectrum of Rényi entropies is again non-trivial in this case. We complete our analysis by applying the chaos game to these symbolic sequences and comparing the Iterated Function System (IFS) attractors found for the experimental sequences with appropriate null models. Full article
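The construction is easy to reproduce; the sketch below (using sympy's prime generator, with k and m chosen arbitrarily) counts blocks of residues of consecutive primes and exposes the non-uniform block frequencies discussed above.
```python
# Residues of consecutive primes modulo k, and frequencies of blocks of
# length m; single residues are near-equiprobable (Dirichlet), blocks are not.
from collections import Counter
from sympy import primerange

k, m = 6, 2
residues = [p % k for p in primerange(10, 200000)]   # skip the small primes
blocks = Counter(tuple(residues[i:i + m]) for i in range(len(residues) - m + 1))
total = sum(blocks.values())
for block, c in sorted(blocks.items()):
    print(block, round(c / total, 4))   # e.g., (1, 1) is visibly rarer than (1, 5)
```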
Open AccessArticle Uncertainty Relation Based on Wigner–Yanase–Dyson Skew Information with Quantum Memory
Entropy 2018, 20(2), 132; doi:10.3390/e20020132
Received: 2 January 2018 / Revised: 11 February 2018 / Accepted: 15 February 2018 / Published: 20 February 2018
PDF Full-text (426 KB) | HTML Full-text | XML Full-text
Abstract
We present uncertainty relations based on Wigner–Yanase–Dyson skew information with quantum memory. Uncertainty inequalities both in product and summation forms are derived. It is shown that the lower bounds contain two terms: one characterizes the degree of compatibility of two measurements, and the
[...] Read more.
We present uncertainty relations based on Wigner–Yanase–Dyson skew information with quantum memory. Uncertainty inequalities both in product and summation forms are derived. It is shown that the lower bounds contain two terms: one characterizes the degree of compatibility of two measurements, and the other is the quantum correlation between the measured system and the quantum memory. Detailed examples are given for product, separable and entangled states. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
Open AccessArticle Applying Time-Dependent Attributes to Represent Demand in Road Mass Transit Systems
Entropy 2018, 20(2), 133; doi:10.3390/e20020133
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (3944 KB) | HTML Full-text | XML Full-text
Abstract
The development of efficient mass transit systems that provide quality of service is a major challenge for modern societies. To meet this challenge, it is essential to understand user demand. This article proposes using new time-dependent attributes to represent demand, attributes that differ
[...] Read more.
The development of efficient mass transit systems that provide quality of service is a major challenge for modern societies. To meet this challenge, it is essential to understand user demand. This article proposes using new time-dependent attributes to represent demand, attributes that differ from those that have traditionally been used in the design and planning of this type of transit system. Data mining was used to obtain these new attributes; they were created using clustering techniques, and their quality evaluated with the Shannon entropy function and with neural networks. The methodology was implemented on an intercity public transport company and the results demonstrate that the attributes obtained offer a more precise understanding of demand and enable predictions to be made with acceptable precision. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
Open AccessArticle Investigating the Configurations in Cross-Shareholding: A Joint Copula-Entropy Approach
Entropy 2018, 20(2), 134; doi:10.3390/e20020134
Received: 24 December 2017 / Revised: 16 February 2018 / Accepted: 17 February 2018 / Published: 20 February 2018
PDF Full-text (877 KB) | HTML Full-text | XML Full-text
Abstract
The complex nature of the interlacement of economic actors is quite evident at the level of the Stock market, where any company may actually interact with the other companies buying and selling their shares. In this respect, the companies populating a Stock market,
[...] Read more.
The complex nature of the interlacement of economic actors is quite evident at the level of the Stock market, where any company may actually interact with the other companies buying and selling their shares. In this respect, the companies populating a Stock market, along with their connections, can be effectively modeled through a directed network, where the nodes represent the companies, and the links indicate the ownership. This paper deals with this theme and discusses the concentration of a market. A cross-shareholding matrix is considered, along with two key factors: the node out-degree distribution, which represents the diversification of investments in terms of the number of involved companies, and the node in-degree distribution, which reports the integration of a company due to the sales of its own shares to other companies. While diversification is widely explored in the literature, integration is most present in the literature on contagion. This paper captures such quantities of interest in the two frameworks and studies the stochastic dependence of diversification and integration through a copula approach. We adopt entropies as measures for assessing the concentration in the market. The main question is to assess the dependence structure leading to a better description of the data, or to market polarization (minimal entropy) or market fairness (maximal entropy). In so doing, we derive information on the way in which the in- and out-degrees should be connected in order to shape the market. The question is of interest to regulatory bodies, as witnessed by the specific alert thresholds published in the US merger guidelines for limiting the possibility of acquisitions and the prevalence of a single company on the market. Indeed, individual countries and the EU also have rules or guidelines to limit concentration, within a country or across borders, respectively. The calibration of copulas and model parameters on the basis of real data serves as an illustrative application of the theoretical proposal. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Open AccessArticle Complexity of Simple, Switched and Skipped Chaotic Maps in Finite Precision
Entropy 2018, 20(2), 135; doi:10.3390/e20020135
Received: 29 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (5247 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we investigate the degradation of the statistical properties of chaotic maps as a consequence of their implementation in digital media such as Digital Signal Processors (DSP), Field Programmable Gate Arrays (FPGA) or Application-Specific Integrated Circuits (ASIC). In these systems, binary
[...] Read more.
In this paper we investigate the degradation of the statistical properties of chaotic maps as a consequence of their implementation in digital media such as Digital Signal Processors (DSP), Field Programmable Gate Arrays (FPGA) or Application-Specific Integrated Circuits (ASIC). In these systems, binary floating- and fixed-point are the available numerical representations. Fixed-point representation is preferred over floating-point when speed, low power and/or small circuit area are necessary. Accordingly, in this paper we compare the degradation of fixed-point binary precision versions of chaotic maps with that obtained using the IEEE 754 floating-point standard, to evaluate the feasibility of their FPGA implementation. The specific period that every fixed-point precision produces was investigated in previous reports. Statistical characteristics are also relevant: it has recently been shown that it is convenient to describe them using both causal and non-causal quantifiers. In this paper we complement the period analysis by characterizing the behavior of these maps from a statistical point of view, using quantifiers from information theory. Here, rather than reproducing an exact replica of the real system, the aim is to meet certain conditions related to the statistics of the systems. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
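A quick way to see the kind of degradation being measured is to iterate a map in fixed point and look at the period it settles into. The sketch below (Python; the logistic map and the bit widths are stand-ins chosen for brevity, not necessarily the maps studied in the paper) truncates to a given number of fractional bits at each step:

```python
# Hedged illustration: iterate the logistic map x -> 4x(1-x) in B-bit
# fixed point and measure the eventual cycle length, which collapses
# far below what float64 arithmetic exhibits.
def period_fixed(bits, x0=None):
    one = 1 << bits                      # fixed-point representation of 1.0
    x = x0 if x0 is not None else one // 3
    seen = {}
    for t in range(one + 2):             # pigeonhole: a state must repeat
        if x in seen:
            return t - seen[x]           # cycle length (transient excluded)
        seen[x] = t
        x = (4 * x * (one - x)) >> bits  # 4x(1-x), truncated to 'bits' bits

for bits in (8, 12, 16, 20):
    print(f"{bits}-bit fixed point: period {period_fixed(bits)}")
```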

Open AccessArticle Robustification of a One-Dimensional Generic Sigmoidal Chaotic Map with Application of True Random Bit Generation
Entropy 2018, 20(2), 136; doi:10.3390/e20020136
Received: 23 December 2017 / Revised: 7 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (6699 KB) | HTML Full-text | XML Full-text
Abstract
The search for generation approaches to robust chaos has received considerable attention due to potential applications in cryptography or secure communications. This paper focuses on a 1-D sigmoidal chaotic map that has never been investigated in detail. It introduces a generic
[...] Read more.
The search for generation approaches to robust chaos has received considerable attention due to potential applications in cryptography or secure communications. This paper focuses on a 1-D sigmoidal chaotic map that has never been investigated in detail. It introduces a generic form of the sigmoidal chaotic map with three terms, i.e., xn+1 = ∓AfNL(Bxn) ± Cxn ± D, where A, B, C, and D are real constants. The unification of modified sigmoid and hyperbolic tangent (tanh) functions reveals the existence of a “unified sigmoidal chaotic map” that generically fulfills the three terms, with robust chaos partially appearing in some parameter ranges. A simplified generic form, i.e., xn+1 = ∓fNL(Bxn) ± Cxn, realized through various S-shaped functions, has recently opened up the possibility of linearization using (i) hardtanh and (ii) signum functions. This study finds a linearized sigmoidal chaotic map that potentially offers robust chaos over an entire range of parameters. The chaotic dynamics are described in terms of chaotic waveforms, histograms, cobweb plots, fixed points, Jacobians, and a bifurcation structure diagram based on Lyapunov exponents. As a practical example, a true random bit generator using the linearized sigmoidal chaotic map is demonstrated, and the resulting output is evaluated using the NIST SP800-22 test suite and TestU01. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
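As a hedged sketch of the linearization idea (this is the signum branch in its simplest form, with parameter values chosen here for illustration rather than taken from the paper), the piecewise-linear map x_{n+1} = C·x_n − sign(x_n) has slope C everywhere, so for 1 < C < 2 it maps [−1, 1] into itself with Lyapunov exponent ln C > 0 across that whole parameter range, and bits can be drawn by thresholding:

```python
# Sketch, not the paper's exact map: a signum-linearized sigmoidal map
# with robust chaos for 1 < C < 2 (|f'| = C > 1 everywhere, orbit
# confined to [-1, 1]); bits are extracted by thresholding at zero.
import numpy as np

def bits_from_map(c=1.9, x0=0.123456, n=10_000, burn=1_000):
    x, out = x0, []
    for i in range(n + burn):
        x = c * x - np.sign(x)
        if i >= burn:
            out.append(1 if x > 0 else 0)
    return np.array(out)

bits = bits_from_map()
# Crude balance check only; the paper evaluates its generator with the
# NIST SP800-22 suite and TestU01 instead.
print("fraction of ones:", bits.mean())
```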

Open AccessArticle Engine Load Effects on the Energy and Exergy Performance of a Medium Cycle/Organic Rankine Cycle for Exhaust Waste Heat Recovery
Entropy 2018, 20(2), 137; doi:10.3390/e20020137
Received: 10 December 2017 / Revised: 3 February 2018 / Accepted: 12 February 2018 / Published: 21 February 2018
PDF Full-text (4772 KB) | HTML Full-text | XML Full-text
Abstract
The Organic Rankine Cycle (ORC) has proved to be a promising technique for exploiting waste heat from Internal Combustion Engines (ICEs). Waste heat recovery systems have usually been designed based on engine rated working conditions, while engines often operate under part-load conditions. Hence,
[...] Read more.
The Organic Rankine Cycle (ORC) has proved to be a promising technique for exploiting waste heat from Internal Combustion Engines (ICEs). Waste heat recovery systems have usually been designed based on engine rated working conditions, while engines often operate under part-load conditions. Hence, it is important to analyze the off-design performance of ORC systems under different engine loads. This paper presents an off-design Medium Cycle/Organic Rankine Cycle (MC/ORC) system model, built by interconnecting component models, which allows the prediction of system off-design behavior. The sliding pressure control method is applied to balance the variation of system parameters, with evaporating pressure chosen as the operational variable. The effects of the operational variable and engine load on system performance are analyzed in terms of both energy and exergy. The results show that, as engine load drops, the MC/ORC system can still effectively recover waste heat, whereas the maximum net power output, thermal efficiency and exergy efficiency decrease linearly. Considering the contributions of components to total exergy destruction, the proportions of the gas-oil exchanger and turbine increase, while those of the evaporator and condenser decrease as engine load drops. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
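The exergy bookkeeping behind such an analysis fits in a few lines. The numbers below are generic placeholders (ideal-gas exhaust, constant specific heat, pressure term neglected, an assumed part-load operating point), not the paper's data:

```python
# Back-of-envelope exergy sketch for exhaust waste heat recovery.
import math

T0 = 298.15          # dead-state temperature, K
cp = 1.05            # exhaust specific heat, kJ/(kg*K), assumed constant
m_dot = 0.25         # exhaust mass flow, kg/s (assumed part-load value)
T_exh = 800.0        # exhaust temperature, K (assumed)
W_net = 12.0         # net ORC power output, kW (assumed)

# Flow exergy of the exhaust relative to the dead state:
# ex = cp*(T - T0) - T0*cp*ln(T/T0)
ex_in = m_dot * (cp * (T_exh - T0) - T0 * cp * math.log(T_exh / T0))

print(f"exhaust exergy input: {ex_in:.1f} kW")
print(f"exergy efficiency:    {W_net / ex_in:.1%}")
```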

Open AccessArticle Coarse-Graining Approaches in Univariate Multiscale Sample and Dispersion Entropy
Entropy 2018, 20(2), 138; doi:10.3390/e20020138
Received: 1 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (3483 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation of complexity in univariate signals has attracted considerable attention in recent years. This is often done using the framework of Multiscale Entropy, which entails two basic steps: coarse-graining to consider multiple temporal scales, and evaluation of irregularity for each of those
[...] Read more.
The evaluation of complexity in univariate signals has attracted considerable attention in recent years. This is often done using the framework of Multiscale Entropy, which entails two basic steps: coarse-graining to consider multiple temporal scales, and evaluation of irregularity for each of those scales with entropy estimators. Recent developments in the field have proposed modifications to this approach to facilitate the analysis of short time series. However, the role of downsampling in the classical coarse-graining process and its relationship with alternative filtering techniques has not yet been systematically explored. Here, we assess the impact of coarse-graining on multiscale entropy estimates based on both Sample Entropy and Dispersion Entropy. We compare the classical moving-average approach with low-pass Butterworth filtering, both with and without downsampling, and with empirical mode decomposition in Intrinsic Multiscale Entropy, on selected synthetic data and two real physiological datasets. The results show that, when the sampling frequency is low or high, downsampling respectively decreases or increases the entropy values. Our results suggest that, when dealing with long signals and relatively low levels of noise, the refined composite method makes little difference to the quality of the entropy estimate at the expense of considerable additional computational cost. It is also found that downsampling within the coarse-graining procedure may not be required to quantify the complexity of signals, especially short ones. Overall, we expect these results to contribute to the ongoing discussion about the development of stable, fast and noise-robust multiscale entropy techniques suited to either short or long recordings. Full article
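The two coarse-graining routes being compared can be written compactly. The following sketch (Python; the white-noise test signal, filter order and SampEn settings are assumptions for illustration) places the classical moving-average-plus-downsampling step next to a Butterworth low-pass alternative:

```python
# Minimal sketch: classical coarse-graining versus Butterworth
# low-pass filtering, each followed by downsampling, scored by a
# plain O(N^2) Sample Entropy.
import numpy as np
from scipy.signal import butter, filtfilt

def coarse_grain(x, scale):
    # Non-overlapping means of length 'scale' (moving average + downsample).
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def butterworth_lowpass(x, scale, order=6):
    # Low-pass alternative: cutoff at 1/scale of the Nyquist frequency.
    b, a = butter(order, 1.0 / scale)
    return filtfilt(b, a, x)

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x)
    r = r_factor * x.std()
    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        n = len(templ)
        return ((d <= r).sum() - n) / 2   # matching pairs, no self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
for scale in (2, 5, 10):
    cg = coarse_grain(x, scale)                    # classical MSE step
    bw = butterworth_lowpass(x, scale)[::scale]    # filter, then downsample
    print(f"scale {scale}: SampEn CG = {sample_entropy(cg):.3f}, "
          f"Butterworth = {sample_entropy(bw):.3f}")
```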

Open AccessArticle Lagrangian Function on the Finite State Space Statistical Bundle
Entropy 2018, 20(2), 139; doi:10.3390/e20020139
Received: 26 December 2017 / Revised: 21 January 2018 / Accepted: 24 January 2018 / Published: 22 February 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
The statistical bundle is the set of couples (Q, W) of a probability density Q and a random variable W such that E_Q[W] = 0. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
Open AccessArticle A Chemo-Mechanical Model of Diffusion in Reactive Systems
Entropy 2018, 20(2), 140; doi:10.3390/e20020140
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (31650 KB) | HTML Full-text | XML Full-text
Abstract
The functional properties of multi-component materials are often determined by a rearrangement of their different phases and by chemical reactions of their components. In this contribution, a material model is presented which enables computational simulations and structural optimization of solid multi-component systems. Typical
[...] Read more.
The functional properties of multi-component materials are often determined by a rearrangement of their different phases and by chemical reactions of their components. In this contribution, a material model is presented which enables computational simulations and structural optimization of solid multi-component systems. Typical systems of this kind are anodes in batteries, reactive polymer blends and propellants. The physical processes which are assumed to contribute to the microstructural evolution are: (i) particle exchange and mechanical deformation; (ii) spinodal decomposition and phase coarsening; (iii) chemical reactions between the components; and (iv) energetic forces associated with the elastic field of the solid. To illustrate the capability of the deduced coupled field model, three-dimensional Non-Uniform Rational Basis Spline (NURBS) based finite element simulations of such multi-component structures are presented. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
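Of the four ingredients, the spinodal-decomposition term (ii) is the easiest to isolate. The toy below (a 1-D finite-difference Cahn–Hilliard step with an assumed double-well energy; the paper's model is three-dimensional, NURBS-based and additionally coupled to mechanics and reactions) shows the phase-separation mechanism on its own:

```python
# 1-D Cahn-Hilliard toy: c_t = M * lap(mu), mu = f'(c) - kappa*lap(c),
# with the double well f = a*c^2*(1-c)^2. Parameters are illustrative.
import numpy as np

n, dx, dt = 128, 1.0, 0.01
M, kappa, a = 1.0, 1.0, 1.0
rng = np.random.default_rng(0)
c = 0.5 + 0.02 * rng.standard_normal(n)    # near-critical mixture

def lap(u):
    # Periodic 1-D Laplacian.
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(20_000):
    mu = 2 * a * c * (1 - c) * (1 - 2 * c) - kappa * lap(c)
    c += dt * M * lap(mu)                   # explicit Euler step

# The initially uniform mixture separates into c ~ 0 and c ~ 1 domains.
print("phase fractions:", (c < 0.5).mean(), (c > 0.5).mean())
```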

Open AccessArticle Finding a Hadamard Matrix by Simulated Quantum Annealing
Entropy 2018, 20(2), 141; doi:10.3390/e20020141
Received: 2 January 2018 / Revised: 6 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (773 KB) | HTML Full-text | XML Full-text
Abstract
Hard problems have recently become an important issue in computing. Various methods, including a heuristic approach that is inspired by physical phenomena, are being explored. In this paper, we propose the use of simulated quantum annealing (SQA) to find a Hadamard matrix, which
[...] Read more.
Hard problems have recently become an important issue in computing. Various methods, including a heuristic approach that is inspired by physical phenomena, are being explored. In this paper, we propose the use of simulated quantum annealing (SQA) to find a Hadamard matrix, which is itself a hard problem. We reformulate the problem as an energy minimization of spin vectors connected by a complete graph. The computation is conducted by path-integral Monte Carlo (PIMC) SQA of the spin-vector system, with an applied transverse magnetic field whose strength is decreased over time. In the numerical experiments, the proposed method is employed to find low-order Hadamard matrices, including ones that cannot be constructed trivially by the Sylvester method. The scaling property of the method and the measurement of residual energy after a sufficiently large number of iterations show that SQA outperforms simulated annealing (SA) in solving this hard problem. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
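For contrast with the proposed SQA, a plain simulated-annealing baseline for the same search is easy to write down (sketch only; the energy is the squared off-diagonal Gram residual, and the schedule, temperature and step budget are ad hoc choices, so the run may well fail for n = 12, which is precisely the regime where the paper argues SQA helps):

```python
# Classical SA baseline for the Hadamard search: flip single entries of
# a +-1 matrix and accept by Metropolis; E = 0 iff H is Hadamard.
import numpy as np

def energy(H):
    n = len(H)
    G = H @ H.T
    return np.sum((G - n * np.eye(n)) ** 2)

def anneal(n=12, steps=200_000, T0=60.0, seed=3):
    rng = np.random.default_rng(seed)
    H = rng.choice((-1, 1), size=(n, n))
    E = energy(H)
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3       # linear cooling schedule
        i, j = rng.integers(n, size=2)
        H[i, j] *= -1                          # propose a single-entry flip
        E_new = energy(H)
        if E_new <= E or rng.random() < np.exp((E - E_new) / T):
            E = E_new                          # accept
        else:
            H[i, j] *= -1                      # reject: undo the flip
        if E == 0:
            return H, t
    return None, steps

H, t = anneal()
print("found Hadamard matrix" if H is not None else "no solution", "after", t, "steps")
```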

Open AccessArticle A Simple and Adaptive Dispersion Regression Model for Count Data
Entropy 2018, 20(2), 142; doi:10.3390/e20020142
Received: 19 January 2018 / Revised: 14 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (409 KB) | HTML Full-text | XML Full-text
Abstract
Regression for count data is widely performed by models such as Poisson, negative binomial (NB) and zero-inflated regression. A challenge often faced by practitioners is the selection of the right model to take into account dispersion, which typically occurs in count datasets. It
[...] Read more.
Regression for count data is widely performed by models such as Poisson, negative binomial (NB) and zero-inflated regression. A challenge often faced by practitioners is the selection of the right model to take into account dispersion, which typically occurs in count datasets. It is highly desirable to have a unified model that can automatically adapt to the underlying dispersion and that can be easily implemented in practice. In this paper, a discrete Weibull regression model is shown to be able to adapt in a simple way to different types of dispersions relative to Poisson regression: overdispersion, underdispersion and covariate-specific dispersion. Maximum likelihood can be used for efficient parameter estimation. The description of the model, parameter inference and model diagnostics is accompanied by simulated and real data analyses. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
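The pmf and likelihood are simple enough to prototype directly. The sketch below (Python; no covariates, an unconstrained reparametrisation and inversion sampling are illustrative choices, not the paper's implementation) fits q and β by maximum likelihood:

```python
# Discrete Weibull sketch: P(X = x) = q**(x**beta) - q**((x+1)**beta),
# x = 0, 1, 2, ..., fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def dw_logpmf(x, q, beta):
    return np.log(q ** (x ** beta) - q ** ((x + 1) ** beta))

def neg_loglik(params, x):
    # Unconstrained reparametrisation keeps 0 < q < 1 and beta > 0.
    logit_q, log_beta = params
    q = 1 / (1 + np.exp(-logit_q))
    return -np.sum(dw_logpmf(x, q, np.exp(log_beta)))

# Simulate counts by inversion: P(X >= x) = q**(x**beta).
rng = np.random.default_rng(0)
q_true, b_true = 0.8, 1.4
u = rng.random(500)
x = np.ceil((np.log(u) / np.log(q_true)) ** (1 / b_true) - 1).astype(int)
x = np.maximum(x, 0)

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(x,), method="Nelder-Mead")
q_hat = 1 / (1 + np.exp(-res.x[0]))
print(f"q_hat = {q_hat:.3f}, beta_hat = {np.exp(res.x[1]):.3f}")
```

In the regression setting of the paper, q (and possibly β) would be linked to covariates, for instance through a logit-type link, in the same way a GLM links its mean parameter.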

Open AccessArticle Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation
Entropy 2018, 20(2), 143; doi:10.3390/e20020143
Received: 16 December 2017 / Revised: 5 February 2018 / Accepted: 12 February 2018 / Published: 23 February 2018
PDF Full-text (2857 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of the environmental randomness is modeled as Poisson
[...] Read more.
We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of the environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions of both predator and prey populations. The influences of the system parameters and of the Poisson white noise are investigated in detail on the basis of these approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the amplitude variance of the Poisson white noise, enhances the fluctuations of the prey and predator populations, while a larger value of the self-competition parameter reduces these fluctuations. Furthermore, Monte Carlo simulation results are also presented to show the effectiveness of the averaging method. Full article
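The Monte Carlo side of such a study can be sketched with an Euler scheme driven by compound-Poisson kicks. The equations and parameter values below are generic stand-ins for a delayed prey-predator model, not the paper's system:

```python
# Simplified Monte Carlo run of a delayed prey-predator model kicked
# by Poisson white noise (multiplicative jumps on the prey).
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 0.01, 200.0, 0.5
lam, amp = 2.0, 0.05           # jump arrival rate and amplitude scale
r, a, s, d, b = 1.0, 0.5, 0.1, 0.5, 0.25

n, lag = int(T / dt), int(tau / dt)
x = np.full(n, 2.0)            # prey, constant history on [-tau, 0]
y = np.full(n, 1.0)            # predator
for t in range(lag, n - 1):
    # One Poisson arrival in [t, t+dt) with probability ~ lam*dt.
    jump = rng.normal(0.0, amp) if rng.random() < lam * dt else 0.0
    x[t + 1] = x[t] + dt * x[t] * (r - a * y[t - lag] - s * x[t]) + x[t] * jump
    y[t + 1] = y[t] + dt * y[t] * (-d + b * x[t])
    x[t + 1], y[t + 1] = max(x[t + 1], 0.0), max(y[t + 1], 0.0)

# Discard the transient and summarise the stationary behaviour.
print("stationary-ish means:", x[n // 2:].mean(), y[n // 2:].mean())
```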

Open AccessArticle Group Sparse Precoding for Cloud-RAN with Multiple User Antennas
Entropy 2018, 20(2), 144; doi:10.3390/e20020144
Received: 6 November 2017 / Revised: 2 February 2018 / Accepted: 19 February 2018 / Published: 23 February 2018
PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
Cloud radio access network (C-RAN) has become a promising network architecture to support the massive data traffic in the next generation cellular networks. In a C-RAN, a massive number of low-cost remote antenna ports (RAPs) are connected to a single baseband unit (BBU)
[...] Read more.
Cloud radio access network (C-RAN) has become a promising network architecture to support the massive data traffic in the next generation cellular networks. In a C-RAN, a massive number of low-cost remote antenna ports (RAPs) are connected to a single baseband unit (BBU) pool via high-speed, low-latency fronthaul links, which enables efficient resource allocation and interference management. As the RAPs are geographically distributed, group sparse beamforming schemes have attracted extensive study, in which a subset of RAPs is assigned to be active and a high spectral efficiency can be achieved. However, most studies assume that each user is equipped with a single antenna. How to design the group sparse precoder for multiple-antenna users remains little understood, as it requires the joint optimization of the mutually coupled transmit and receive beamformers. This paper formulates an optimal joint RAP selection and precoding design problem in a C-RAN with multiple antennas at each user. Specifically, we assume a fixed transmit power constraint for each RAP, and investigate the optimal tradeoff between the sum rate and the number of active RAPs. Motivated by compressive sensing theory, this paper formulates the group sparse precoding problem by introducing the ℓ0-norm as a penalty and then uses the reweighted ℓ1 heuristic to find a solution. By adopting the idea of block diagonalization precoding, the problem can be formulated as a convex optimization, and an efficient algorithm is proposed based on its Lagrangian dual. Simulation results verify that the proposed algorithm achieves almost the same sum rate as that obtained from an exhaustive search. Full article
(This article belongs to the Section Information Theory)
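The reweighted ℓ1 heuristic for group sparsity is easy to demonstrate on a toy problem. In the sketch below (Python), a least-squares term stands in for the precoding objective and each coefficient block plays the role of one RAP's beamformer; the dimensions, data and plain proximal-gradient solver are assumptions for illustration, not the paper's algorithm:

```python
# Toy reweighted-l1 group sparsity: minimize ||y - A w||^2 + lam * sum_g
# mu_g * ||w_g||_2, re-deriving the weights mu_g from the group norms.
import numpy as np

rng = np.random.default_rng(0)
n_groups, gsize, m = 8, 4, 24              # 8 "RAPs", 4 coefficients each
A = rng.standard_normal((m, n_groups * gsize))
w_true = np.zeros(n_groups * gsize)
w_true[:2 * gsize] = rng.standard_normal(2 * gsize)   # 2 active groups
y = A @ w_true

lam, eps, step = 0.5, 1e-3, 1.0 / np.linalg.norm(A, 2) ** 2
w = np.zeros_like(w_true)
mu = np.ones(n_groups)                     # reweighting weights
for outer in range(5):                     # reweighted l1 outer loop
    for _ in range(500):                   # proximal gradient inner loop
        g = A.T @ (A @ w - y)
        v = (w - step * g).reshape(n_groups, gsize)
        norms = np.linalg.norm(v, axis=1)
        shrink = np.maximum(1 - step * lam * mu / np.maximum(norms, 1e-12), 0)
        w = (v * shrink[:, None]).ravel()  # group soft-threshold
    mu = 1.0 / (np.linalg.norm(w.reshape(n_groups, gsize), axis=1) + eps)

active = np.linalg.norm(w.reshape(n_groups, gsize), axis=1) > 1e-6
print("active groups:", np.flatnonzero(active))   # should recover {0, 1}
```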

Open AccessArticle Information Thermodynamics Derives the Entropy Current of Cell Signal Transduction as a Model of a Binary Coding System
Entropy 2018, 20(2), 145; doi:10.3390/e20020145
Received: 12 January 2018 / Revised: 7 February 2018 / Accepted: 14 February 2018 / Published: 24 February 2018
Cited by 1 | PDF Full-text (556 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of cellular signaling cascades based on information thermodynamics has recently developed considerably. A signaling cascade may be considered a binary code system consisting of two types of signaling molecules that carry biological information: phosphorylated (active) and non-phosphorylated (inactive) forms. This study
[...] Read more.
The analysis of cellular signaling cascades based on information thermodynamics has recently developed considerably. A signaling cascade may be considered a binary code system consisting of two types of signaling molecules that carry biological information: phosphorylated (active) and non-phosphorylated (inactive) forms. This study aims to evaluate the signal transduction steps in cascades from the viewpoint of changes in mixing entropy. An increase in the active form may induce biological signal transduction through a mixing-entropy change, which induces a chemical potential current in the signaling cascade. We applied the fluctuation theorem to calculate the chemical potential current and found that the average entropy production current is independent of the step position in the whole cascade. As a result, the entropy current carrying the signal transduction is determined by the entropy current mobility. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
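The mixing-entropy picture can be made concrete with two lines of arithmetic: a pool of molecules with active fraction p has mixing entropy S(p) = −k_B[p ln p + (1 − p) ln(1 − p)] per molecule, and activation shifts the chemical potential by k_B·T·ln(p/(1 − p)). The fractions used below are generic illustrations, not the paper's cascade parameters:

```python
# Mixing entropy of a binary active/inactive pool and the associated
# chemical-potential shift when the active fraction rises.
import numpy as np

kB, T = 1.380649e-23, 310.0               # J/K, body temperature

def mixing_entropy(p):
    return -kB * (p * np.log(p) + (1 - p) * np.log(1 - p))

p0, p1 = 0.05, 0.40                       # resting -> stimulated fraction
dS = mixing_entropy(p1) - mixing_entropy(p0)
dmu = kB * T * (np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0)))
print(f"entropy change per molecule: {dS:.3e} J/K")
print(f"chemical-potential shift:    {dmu:.3e} J")
```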

Open AccessFeature PaperArticle The Volume of Two-Qubit States by Information Geometry
Entropy 2018, 20(2), 146; doi:10.3390/e20020146
Received: 22 December 2017 / Revised: 20 February 2018 / Accepted: 22 February 2018 / Published: 24 February 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
Using the information geometry approach, we determine the volume of the set of two-qubit states with maximally disordered subsystems. Particular attention is devoted to the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that the
[...] Read more.
Using the information geometry approach, we determine the volume of the set of two-qubit states with maximally disordered subsystems. Particular attention is devoted to the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that the use of the classical Fisher metric on the phase-space probability representation of quantum states gives the same qualitative results as different versions of the quantum Fisher metric. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
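A Monte Carlo baseline for the "how large is the separable set" question is straightforward if one swaps the paper's information-geometric volumes for the Hilbert–Schmidt measure and uses the PPT criterion, which is exact for two qubits; the sketch below also drops the maximally-disordered-subsystem constraint, so it is an illustrative companion, not a reproduction:

```python
# Monte Carlo estimate of the separable fraction of two-qubit states
# under the Hilbert-Schmidt measure, via the PPT criterion.
import numpy as np

rng = np.random.default_rng(0)

def random_state(d=4):
    # Ginibre construction: rho = G G^dagger / tr, HS-uniform.
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def is_ppt(rho):
    # Partial transpose on the second qubit, then check positivity.
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(r).min() >= -1e-12

sep = np.mean([is_ppt(random_state()) for _ in range(20_000)])
print(f"separable fraction (HS measure): {sep:.3f}")   # roughly 0.24
```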
