Entropy doi: 10.3390/e19080426

Authors: Liang-Fang Ni, Yi Wang, Wei-Xia Li, Pei-Zhen Wang, Jia-Yan Zhang

An iterative QR-based soft feedback segment interference cancellation (QRSFSIC) detection and decoding algorithm for a Reed–Muller (RM) space-time turbo system is proposed in this paper. It forms the sufficient statistic for the minimum mean-square error (MMSE) estimate according to QR-decomposition-based soft feedback successive interference cancellation, stemming from the a priori log-likelihood ratios (LLRs) of the encoded bits. Then, the signal originating from the symbols of the reliable segment, i.e., those whose reliability metric (the a posteriori LLR of the encoded bits) exceeds a certain threshold, is iteratively cancelled with the QRSFSIC in order to obtain the residual signal for evaluating the symbols in the unreliable segment. This is repeated until the unreliable segment is empty, yielding the extrinsic information for the RM turbo-coded bits with the greatest likelihood. Bridged by de-multiplexing and multiplexing, the iterative QRSFSIC detector is concatenated with an iterative trellis-based maximum a posteriori probability RM turbo decoder, as if a principal turbo detector and decoder were embedded with a subordinate iterative QRSFSIC detector and an RM turbo decoder, each iteratively exchanging soft-decision detection and decoding information with the other. These three stages let the proposed algorithm approach the upper bound of the diversity. The simulation results also show that the proposed scheme outperforms the other suboptimum detectors considered in this paper.

Entropy doi: 10.3390/e19080425

Authors: Song Xu, Yang Li, Tingwen Huang, Rosa Chan

Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we have formulated a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify the time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected by using a group least absolute shrinkage and selection operator (LASSO) method, which can capture the sparsity within the neural system. Second, the multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm aided by mutual information is utilized to rapidly capture the time-varying characteristics from the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model in this paper outperforms the initial full model and the state-of-the-art modeling methods in tracking performance for various time-varying kernels. Analyses of experimental data show that the proposed sMGLV model can capture the timing of transient changes accurately. The proposed framework will be useful to the study of how, when, and where information transmission processes across brain regions evolve in behavior.

Entropy doi: 10.3390/e19080424

Authors: Jiarong Shi, Xiuyun Zheng, Wei Yang

Low-rank matrix factorizations such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) are a large class of methods for pursuing the low-rank approximation of a given data matrix. The conventional factorization models are based on the assumption that the data matrices are contaminated stochastically by some type of noise. Thus, point estimates of the low-rank components can be obtained by Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimation. In the past decade, a variety of probabilistic models of low-rank matrix factorizations have emerged. The most significant difference between low-rank matrix factorizations and their corresponding probabilistic models is that the latter treat the low-rank components as random variables. This paper surveys the probabilistic models of low-rank matrix factorizations. Firstly, we review some probability distributions commonly used in probabilistic models of low-rank matrix factorizations and introduce the conjugate priors of some probability distributions to simplify the Bayesian inference. Then we present the two main inference methods for probabilistic low-rank matrix factorizations, namely Gibbs sampling and variational Bayesian inference. Next, we roughly classify the important probabilistic models of low-rank matrix factorizations into several categories and review them respectively. The categories are organized by matrix factorization formulation, mainly comprising PCA, matrix factorizations, robust PCA, NMF and tensor factorizations. Finally, we discuss the research issues that need to be studied in the future.

Entropy doi: 10.3390/e19080423

Authors: Shengwei Huang, Chengzhou Li, Tianyu Tan, Peng Fu, Gang Xu, Yongping Yang

In this paper, an improved system to efficiently utilize the low-temperature waste heat from the flue gas of coal-fired power plants is proposed based on heat cascade theory. The essence of the proposed system is that the waste heat of exhausted flue gas is not only used to preheat air for assisting coal combustion as usual but also to heat up feedwater and for low-pressure steam extraction. Air preheating is performed by both the exhaust flue gas in the boiler island and the low-pressure steam extraction in the turbine island; thereby part of the flue gas heat originally exchanged in the air preheater can be saved and introduced to heat the feedwater and the high-temperature condensed water. Consequently, part of the high-pressure steam is saved for further expansion in the steam turbine, which results in additional net power output. Based on the design data of a typical 1000 MW ultra-supercritical coal-fired power plant in China, an in-depth analysis of the energy-saving characteristics of the improved waste heat utilization system (WHUS) and the conventional WHUS is conducted. When the improved WHUS is adopted in a typical 1000 MW unit, net power output increases by 19.51 MW, exergy efficiency improves to 45.46%, and net annual revenue reaches USD 4.741 million while for the conventional WHUS, these performance parameters are 5.83 MW, 44.80% and USD 1.244 million, respectively. The research described in this paper provides a feasible energy-saving option for coal-fired power plants.

Entropy doi: 10.3390/e19080419

Authors: Jiawen Deng, Juan Jaramillo, Peter Hänggi, Jiangbin Gong

The well-known Jarzynski equality, often written in the form e^{−βΔF} = ⟨e^{−βW}⟩, provides a non-equilibrium means to measure the free energy difference ΔF of a system at the same inverse temperature β based on an ensemble average of the non-equilibrium work W. The accuracy of Jarzynski’s measurement scheme is known to be determined by the variance of the exponential work, denoted var(e^{−βW}). However, it was recently found that var(e^{−βW}) can systematically diverge in both classical and quantum cases. Such divergence necessarily poses a challenge for applications of the Jarzynski equality because it may dramatically reduce the efficiency in determining ΔF. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging var(e^{−βW}). The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures and may still work efficiently subject to a diverging var(e^{−βW}). The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations; if so, there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of a time-dependent driven harmonic oscillator and provide insights into the essential performance differences between the classical and quantum Jarzynski equalities.
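
The core identity can be checked numerically on a toy model: an instantaneous stiffness quench of a classical harmonic oscillator, for which both ⟨e^{−βW}⟩ and e^{−βΔF} are known in closed form. This is a minimal sketch of the standard (undeformed) equality only, not of the paper’s deformed version; all parameter values are illustrative.

```python
import math
import random

random.seed(0)
beta = 1.0         # inverse temperature
k0, k1 = 1.0, 4.0  # oscillator stiffness before/after an instantaneous quench

# Equilibrium sampling at stiffness k0: x ~ N(0, 1/(beta*k0)).
# For an instantaneous quench the work is W = (k1 - k0) * x^2 / 2.
n = 100_000
acc = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0 / math.sqrt(beta * k0))
    w = 0.5 * (k1 - k0) * x * x
    acc += math.exp(-beta * w)
jarzynski_avg = acc / n

# Free energy difference of the harmonic oscillator:
# DeltaF = (1/(2*beta)) * ln(k1/k0), so exp(-beta*DeltaF) = sqrt(k0/k1).
exact = math.sqrt(k0 / k1)
print(jarzynski_avg, exact)  # the two values should agree closely
```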

Entropy doi: 10.3390/e19080420

Authors: Junqing Zhang, Trung Duong, Roger Woods, Alan Marshall

The security of the Internet of Things (IoT) is receiving considerable interest as the low-power and low-complexity constraints of many IoT devices limit the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption. These schemes are lightweight and readily implementable, and thus offer practical solutions for providing effective IoT wireless security. Future research directions for making IoT physical layer security more robust and pervasive are also covered.

Entropy doi: 10.3390/e19080422

Authors: Brendon Brewer

The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario.
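
In the simple case where the density can be evaluated pointwise, the entropy H = −E[log p(X)] can be estimated directly from samples, which is the baseline that the Nested Sampling approach generalizes to densities that cannot be evaluated. A minimal sketch for the analytic Gaussian example; the parameter values are illustrative, not taken from the paper.

```python
import math
import random

random.seed(1)
sigma = 2.0
n = 100_000

# Monte Carlo estimate of H = -E[log p(X)] using samples drawn from p itself.
def log_p(x):
    return -0.5 * math.log(2 * math.pi * sigma**2) - x * x / (2 * sigma**2)

h_mc = -sum(log_p(random.gauss(0.0, sigma)) for _ in range(n)) / n

# Analytic differential entropy of a Gaussian: H = (1/2) ln(2*pi*e*sigma^2).
h_exact = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
print(h_mc, h_exact)  # the estimate should be close to the analytic value
```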

Entropy doi: 10.3390/e19080421

Authors: Qing Li, Steven Liang

The periodic transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on an impulse-step dictionary and re-weighted minimization with a nonconvex Lq penalty regularization (R-WMNPLq, q = 0.5) for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Due to the physical mechanism of the periodic double impacts, comprising step-like and impulse-like impacts, an impulse-step dictionary atom can be designed to match the natural waveform structure of the vibration signals. On the other hand, traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP) and L1-norm regularization treat all vibration signal values equally and thus ignore the fact that the vibration peak values may carry more useful information about the periodic transient impulses and should be preserved with a larger weight. Therefore, penalty and smoothing parameters are introduced in the reconstruction model to guarantee a reasonable distribution of the peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves noticeably higher diagnostic accuracy compared with OMP, L1-norm regularization and the traditional spectral kurtogram (SK) method.

Entropy doi: 10.3390/e19080279

Authors: Arshad Khan, Kashif Ali Abro, Asifa Tassaddiq, Ilyas Khan

This communication addresses a comparison of two newly presented non-integer-order derivatives, with and without a singular kernel, namely the Caputo–Fabrizio (CF) derivative ∂^β/∂t^β and the Atangana–Baleanu (AB) derivative ∂^α/∂t^α. For this purpose, the flow of a second-grade fluid with combined gradients of mass concentration and temperature distribution over a vertical flat plate is considered. The problem is first written in non-dimensional form and then developed in fractional form based on the AB and CF derivatives; using the Laplace transform technique, exact solutions are established for both the AB and CF cases. They are then expressed in terms of the newly defined M-function M_q^p(z) and the generalized hypergeometric function pΨq(z). The obtained exact solutions are plotted graphically for several pertinent parameters, and an interesting comparison is made between the AB and CF derivative results, with various similarities and differences.

Entropy doi: 10.3390/e19080410

Authors: Sufen Wang, Vijay Singh

The relationship between soil water content (SWC) and vegetation, topography, and climatic conditions is critical for developing effective agricultural water management practices and improving agricultural water use efficiency in arid areas. The purpose of this study was to determine how crop cover influences the spatial and temporal variation of soil water. SWC was measured under maize and wheat for two years in northwest China. Statistical methods and entropy analysis were applied to investigate the spatio-temporal variability of SWC and the interaction between SWC and its influencing factors. The SWC variability changed within the field plot, with the standard deviation reaching a maximum value at intermediate mean SWC in different layers under various conditions (climatic conditions, soil conditions, crop type). The spatio-temporal distribution of the SWC reflects the variability of precipitation and potential evapotranspiration (ET0) under different crop covers. The mutual entropy values between SWC and precipitation were similar in both years under wheat cover but differed under maize cover. Moreover, the mutual entropy values at different depths differed under different crop covers. The entropy values changed with SWC following an exponential trend. The informational correlation coefficient (R0) between SWC and precipitation was higher than that between SWC and the other factors at different soil depths. Precipitation was the dominant factor controlling the SWC variability, and the crop coefficient was the second dominant factor. This study highlights that precipitation is a paramount factor for investigating the spatio-temporal variability of soil water content in northwest China.

Entropy doi: 10.3390/e19080418

Authors: Stefano A. Lollai

In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs) and systems theories. A measure of entropy is proposed for QMSs, including a virtual document entropy and an entropy linked to processes and organisation. QMSs are also interpreted in light of Cybernetics, and interrelations between Information Theory and quality are also highlighted. A measure for the information content of quality documents is proposed. Such parameters can be used as adequacy indices for QMSs. From the discussed approach, suggestions for organising QMSs are also derived. Further interpretive thermodynamic-based criteria for QMSs are also proposed. The work represents the first attempt to treat quality organisational systems according to a thermodynamics-related approach. At this stage, no data are available to compare statements in the paper.

Entropy doi: 10.3390/e19080416

Authors: Gokmen Demirkaya, Ricardo Padilla, Armando Fontalvo, Maree Lake, Yee Lim

This paper presents a theoretical investigation of a combined power and cooling cycle that employs an ammonia–water mixture. The cycle, known as the Goswami cycle, combines a Rankine cycle and an absorption refrigeration cycle, and can be used in a wide range of applications including recovering waste heat as a bottoming cycle or generating power from non-conventional sources like solar radiation or geothermal energy. A thermodynamic study of power and cooling co-generation is presented for heat source temperatures between 100 and 350 °C. A comprehensive analysis of the effect of several operation and configuration parameters, including the number of turbine stages and different superheating configurations, on the power output and the thermal and exergy efficiencies was conducted. Results showed that the Goswami cycle can operate at an effective exergy efficiency of 60–80% with thermal efficiencies between 25 and 31%. The investigation also showed that multiple-stage turbines performed better than single-stage turbines in terms of power and thermal and exergy efficiencies when heat source temperatures remained above 200 °C, whereas the effect of turbine stages was almost the same when heat source temperatures were below 175 °C. For multiple turbine stages, the use of partial superheating with a single or double reheat stream showed better performance in terms of efficiency. It also showed an increase in exergy destruction when the heat source temperature was increased.

Entropy doi: 10.3390/e19080417

Authors: Arthur Sousa, Hideki Takayasu, Didier Sornette, Misako Takayasu

We introduce a simple growth model in which the sizes of entities evolve as multiplicative random processes that start at different times. A novel aspect we examine is the dependence among entities. For this, we consider three classes of dependence between growth factors governing the evolution of sizes: independence, Kesten dependence and mixed dependence. We take the sum X of the sizes of the entities as the representative quantity of the system, which has the structure of a sum of product terms (Sigma-Pi), whose asymptotic distribution function has a power-law tail behavior. We present evidence that the dependence type does not alter the asymptotic power-law tail behavior, nor the value of the tail exponent. However, the structure of the large values of the sum X is found to vary with the dependence between the growth factors (and thus the entities). In particular, for the independence case, we find that the large values of X are contributed by a single maximum size entity: the asymptotic power-law tail is the result of such single contribution to the sum, with this maximum contributing entity changing stochastically with time and with realizations.
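
A minimal sketch of one such multiplicative process with Kesten-type dependence, X_{t+1} = a_t X_t + b_t with E[ln a_t] < 0, whose stationary distribution exhibits a power-law tail; the parameter values below are illustrative and not taken from the paper.

```python
import math
import random

random.seed(2)

# Kesten-type multiplicative process with additive reinjection:
#   X_{t+1} = a_t * X_t + b_t,  with E[ln a_t] < 0,
# whose stationary distribution has a power-law tail.
def kesten_sample(t_max=200):
    x = 1.0
    for _ in range(t_max):
        a = math.exp(random.gauss(-0.1, 0.5))  # random growth factor
        x = a * x + 1.0                        # constant reinjection b_t = 1
    return x

samples = sorted(kesten_sample() for _ in range(10_000))
q50 = samples[5_000]   # median
q999 = samples[9_990]  # 99.9th percentile
print(q50, q999)  # the extreme quantile far exceeds the median: heavy tail
```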

Entropy doi: 10.3390/e19080413

Authors: Adrian Arellano-Delgado, Rosa López-Gutiérrez, Miguel Murillo-Escobar, Liliana Cardoza-Avendaño, César Cruz-Hernández

In this paper, the emergence of hyperchaos in a network with two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets, menstrual cycles of women, among others. Nevertheless, the emergence of hyperchaos in this kind of real-life network has not been proven. In particular, we focus this study on the emergence of hyperchaotic dynamics, considering that these can be mainly used in engineering applications such as cryptography, secure communications, biometric systems, telemedicine, among others. In order to corroborate that the emerging dynamics are hyperchaotic, some chaos and hyperchaos verification tests are conducted. In addition, the presented hyperchaotic coupled system synchronizes, based on the proposed coupling scheme.

Entropy doi: 10.3390/e19080415

Authors: Arkadiusz Jędrzejewski, Katarzyna Sznajd-Weron

We study the q-voter model driven by stochastic noise arising from one out of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person approach with the quenched disorder and the situation approach with the annealed disorder, and investigate how these two approaches influence order–disorder phase transitions observed in the q-voter model with noise. We show that under a quenched disorder, differences between models with independence and anticonformity are weaker and only quantitative. In contrast, annealing has a much more profound impact on the system and leads to qualitative differences between models on a macroscopic level. Furthermore, only under an annealed disorder may the discontinuous phase transitions appear. It seems that freezing the agents’ behavior at the beginning of simulation—introducing quenched disorder—supports second-order phase transitions, whereas allowing agents to reverse their attitude in time—incorporating annealed disorder—supports discontinuous ones. We show that anticonformity is insensitive to the type of disorder, and in all cases it gives the same result. We precede our study with a short insight from statistical physics into annealed vs. quenched disorder and a brief review of these two approaches in models of opinion dynamics.

Entropy doi: 10.3390/e19080414

Authors: Mohammad Mehdi Rashidi, Munawwar Ali Abbas

This article describes the impact of slip conditions on nanofluid flow through a stretching sheet. Nanofluids are very helpful for enhancing the convective heat transfer in a boundary layer flow, and the Prandtl number also plays a major role in controlling the thermal and momentum boundary layers. For this purpose, we have considered a model for the effective Prandtl number, derived from experimental analysis of a nano boundary layer, for steady, two-dimensional incompressible flow through a stretching sheet. We have considered γAl2O3-H2O and Al2O3-C2H6O2 nanoparticles for the governing flow problem. An entropy generation analysis is also presented with the help of the second law of thermodynamics. A numerical technique known as the Successive Taylor Series Linearization Method (STSLM) is used to solve the governing nonlinear boundary layer equations. The numerical and graphical results are discussed for two cases, i.e., (i) with the effective Prandtl number and (ii) without the effective Prandtl number. From the graphical results, it is observed that the velocity profile and temperature profile increase in the absence of the effective Prandtl number, while both expressions become larger in the presence of the Prandtl number. Further, a numerical comparison is presented with previously published results to validate the current methodology and results.

Entropy doi: 10.3390/e19080411

Authors: Vihan Patel, Charles Lineweaver

The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists.

Entropy doi: 10.3390/e19080412

Authors: Eduard Gabriel Ceptureanu, Sebastian Ion Ceptureanu, Doina Popescu

This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate entrepreneurship as an instrument to fulfil their objectives. Our study contributes to the limited empirical research on entropy in an organizational setting by highlighting the boundary conditions of this impact through the moderating effect of firms’ organizational capabilities, and also to the development of Econophysics as a fast-growing area of interdisciplinary science.

Entropy doi: 10.3390/e19080409

Authors: Alex Dytso, Ronit Bustin, H. Poor, Shlomo Shamai (Shitz)

Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). This paper reviews several applications of the I-MMSE relationship to information theoretic problems arising in connection with multi-user channel coding. The goal of this paper is to review the different techniques used on such problems, as well as to emphasize the added-value obtained from the information-estimation point of view.
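
The scalar Gaussian channel makes the I-MMSE identity easy to verify numerically, since both the mutual information and the MMSE are available in closed form. A minimal sketch; the formulas below are the standard textbook expressions for this channel, not code from the paper.

```python
import math

# Scalar Gaussian channel Y = sqrt(snr)*X + N, with X ~ N(0,1), N ~ N(0,1):
#   I(snr)    = (1/2) * ln(1 + snr)
#   mmse(snr) = 1 / (1 + snr)
# The I-MMSE identity states dI/dsnr = mmse(snr) / 2.
def mutual_info(snr):
    return 0.5 * math.log(1.0 + snr)

def mmse(snr):
    return 1.0 / (1.0 + snr)

snr, h = 3.0, 1e-6
dI = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)  # central difference
print(dI, mmse(snr) / 2)  # numerically equal
```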

Entropy doi: 10.3390/e19080408

Authors: Luca Faes, Daniele Marinazzo, Sebastiano Stramaglia

Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone.

Entropy doi: 10.3390/e19080406

Authors: Xinghua Fang, Mingshun Song, Yizeng Chen

In statistical process control, the control chart utilizing the idea of maximum entropy distribution density level sets has been proven to perform well for monitoring quantities with multimodal distributions. However, it is too complicated to implement for quantities with unimodal distributions. This article proposes a simplified method based on maximum entropy for control chart design when the monitored quantity has a unimodal distribution. First, we use the maximum entropy distribution to approximate the unknown distribution of the monitored quantity. Then we directly take the value of the quantity as the monitoring statistic. Finally, the Lebesgue measure is applied to estimate the acceptance regions, and the one with minimum volume is chosen as the optimal in-control region of the monitored quantity. The results from two cases show that the proposed method has a higher detection capability than conventional control chart techniques when the monitored quantity has an asymmetric unimodal distribution.
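
For a unimodal distribution, the minimum-volume acceptance region at a given coverage is the highest-density region, which in one dimension is the shortest interval containing the required probability mass. A minimal empirical sketch using an exponential distribution as a stand-in for an asymmetric unimodal quantity; the distribution and coverage level are illustrative, not taken from the paper.

```python
import math
import random

random.seed(3)

# For a unimodal monitored quantity (here: exponential, rate 1), the
# acceptance region of coverage 1 - alpha with minimum Lebesgue measure
# is the shortest interval holding that probability mass.
samples = sorted(random.expovariate(1.0) for _ in range(20_000))
n, alpha = len(samples), 0.05
k = math.ceil((1 - alpha) * n)  # number of points the interval must contain

# Shortest window spanning k consecutive order statistics.
best = min(range(n - k + 1), key=lambda i: samples[i + k - 1] - samples[i])
hdr = (samples[best], samples[best + k - 1])

# Equal-tailed 95% interval for comparison.
eq = (samples[int(0.025 * n)], samples[int(0.975 * n)])
print(hdr, eq)  # the minimum-volume interval is shorter and hugs zero
```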

Entropy doi: 10.3390/e19080407

Authors: José Weberszpil, Wen Chen

In this contribution, we develop the generalized Maxwell thermodynamical relations via the metric derivative model upon the mapping to a continuous fractal space. This study also introduces total q-derivative expressions depending on two variables to describe nonextensive statistical mechanics, as well as the α-total differentiation with conformable derivatives. Some results in the literature are re-obtained, such as the physical temperature defined by Sumiyoshi Abe.

Entropy doi: 10.3390/e19080405

Authors: Yao Rong, Mengjiao Tang, Jie Zhou

One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate system or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the framework of Riemannian geometry and dual geometry, we revisit two commonly used intrinsic losses, given respectively by the squared Rao distance and the symmetrized Kullback–Leibler divergence (or Jeffreys divergence). For an exponential family endowed with the Fisher metric and α-connections, the two loss functions are uniformly described as the energy difference along an α-geodesic path, for some α ∈ {−1, 0, 1}. Subsequently, the two intrinsic losses are utilized to develop Bayesian analyses of covariance matrix estimation and range-spread target detection. We provide an intrinsically unbiased covariance estimator, which is verified to be asymptotically efficient in terms of the intrinsic mean square error. The decision rules deduced by the intrinsic Bayesian criterion provide a geometrical justification for the constant false alarm rate detector based on the generalized likelihood ratio principle.
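
The symmetrized Kullback–Leibler (Jeffreys) divergence can be illustrated in the simplest setting, univariate Gaussians, where the KL divergence has a closed form. A minimal sketch; the Gaussian family and parameter values are illustrative, not the paper’s exponential-family setup.

```python
import math

# Closed-form KL divergence N(mu0, s0^2) || N(mu1, s1^2) for univariate Gaussians.
def kl_gauss(mu0, s0, mu1, s1):
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

# Jeffreys divergence: the symmetrized sum of the two KL directions.
def jeffreys(mu0, s0, mu1, s1):
    return kl_gauss(mu0, s0, mu1, s1) + kl_gauss(mu1, s1, mu0, s0)

# Unlike plain KL, the Jeffreys divergence is symmetric in its arguments,
# which is what makes it usable as an intrinsic loss.
j_forward = jeffreys(0.0, 1.0, 2.0, 3.0)
j_reverse = jeffreys(2.0, 3.0, 0.0, 1.0)
print(j_forward, j_reverse)  # identical
```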

Entropy doi: 10.3390/e19080404

Authors: Florian Ries, Johannes Janicka, Amsini Sadiki

In the present paper, thermal transport and entropy production mechanisms in a turbulent round jet of compressed nitrogen at supercritical thermodynamic conditions are investigated using a direct numerical simulation. First, thermal transport and its contribution to the mixture formation along with the anisotropy of heat fluxes and temperature scales are examined. Secondly, the entropy production rates during thermofluid processes evolving in the supercritical flow are investigated in order to identify the causes of irreversibilities and to display advantageous locations of handling along with the process regimes favorable to mixing. Thereby, it turned out that (1) the jet disintegration process consists of four main stages under supercritical conditions (potential core, separation, pseudo-boiling, turbulent mixing), (2) causes of irreversibilities are primarily due to heat transport and thermodynamic effects rather than turbulence dynamics and (3) heat fluxes and temperature scales appear anisotropic even at the smallest scales, which implies that anisotropic thermal diffusivity models might be appropriate in the context of both Reynolds-averaged Navier–Stokes (RANS) and large eddy simulation (LES) approaches while numerically modeling supercritical fluid flows.

Entropy doi: 10.3390/e19080403

Authors: Mikhail Prokopenko

n/a

Entropy doi: 10.3390/e19080402

Authors: Reimar Leike, Torsten Enßlin

In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a loss function that quantifies how “embarrassing” it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback–Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that the elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes.
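
The argument-order point can be illustrated numerically: minimizing KL(p || q) over Gaussians q (the approximation in the second slot) yields moment matching, which for a bimodal p gives a lower loss than locking onto a single mode. A minimal sketch with an illustrative Gaussian mixture, not an example from the paper.

```python
import math

# p: equal mixture of N(-2,1) and N(2,1), approximated by a single Gaussian q.
# Minimizing KL(p || q) over Gaussians yields moment matching:
# mean 0, variance 1 + 4 = 5.
def npdf(x, mu, s):
    return math.exp(-(x - mu)**2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def p(x):
    return 0.5 * npdf(x, -2.0, 1.0) + 0.5 * npdf(x, 2.0, 1.0)

# KL(p || q) by midpoint-rule numerical integration on a truncated grid.
def kl_p_q(mu, s, lo=-12.0, hi=12.0, steps=4000):
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        px = p(x)
        total += px * math.log(px / npdf(x, mu, s)) * dx
    return total

kl_moment = kl_p_q(0.0, math.sqrt(5.0))  # moment-matched q
kl_mode = kl_p_q(2.0, 1.0)               # q locked onto one mode
print(kl_moment, kl_mode)  # moment matching gives the smaller KL(p || q)
```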

Entropy doi: 10.3390/e19080401

Authors: Andres San-Millan Daniel Feliu-Talegón Vicente Feliu-Batlle Raul Rivas-Perez

In this paper, a two-input, two-output (TITO) fractional-order mathematical model of a laboratory prototype of a hydraulic canal is proposed. This canal is made up of two pools that interact strongly with each other. The inputs of the TITO model are the pump flow and the opening of an intermediate gate, and the two outputs are the water levels in the two pools. Based on experiments carried out on the laboratory prototype, the parameters of the mathematical models have been identified. Then, considering the TITO model, a first control loop of the pump is closed to reproduce real-world conditions in which the water level of the first pool does not depend on the opening of the upstream gate, thus leading to an equivalent single-input, single-output (SISO) system. A comparison of the resulting system with the classical first-order systems typically utilized to model hydraulic canals shows that the proposed model reduces the error by about 50% and therefore captures the canal dynamics with significantly higher accuracy. This model has also been utilized to optimize the design of the controller of the canal's pump, achieving a faster response to step commands while minimizing the interaction between the two pools of the experimental platform.

Entropy doi: 10.3390/e19080397

Authors: Ji She Jianjiang Zhou Fei Wang Hailin Li

This paper presents a novel low probability of intercept (LPI) optimization framework for radar networks, minimizing the Schleher intercept factor subject to minimum mean-square error (MMSE) estimation constraints. The MMSE of the estimate of the target scatterer matrix is presented as a metric for the ability to estimate the target scattering characteristic. The LPI optimization problem, which is developed on the basis of a predetermined MMSE threshold, has two variables: the transmitted power and the target assignment index. We separate power allocation from target assignment through two sub-problems. First, the optimum power allocation is obtained for each target assignment scheme. Second, target assignment schemes are selected based on the results of the power allocation. The problem is considered for two cases: a single radar assigned to each target, and two radars assigned to each target. According to the simulation results, the proposed algorithm can effectively reduce the total Schleher intercept factor of a radar network, thereby markedly improving its LPI performance.

Entropy doi: 10.3390/e19080399

Authors: Ariadne Costa Ludmila Brochini Osame Kinouchi

Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here, we study fully-connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality (SOSC)) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and Dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches.

Entropy doi: 10.3390/e19080398

Authors: Silvio Bonometto Roberto Mainini

Strongly-Coupled Dark Energy plus Warm dark matter (SCDEW) cosmologies admit the stationary presence of ∼1% of coupled DM and DE since inflationary reheating. Coupled-DM fluctuations therefore grow up to non-linearity even during the early radiative expansion. Such early non-linear stages are modeled here through the evolution of a top-hat density enhancement, which reaches an early virial balance when the coupled-DM density contrast is just 25–26 and the DM density enhancement is ∼10% of the total density. During the time needed to settle into virial equilibrium, however, the virial balance conditions continue to change, so that “virialized” lumps undergo a complete evaporation. Here, we point out that DM particles processed by overdensities preserve a fraction of their virial momentum. Although fully non-relativistic, the resulting velocities (moderately) affect the fluctuation dynamics over greater scales entering the horizon later on.

Entropy doi: 10.3390/e19080400

Authors: Giovanni Amelino-Camelia

Over the last decade, it has been found that nonlinear laws of composition of momenta are predicted by some alternative approaches to “real” 4D quantum gravity, and by all formulations of dimensionally-reduced (3D) quantum gravity coupled to matter. The possible relevance for rather different quantum-gravity models has motivated several studies, but this interest is being tempered by concerns that a nonlinear law of addition of momenta might inevitably produce a pathological description of the total momentum of a macroscopic body. I here show that such concerns are unjustified, finding that they are rooted in a failure to appreciate the differences between two roles that laws of composition of momenta play in physics. Previous results relied exclusively on the role of a law of momentum composition in the description of spacetime locality. However, the notion of total momentum of a multi-particle system is not a manifestation of locality, but rather reflects translational invariance. By working within an illustrative example of quantum spacetime, I show explicitly that spacetime locality is indeed reflected in a nonlinear law of composition of momenta, but translational invariance still results in an undeformed linear law of addition of momenta building up the total momentum of a multi-particle system.

Entropy doi: 10.3390/e19080395

Authors: Anastasios Tsourtis Vagelis Harmandaris Dimitrios Tsagkarogiannis

We present a systematic coarse-graining (CG) strategy for many particle molecular systems based on cluster expansion techniques. We construct a hierarchy of coarse-grained Hamiltonians with interaction potentials consisting of two, three and higher body interactions. In this way, the suggested model becomes computationally tractable, since no information from long n-body (bulk) simulations is required in order to develop it, while retaining the fluctuations at the coarse-grained level. The accuracy of the derived cluster expansion based on interatomic potentials is examined over a range of various temperatures and densities and compared to direct computation of the pair potential of mean force. The comparison of the coarse-grained simulations is done on the basis of the structural properties, against detailed all-atom data. On the other hand, by construction, the approximate coarse-grained models retain, in principle, the thermodynamic properties of the atomistic model without the need for any further parameter fitting. We give specific examples for methane and ethane molecules in which the coarse-grained variable is the centre of mass of the molecule. We investigate different temperature (T) and density ( ρ ) regimes, and we examine differences between the methane and ethane systems. Results show that the cluster expansion formalism can be used in order to provide accurate effective pair and three-body CG potentials at high T and low ρ regimes. In the liquid regime, the three-body effective CG potentials give a small improvement over the typical pair CG ones; however, in order to get significantly better results, one needs to consider even higher order terms.

Entropy doi: 10.3390/e19080396

Authors: Hongliang Zhao Leihua Yao Gang Mei Tianyu Liu Yuansong Ning

Landslides are a common type of natural disaster in mountainous areas. As a result of the combined influences of geology, geomorphology and climatic conditions, the susceptibility to landslide hazards in mountainous areas shows obvious regionalism. The evaluation of regional landslide susceptibility can help reduce the risk to the lives of mountain residents. In this paper, Shannon entropy theory, a fuzzy comprehensive method and an analytic hierarchy process (AHP) have been used to build a weighting scheme for landslide susceptibility evaluation modeling that combines subjective and objective weights. Further, based on a single-factor sensitivity analysis, we established a strict criterion for landslide susceptibility assessments. Eight influencing factors have been selected for the study of Zhen’an County, Shaanxi Province: the lithology, relief amplitude, slope, aspect, slope morphology, altitude, annual mean rainfall and distance to the river. In order to verify the advantages of the proposed method, the landslide index, the prediction accuracy P, the R-index and the area under the curve were used in this paper. The results show that the proposed model of landslide hazard susceptibility can help to produce more objective and accurate landslide susceptibility maps, which not only take advantage of the information in the original data, but also reflect experts’ knowledge and the opinions of decision-makers.
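The objective-weight side of such a scheme, the Shannon entropy weighting step, can be sketched as follows. This is only an illustration of the standard entropy-weight computation, not the authors' exact procedure, and the decision matrix of factor scores is hypothetical:

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights via the Shannon entropy weight method.

    matrix[i][j]: non-negative score of alternative (map unit) i on criterion j.
    Criteria whose scores vary more across alternatives carry more information
    (lower entropy) and therefore receive larger weights.
    """
    m, n = len(matrix), len(matrix[0])
    entropies = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        probs = [x / total for x in col]
        # normalized Shannon entropy of the column, in [0, 1]
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        entropies.append(h)
    d = [1 - h for h in entropies]          # degree of diversification
    s = sum(d)
    return [dj / s for dj in d]

# hypothetical scores: 4 map units x 3 influencing factors
scores = [[0.9, 0.2, 0.5],
          [0.1, 0.3, 0.5],
          [0.8, 0.2, 0.5],
          [0.2, 0.3, 0.5]]
w = entropy_weights(scores)
```

In this toy matrix the third factor is identical for every unit, so it receives (near-)zero weight, while the first, most dispersed factor dominates; combining such objective weights with AHP-derived subjective weights is the kind of integration the abstract describes.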

Entropy doi: 10.3390/e19080376

Authors: Ameneh Arjmandzadeh Majid Yarahmadi

In this paper, a new method is designed for controlling a quantum ensemble whose members have uncertainties in their Hamiltonian parameters. By combining sampling-based learning control (SLC) and a new quantum genetic algorithm (QGA), control of an ensemble of two-level quantum systems with Hamiltonian uncertainties is achieved. To simultaneously transfer the ensemble members to a desired state, an SLC algorithm is designed. To reduce the transfer error significantly, an optimization problem is defined; considering the advantages of the QGA and the nature of the problem, the optimization problem is solved using the QGA method. For this purpose, N samples are generated by sampling the uncertainty parameters via a uniform distribution, and an augmented system is created. By using the QGA in the training step, the best control signal is obtained. To test the performance and validity of the method, the obtained control is applied to some randomly selected samples. A couple of examples are simulated to investigate the proposed model. The simulation results indicate the effectiveness and the advantages of the proposed method.

Entropy doi: 10.3390/e19080394

Authors: Per Lundqvist Henrik Öhman

Analysis of the global energy efficiency of thermal systems is of practical importance for a number of reasons. Cycles and processes used in thermal systems exist in very different configurations, making comparison difficult if specific models are required to analyze specific thermal systems. Thermal systems with small temperature differences between the hot side and the cold side also suffer from difficulties due to heat transfer pinch point effects. Such pinch points are consequences of thermal system design and must therefore be integrated into the global evaluation. In optimizing thermal systems, detailed entropy generation analysis is suitable for identifying performance losses caused by cycle components. In plant analysis, a similar logic applies, with the difference that the thermal system is then only a component, often industrially standardized. This article presents how a thermodynamic “black box” method for defining and comparing the thermal efficiency of heat engines of different sizes and types can be extended to also compare heat pumps of different apparent magnitudes and types. The impact of a non-linear boundary condition on reversible thermal efficiency is exemplified, and a correlation of average real heat engine efficiencies is discussed in the light of linear and non-linear boundary conditions.

Entropy doi: 10.3390/e19080390

Authors: Antonio M. Lopes Jose Tenreiro Machado

Geophysical time series have a complex nature that poses challenges to reaching assertive conclusions, and require advanced mathematical and computational tools to unravel embedded information. In this paper, time–frequency methods and hierarchical clustering (HC) techniques are combined for processing and visualizing tidal information. In a first phase, the raw data are pre-processed for estimating missing values and obtaining dimensionless reliable time series. In a second phase, the Jensen–Shannon divergence is adopted for measuring dissimilarities between data collected at several stations. The signals are compared in the frequency and time–frequency domains, and the HC is applied to visualize hidden relationships. In a third phase, the long-range behavior of tides is studied by means of power law functions. Numerical examples demonstrate the effectiveness of the approach when dealing with a large volume of real-world data.
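The Jensen–Shannon divergence adopted as the dissimilarity measure is standard and easy to reproduce; the sketch below (the "station spectra" are invented toy distributions, not the paper's tidal data) shows how pairwise dissimilarities of this kind can be computed before feeding them to hierarchical clustering:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits for discrete distributions."""
    return sum(pi * math.log(pi / qi, 2) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric, finite, bounded by 1 bit."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# hypothetical normalized amplitude spectra of two tidal stations
p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
d = jsd(p, q)
```

Unlike the raw KL divergence, the JSD is symmetric and bounded, which makes it directly usable as a distance-like input for clustering algorithms.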

Entropy doi: 10.3390/e19080392

Authors: Vincenzo F. Cardone Ninfa Radicella Antonio Troisi

We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameter space that is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check the models’ viability by investigating their thermodynamical quantities. In particular, we study whether each cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way both to constrain dark energy models and to differentiate among rival scenarios.

Entropy doi: 10.3390/e19080393

Authors: Bang Chul Jung Su Min Kim Won-Yong Shin Hyun Jong Yang

We introduce a distributed protocol to achieve multiuser diversity in a multicell multiple-input multiple-output (MIMO) uplink network, referred to as a MIMO interfering multiple-access channel (IMAC). Assuming no information exchange among base stations (BSs) and only local channel state information at the transmitters for the MIMO IMAC, we propose a joint beamforming and user scheduling protocol, and then show that the proposed protocol can achieve the optimal multiuser diversity gain, i.e., KM log(SNR log N), as long as the number of mobile stations (MSs) in a cell, N, scales faster than SNR^{(KM−L)(1−ϵ)} for a small constant ϵ > 0, where M, L, K, and SNR denote the number of receive antennas at each BS, the number of transmit antennas at each MS, the number of cells, and the signal-to-noise ratio, respectively. Our result indicates that multiuser diversity can be achieved in the presence of intra-cell and inter-cell interference even in a distributed fashion. As a result, vital information on how to design distributed algorithms in interference-limited cellular environments is provided.

Entropy doi: 10.3390/e19080391

Authors: Alexander Terenin David Draper

In a given problem, the Bayesian statistical paradigm requires the specification of a prior distribution that quantifies relevant information about the unknowns of main interest external to the data. In cases where little such information is available, the problem under study may possess an invariance under a transformation group that encodes a lack of information, leading to a unique prior—this idea was explored at length by E.T. Jaynes. Previous successful examples have included location-scale invariance under linear transformation, multiplicative invariance of the rate at which events in a counting process are observed, and the derivation of the Haldane prior for a Bernoulli success probability. In this paper we show that this method can be extended, by generalizing Jaynes, in two ways: (1) to yield families of approximately invariant priors; and (2) to the infinite-dimensional setting, yielding families of priors on spaces of distribution functions. Our results can be used to describe conditions under which a particular Dirichlet Process posterior arises from an optimal Bayesian analysis, in the sense that invariances in the prior and likelihood lead to one and only one posterior distribution.

Entropy doi: 10.3390/e19080388

Authors: Sangyeol Lee Minjo Kim

This study considers the goodness of fit test for a class of conditionally heteroscedastic location-scale time series models. For this task, we develop an entropy-type goodness of fit test based on residuals. To examine the asymptotic behavior of the test, we first investigate the asymptotic property of the residual empirical process and then derive the limiting null distribution of the entropy test.

Entropy doi: 10.3390/e19080389

Authors: Jun Chen Da-ke He Ashish Jagmohan Luis Lastras-Montaño

The reliability function of variable-rate Slepian-Wolf coding is linked to the reliability function of channel coding with constant composition codes, through which computable lower and upper bounds are derived. The bounds coincide at rates close to the Slepian-Wolf limit, yielding a complete characterization of the reliability function in that rate region. It is shown that variable-rate Slepian-Wolf codes can significantly outperform fixed-rate Slepian-Wolf codes in terms of rate-error tradeoff. Variable-rate Slepian-Wolf coding with rate below the Slepian-Wolf limit is also analyzed. In sharp contrast with fixed-rate Slepian-Wolf codes for which the correct decoding probability decays to zero exponentially fast if the rate is below the Slepian-Wolf limit, the correct decoding probability of variable-rate Slepian-Wolf codes can be bounded away from zero.

Entropy doi: 10.3390/e19080386

Authors: Iosif Gershman Eugeniy Gershman Alexander Mironov German Fox-Rabinovich Stephen Veldhuis

The study deals with the arc resistance of composite Cu–Cr system materials of various compositions. The microstructure of materials exposed to an electric arc was investigated. Despite varying initial chromium contents, the same structure was formed in the arc exposure zones of all the tested materials.

Entropy doi: 10.3390/e19080385

Authors: Jianfeng Hu Ping Wang

Driver fatigue is an important factor in traffic accidents, and the development of a detection system for driver fatigue is of great significance. To estimate and prevent driver fatigue, various classifiers based on electroencephalogram (EEG) signals have been developed; however, as EEG signals have inherent non-stationary characteristics, their detection performance is often deteriorated by background noise. To investigate the effects of noise on detection performance, simulated Gaussian noise, spike noise, and electromyogram (EMG) noise were added to a raw EEG signal. Four types of entropies, including sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as feature sets. Three base classifiers (K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT)) and two ensemble methods (Bootstrap Aggregating (Bagging) and Boosting) were employed and compared. The results showed that: (1) the simulated Gaussian noise and EMG noise had an impact on accuracy, while the simulated spike noise did not, which is of great significance for the future application of driver fatigue detection; (2) the effect of noise differed among classifiers; for example, DT was the most robust and SVM the least robust; (3) the effect of noise also differed among feature sets, with FE and the combined feature set being the most robust; and (4) while the Bagging method could not significantly improve performance against noise addition, the Boosting method may significantly improve performance against superimposed Gaussian and EMG noise. The entropy feature extraction method can not only identify driver fatigue, but also effectively resist noise, which is of great significance for future applications of an EEG-based driver fatigue detection system.
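As an illustration of one of the feature types above, a minimal plug-in implementation of sample entropy (SE) is sketched below. The series are synthetic stand-ins for EEG, and the parameter choices (m = 2, r = 0.2·SD) are conventional defaults rather than the paper's settings:

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series (simple O(N^2) variant)."""
    if r is None:
        mean = sum(x) / len(x)
        sd = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
        r = 0.2 * sd                       # conventional tolerance
    def matches(mm):
        # count template pairs of length mm within Chebyshev distance r
        n = len(x) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(300)]       # irregular series
regular = [math.sin(0.2 * i) for i in range(300)]      # predictable series

se_noise = sample_entropy(noise)
se_regular = sample_entropy(regular)
```

Higher values indicate less regularity, which is why an alert-versus-fatigued contrast (and the corruption of that contrast by noise) shows up in entropy-based feature sets.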

Entropy doi: 10.3390/e19080387

Authors: Vygintas Gontis Aleksejus Kononovicius

The origin of the long-range memory in non-equilibrium systems is still an open problem as the phenomenon can be reproduced using models based on Markov processes. In these cases, the notion of spurious memory is introduced. A good example of Markov processes with spurious memory is a stochastic process driven by a non-linear stochastic differential equation (SDE). This example is at odds with models built using fractional Brownian motion (fBm). We analyze the differences between these two cases seeking to establish possible empirical tests of the origin of the observed long-range memory. We investigate probability density functions (PDFs) of burst and inter-burst duration in numerically-obtained time series and compare with the results of fBm. Our analysis confirms that the characteristic feature of the processes described by a one-dimensional SDE is the power-law exponent 3 / 2 of the burst or inter-burst duration PDF. This property of stochastic processes might be used to detect spurious memory in various non-equilibrium systems, where observed macroscopic behavior can be derived from the imitative interactions of agents.

Entropy doi: 10.3390/e19080384

Authors: Ying Liu Liang Li George Alexandropoulos Marius Pesavento

We apply the concept of artificial and controlled interference in a two-hop relay network with an untrusted relay, aiming at enhancing the wireless communication secrecy between the source and the destination node. In order to shield the square quadrature amplitude-modulated (QAM) signals transmitted from the source node to the relay, the destination node designs and transmits artificial noise (AN) symbols to jam the relay reception. The objective of our considered AN design is to degrade the error probability performance at the untrusted relay, for different types of channel state information (CSI) at the destination. By considering perfect knowledge of the instantaneous CSI of the source-to-relay and relay-to-destination links, we first present an analytical expression for the symbol error rate (SER) performance at the relay. Based on the assumption of an average power constraint at the destination node, we then derive the optimal phase and power distribution of the AN that maximizes the SER at the relay. Furthermore, we obtain the optimal AN design for the case where only statistical CSI is available at the destination node. For both cases, our study reveals that the Gaussian distribution is generally not optimal to generate AN symbols. The presented AN design takes into account practical parameters for the communication links, such as QAM signaling and maximum likelihood decoding.

Entropy doi: 10.3390/e19080383

Authors: Simone Benella Giuseppe Consolini Fabio Giannattasio Tom Chang Marius Echim

Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of a local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of the simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by the Rank-Ordered Multifractal Analysis (ROMA).
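A minimal sketch of a sandpile automaton on a Newman–Watts network is given below. The toppling rule (threshold equal to node degree, bulk dissipation with probability `eps`) and all parameter values are illustrative assumptions, not the model specification used in the paper:

```python
import random

def newman_watts(n, k=2, p=0.1, rng=None):
    """Ring lattice (k neighbours per side) plus random long-range
    shortcuts, each node sprouting one with probability p."""
    rng = rng or random
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for i in range(n):
        if rng.random() < p:
            j = rng.randrange(n)
            if j != i:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def sandpile(adj, grains=2000, eps=0.05, rng=None):
    """Drive the pile one grain at a time; a node topples when its load
    reaches its degree, sending one grain per link; each transported grain
    is dissipated with probability eps. Returns avalanche sizes."""
    rng = rng or random
    load = {i: 0 for i in adj}
    sizes = []
    for _ in range(grains):
        load[rng.randrange(len(adj))] += 1
        size = 0
        unstable = [i for i in adj if load[i] >= len(adj[i])]
        while unstable and size < 10 ** 6:   # hard cap as a safety guard
            i = unstable.pop()
            if load[i] < len(adj[i]):
                continue
            load[i] -= len(adj[i])
            size += 1
            for j in adj[i]:
                if rng.random() > eps:       # grain survives transport
                    load[j] += 1
                    if load[j] >= len(adj[j]):
                        unstable.append(j)
            if load[i] >= len(adj[i]):
                unstable.append(i)
        if size:
            sizes.append(size)
    return sizes

rng = random.Random(1)
net = newman_watts(200, p=0.1, rng=rng)
sizes = sandpile(net, rng=rng)
```

Sweeping the shortcut probability `p` and histogramming `sizes` is the kind of experiment in which a crossover in the avalanche statistics, as a function of the fraction of long-range links, can be observed.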

Entropy doi: 10.3390/e19080382

Authors: Yi Zuo Yuya Kajikawa

In most supply chains (SCs), transaction relationships between suppliers and customers are commonly considered from a linear perspective. However, this traditional linear concept of an SC is egocentric and oversimplified and does not sufficiently reflect the complex and cyclical structure of supplier-customer relationships in current economic and industrial situations. The interactional relationships and topological characteristics between suppliers and customers should be analyzed using supply networks (SNs) rather than traditional linear SCs. Therefore, this paper reconceptualizes SCs as SNs in complex adaptive systems (CAS), and presents three main contributions. First, we propose an integrated CAS network framework by synthesizing multi-level network analysis from the network, community and vertex perspectives. The CAS perspective enables us to understand the advances of SN properties. Second, in order to emphasize the CAS properties of SNs, we constructed a real-world SN based on Japanese industry data and carried out an advanced investigation of SN theory. The CAS properties help in enriching SN theory, which can benefit SN management, community economics and industrial resilience. Third, we propose a quantitative metric of entropy to measure the complexity and robustness of SNs. The results not only support a specific understanding of the structural outcomes relevant to SNs, but also deliver efficient and effective support for the management and design of SNs.

Entropy doi: 10.3390/e19080381

Authors: Gregor Chliamovitch Orestis Malaspinas Bastien Chopard

In a recent paper (Chliamovitch, et al., 2015), we suggested using the principle of maximum entropy to generalize Boltzmann’s Stosszahlansatz to higher-order distribution functions. This conceptual shift of focus allowed us to derive an analog of the Boltzmann equation for the two-particle distribution function. While we only briefly mentioned there the possibility of a hydrodynamical treatment, we complete here a crucial step towards this program. We discuss bilocal collisional invariants, from which we deduce the two-particle stationary distribution. This allows for the existence of equilibrium states in which the momenta of particles are correlated, as well as for the existence of a fourth conserved quantity besides mass, momentum and kinetic energy.

Entropy doi: 10.3390/e19070379

Authors: Paolo Muratore-Ginanneschi Kay Schwieger

We present a stylized model of controlled equilibration of a small system in a fluctuating environment. We derive the optimal control equations steering in finite-time the system between two equilibrium states. The corresponding thermodynamic transition is optimal in the sense that it occurs at minimum entropy if the set of admissible controls is restricted by certain bounds on the time derivatives of the protocols. We apply our equations to the engineered equilibration of an optical trap considered in a recent proof of principle experiment. We also analyze an elementary model of nucleation previously considered by Landauer to discuss the thermodynamic cost of one bit of information erasure. We expect our model to be a useful benchmark for experiment design as it exhibits the same integrability properties of well-known models of optimal mass transport by a compressible velocity field.

Entropy doi: 10.3390/e19070378

Authors: Jonathan Shimonovich Anelia Somekh-Baruch Shlomo Shamai (Shitz)

In this work, we investigate a three-user cognitive communication network where a primary two-user multiple access channel suffers interference from a secondary point-to-point channel sharing the same medium. While the point-to-point channel transmitter—transmitter 3—causes interference at the primary multiple access channel receiver, we assume that the primary channel transmitters—transmitters 1 and 2—do not cause any interference at the point-to-point receiver. It is assumed that one of the multiple access channel transmitters has cognitive capabilities and cribs causally from the other multiple access channel transmitter. Furthermore, we assume that the cognitive transmitter knows the message of transmitter 3 in a non-causal manner, thus introducing the three-user multiple access cognitive Z-interference channel. We obtain inner and outer bounds on the capacity region of this channel for both causal and strictly causal cribbing cognitive encoders. We further investigate different variations and aspects of the channel, referring to some previously studied cases. Attempting to better characterize the capacity region, we look at the vertex points of the capacity region where each one of the transmitters tries to achieve its maximal rate. Moreover, we find the capacity region of a special case of a certain kind of more-capable multiple access cognitive Z-interference channels. In addition, we study the case of full unidirectional cooperation between the two multiple access channel encoders. Finally, since direct cribbing allows full cognition in the case of continuous input alphabets, we study the case of partial cribbing, i.e., when the cribbing is performed via a deterministic function.

Entropy doi: 10.3390/e19070380

Authors: Xiaoli Li Chengwei Li

The noise in near-infrared spectra and spectral information redundancy can affect the accuracy of calibration and prediction models in near-infrared analytical technology. To address this problem, the improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and permutation entropy (PE) were used to propose a new method for the pretreatment and wavelength selection of near-infrared spectral signals. The near-infrared spectra of a glucose solution were used as the research object: the improved CEEMDAN energy entropy was used to reconstruct the spectral data for noise removal, and the useful wavelengths were selected based on PE after spectral segmentation. First, the intrinsic mode functions of the original spectra were obtained by the improved CEEMDAN algorithm. The useful signal modes and noisy signal modes were then identified by the energy entropy, and the reconstructed spectral signal is the sum of the useful signal modes. Finally, the reconstructed spectra were segmented, and the wavelengths with abundant glucose information were selected based on PE. To evaluate the performance of the proposed method, support vector regression and partial least squares regression were used to build calibration models using the wavelengths selected by the new method, mutual information, the successive projection algorithm, principal component analysis, and the full spectral data. The models were evaluated by the correlation coefficient and the root mean square error of prediction. The experimental results showed that the improved CEEMDAN energy entropy can effectively reconstruct near-infrared spectral signals and that PE can effectively solve the wavelength selection problem. Therefore, the proposed method can improve the precision of spectral analysis and the stability of models for near-infrared spectral analysis.
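The permutation entropy used for segment scoring has a standard definition based on ordinal-pattern frequencies; a minimal sketch follows (the series and parameter values are illustrative, not the paper's spectra):

```python
import math
import random

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a series via ordinal-pattern frequencies."""
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + k * delay] for k in range(order))
        # the ordinal pattern is the argsort of the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum(c / n * math.log(c / n, 2) for c in counts.values())
    return h / math.log(math.factorial(order), 2) if normalize else h

monotonic = list(range(50))      # a single ordinal pattern -> entropy 0
random.seed(0)
noisy = [random.random() for _ in range(500)]   # near-uniform patterns
```

Segments of the spectrum whose PE indicates structured (low-entropy) variation rather than noise-like behavior are the natural candidates for wavelength selection.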

Entropy doi: 10.3390/e19070377

Authors: Elgiz Baskaya Guven Komurgoz Ibrahim Ozkol

The dispersion of super-paramagnetic nanoparticles in nonmagnetic carrier fluids, known as ferrofluids, offers the advantages of tunable thermo-physical properties and eliminates the need for moving parts to induce flow. This study investigates ferrofluid flow characteristics in an inclined channel under an inclined magnetic field and a constant pressure gradient. The ferrofluid considered in this work is comprised of Cu nanoparticles with water as the base fluid. The governing differential equations, including viscous dissipation, are non-dimensionalised and discretized with the Generalized Differential Quadrature Method. The resulting algebraic set of equations is solved via the Newton-Raphson Method. This work contributes to the literature by examining the effects of the magnetic field angle and the channel inclination separately on the entropy generation of the ferrofluid-filled inclined channel; entropy generation minimization is implemented in order to obtain the best design parameter values. Furthermore, the effects of the magnetic field, the inclination angle of the channel and the volume fraction of the nanoparticles on the velocity and temperature profiles are examined and represented in figures to give a thorough understanding of the system behavior.

Entropy doi: 10.3390/e19070374

Authors: John Fanchi

Jüttner used the conventional theory of relativistic statistical mechanics to calculate the energy of a relativistic ideal gas in 1911. An alternative derivation of the energy of a relativistic ideal gas was published by Horwitz, Schieve and Piron in 1981 within the context of parametrized relativistic statistical mechanics. The resulting energy in the ultrarelativistic regime differs from Jüttner’s result. We review the derivations of energy and identify physical regimes for testing the validity of the two theories in accelerator physics and cosmology.

Entropy doi: 10.3390/e19070375

Authors: Jagdev Singh Devendra Kumar Maysaa Al Qurashi Dumitru Baleanu

In this paper, we propose a new numerical algorithm, namely the q-homotopy analysis Sumudu transform method (q-HASTM), to obtain the approximate solution of a nonlinear fractional dynamical model of interpersonal and romantic relationships. The suggested algorithm examines the dynamics of love affairs between couples. The q-HASTM is a creative combination of the Sumudu transform technique, the q-homotopy analysis method, and homotopy polynomials, which makes the calculation straightforward. To compare the results obtained using q-HASTM, we solve the same nonlinear problem by Adomian’s decomposition method (ADM). The convergence of the q-HASTM series solution for the model is adjusted and controlled by the auxiliary parameter ℏ and the asymptotic parameter n. The numerical results are presented graphically and in tabular form. The results obtained with the proposed scheme show that the approach is accurate, effective, flexible, simple to apply, and computationally efficient.

Entropy doi: 10.3390/e19070372

Authors: Cees Diks Hao Fang

The information-theoretic concept of transfer entropy is an ideal measure for detecting conditional independence, or Granger causality, in a time series setting. The recent literature indeed witnesses an increased interest in applications of entropy-based tests in this direction. However, those tests are typically based on nonparametric entropy estimates for which the development of formal asymptotic theory turns out to be challenging. In this paper, we provide numerical comparisons of simulation-based tests to gain insight into the statistical behavior of nonparametric transfer entropy-based tests. In particular, surrogate algorithms and smoothed bootstrap procedures are described and compared. We conclude the paper with a financial application to the detection of spillover effects in the global equity market.
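For readers unfamiliar with the quantity being tested, a plug-in (histogram) estimate of transfer entropy from Y to X with history length one can be sketched as below. This is a deliberately minimal version of the kind of nonparametric estimator that such surrogate and bootstrap tests build on, not the paper's own estimator:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in nats) of transfer entropy Y -> X for
    discrete-valued series, with embedding/history length 1.
    Equals the empirical conditional mutual information I(X_{t+1}; Y_t | X_t)."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pair_xy = Counter(zip(x[:-1], y[:-1]))          # (x_t, y_t)
    pair_xx = Counter(zip(x[1:], x[:-1]))           # (x_{t+1}, x_t)
    single = Counter(x[:-1])                        # x_t
    n = len(x) - 1
    return sum(c / n * np.log((c / pair_xy[(x0, y0)]) /
                              (pair_xx[(x1, x0)] / single[x0]))
               for (x1, x0, y0), c in triples.items())
```

The estimate is near zero for independent series (up to a small positive plug-in bias) and near log 2 when a binary X simply copies the previous value of Y.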

Entropy doi: 10.3390/e19070373

Authors: Dariusz Kacprzak

Fuzzy multiple criteria decision-making (FMCDM) methods are techniques for finding the trade-off option out of all feasible alternatives that are characterized by multiple criteria and whose data cannot be measured precisely but can be represented, for instance, by ordered fuzzy numbers (OFNs). One of the main steps in FMCDM methods consists in finding the appropriate criteria weights. A method based on the concept of Shannon entropy is one of many techniques for determining criteria weights when obtaining them from the decision-maker is not possible. The goal of this paper is to extend the notion of Shannon entropy to fuzzy data represented by OFNs. The proposed approach allows criteria weights to be obtained as OFNs that are normalized and sum to 1.
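For reference, the crisp (non-fuzzy) Shannon entropy weighting that the paper generalizes to OFNs works as follows: normalize each criterion column into a probability vector, compute its entropy, and weight each criterion by its degree of diversification 1 − e_j. A sketch of that classical baseline (crisp numbers only, not the OFN extension):

```python
import numpy as np

def entropy_weights(X):
    """Criteria weights from Shannon entropy for a crisp decision matrix X
    (rows = alternatives, columns = criteria, all entries positive)."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                       # column-wise normalization
    with np.errstate(divide='ignore', invalid='ignore'):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)     # entropy of each criterion
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()                          # weights sum to 1
```

A criterion on which all alternatives score identically carries no information (entropy 1, diversification 0) and receives zero weight.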

Entropy doi: 10.3390/e19070365

Authors: Xiong Luo Jing Deng Weiping Wang Jenq-Haur Wang Wenbing Zhao

Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space, which achieves better computing performance. After applying the KRSL to adaptive filtering, the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm was developed accordingly. However, MKRSL, as a traditional kernel adaptive filter (KAF) method, generates a growing radial basis function (RBF) network. In response to that limitation, through the use of an online vector quantization (VQ) technique, this article proposes a novel KAF algorithm, named quantized MKRSL (QMKRSL), to curb the growth of the RBF network structure. Compared with other quantized methods, e.g., quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC), the efficient performance surface makes QMKRSL converge faster and filter more accurately, while maintaining robustness to outliers. Moreover, considering that QMKRSL with the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, in an effort to further improve filtering accuracy. Short-term chaotic time-series prediction experiments are conducted to demonstrate the satisfactory performance of our algorithms.
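The online VQ step that curbs network growth can be sketched independently of the KRSL cost: a new input either merges into its nearest codebook center (if it lies within a quantization radius) or becomes a new center. An illustrative sketch of this generic criterion, as used in QKLMS-style methods (the parameter name eps and the function itself are ours, not from the paper):

```python
import numpy as np

def quantize_online(inputs, eps):
    """Online vector quantization: a new input joins the codebook only if
    it is farther than eps from every existing center; otherwise it is
    absorbed by its nearest center. Returns the codebook and, for each
    input, the index of the center it was assigned to."""
    codebook, assignments = [], []
    for u in inputs:
        u = np.atleast_1d(np.asarray(u, dtype=float))
        if not codebook:
            codebook.append(u)
            assignments.append(0)
            continue
        d = [np.linalg.norm(u - c) for c in codebook]
        j = int(np.argmin(d))
        if d[j] <= eps:
            assignments.append(j)          # merge into nearest center
        else:
            codebook.append(u)             # open a new center
            assignments.append(len(codebook) - 1)
    return codebook, assignments
```

In a full quantized KAF, the coefficient of the absorbed center would also be updated with the new error term; only the network-size control is shown here.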

Entropy doi: 10.3390/e19070358

Authors: Yuyu Yin Yueshen Xu Wenting Xu Min Gao Lifeng Yu Yujie Pei

Mobile service selection is an important but challenging problem in service and mobile computing. Quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as the failure to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in mixed mobile network environments with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. There are two key principles: one is the newly proposed similarity computation method for identifying similar neighbors; the other is the extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are proposed: two individual models and one combination model. They utilize the similar user neighbors and similar service neighbors, respectively. Experiments conducted on two real-world datasets show that our approaches produce superior prediction accuracy.

Entropy doi: 10.3390/e19070371

Authors: Tyll Krueger Janusz Szwabiński Tomasz Weron

Understanding and quantifying polarization in social systems is important for many reasons. It could, for instance, help to avoid segregation and conflicts in society or to control polarized debates and predict their outcomes. In this paper, we present a version of the q-voter model of opinion dynamics with two types of responses to social influence: conformity (as in the original q-voter model) and anticonformity. We put the model on a social network with a double-clique topology in order to check how the interplay between those responses impacts the opinion dynamics in a population divided into two antagonistic segments. The model is analyzed analytically, numerically, and by means of Monte Carlo simulations. Our results show that the system undergoes two bifurcations as the number of cross-links between cliques changes. Below the first critical point, consensus in the entire system is possible. Thus, two antagonistic cliques may share the same opinion only if they are loosely connected. Above that point, the system ends up in a polarized state.
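A single update of q-voter dynamics with conformity and anticonformity can be sketched as follows. This is a simplified complete-graph version for illustration only; the paper's model places the agents on a double-clique network and analyzes the cross-link dependence, which is omitted here:

```python
import random

def qvoter_step(spins, q=2, p_anti=0.0, rng=random):
    """One update of a q-voter model with conformity and (optionally)
    anticonformity on a complete graph. spins is a list of +/-1 opinions,
    modified in place."""
    n = len(spins)
    i = rng.randrange(n)                      # pick a random agent
    others = [k for k in range(n) if k != i]
    panel = [spins[j] for j in rng.sample(others, q)]  # influence group
    if len(set(panel)) == 1:                  # group must be unanimous
        if rng.random() < p_anti:
            spins[i] = -panel[0]              # anticonformity response
        else:
            spins[i] = panel[0]               # conformity response
    return spins
```

With p_anti = 0 the full-consensus states are absorbing, which is the baseline against which the anticonformity response and the clique cross-links change the picture.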

Entropy doi: 10.3390/e19070369

Authors: Michel Feidt

Finite Time Thermodynamics is generally associated with the Curzon–Ahlborn approach to the Carnot cycle. Recently, previous publications on the subject were discovered, which prove that the history of Finite Time Thermodynamics started more than sixty years before even the work of Chambadal and Novikov (1957). The paper proposes a careful examination of the similarities and differences between these pioneering works and the consequences they had on the works that followed. The modelling of the Carnot engine was carried out in three steps, namely (1) modelling with time durations of the isothermal processes, as done by Curzon and Ahlborn; (2) modelling at a steady-state operation regime for which the time does not appear explicitly; and (3) modelling of transient conditions which requires the time to appear explicitly. Whatever the method of modelling used, the subsequent optimization appears to be related to specific physical dimensions. The main goal of the methodology is to choose the objective function, which here is the power, and to define the associated constraints. We propose a specific approach, focusing on the main functions that respond to engineering requirements. The study of the Carnot engine illustrates the synthesis carried out and proves that the primary interest for an engineer is mainly connected to what we called Finite (physical) Dimensions Optimal Thermodynamics, including time in the case of transient modelling.

Entropy doi: 10.3390/e19070370

Authors: Shujun Liu Ting Yang Hongqing Liu

This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is considered that the maximum conditional risk cannot be greater than a predefined value. Therefore, the objective of this paper becomes to find the optimal decision rule to minimize the Bayes risk under the constraint. By applying the Lagrange duality, the constrained optimization problem is transformed to an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, the numerical results including a detection example are presented and agree with the theoretical results.

Entropy doi: 10.3390/e19070368

Authors: L.G. Margolin

In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper, I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.

Entropy doi: 10.3390/e19070367

Authors: Wei Zhang Christof Schütte

Many interesting rare events in molecular systems, like ligand association, protein folding or conformational changes, occur on timescales that often are not accessible by direct numerical simulation. Therefore, rare event approximation approaches like interface sampling, Markov state model building, or advanced reaction coordinate-based free energy estimation have attracted huge attention recently. In this article we analyze the reliability of such approaches. How precise is an estimate of long relaxation timescales of molecular systems resulting from various forms of rare event approximation methods? Our results give a theoretical answer to this question by relating it with the transfer operator approach to molecular dynamics. By doing so we also allow for understanding deep connections between the different approaches.

Entropy doi: 10.3390/e19070366

Authors: Seyyed Azimi Osvaldo Simeone Ravi Tandon

The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.

Entropy doi: 10.3390/e19070327

Authors: Weiqiang Yang Lixin Xu Hang Li Yabo Wu Jianbo Lu

The coupling between dark energy and dark matter provides a possible approach to mitigating the coincidence problem of the cosmological standard model. In this paper, we assumed that the interacting term was related to the Hubble parameter, the energy density of dark energy, and the equation of state of dark energy. The interaction rate between dark energy and dark matter was a constant parameter, that is, Q = 3Hξ(1 + w_x)ρ_x. Based on the Markov chain Monte Carlo method, we performed a global fitting of the interacting dark energy model to Planck 2015 cosmic microwave background anisotropy and observational Hubble data. We found that the observational data sets slightly favored a small interaction rate between dark energy and dark matter; however, there was no obvious evidence of interaction at the 1σ level.
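For context, such an interaction term enters the coupled background continuity equations of the dark sector; under one common sign convention (an assumption here — the paper may adopt another):

```latex
\dot{\rho}_x + 3H(1+w_x)\rho_x = -Q , \qquad
\dot{\rho}_c + 3H\rho_c = +Q , \qquad
Q = 3H\xi(1+w_x)\rho_x ,
```

so that ξ > 0 transfers energy from dark energy (ρ_x) to dark matter (ρ_c) while the total dark-sector energy density still obeys the usual conservation law.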

Entropy doi: 10.3390/e19070364

Authors: Silas Fong Vincent Tan

This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent μ of polar codes for a memoryless channel q_{Y|X} with capacity I(q_{Y|X}) characterizes the closest gap between the capacity and non-asymptotic achievable rates as follows: for a fixed ε ∈ (0, 1), the gap between the capacity I(q_{Y|X}) and the maximum non-asymptotic rate R_n* achieved by a length-n polar code with average error probability ε scales as n^(−1/μ), i.e., I(q_{Y|X}) − R_n* = Θ(n^(−1/μ)). It is well known that the scaling exponent μ for any binary-input memoryless channel (BMC) with I(q_{Y|X}) ∈ (0, 1) is bounded above by 4.714. Our main result shows that 4.714 remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of O(n^(−1/μ) log n) by using an input alphabet consisting of n constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of n constellations can be achieved within a gap of O(n^(−1/μ) log n) by using a superposition of log n binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime, where both the gap to capacity and the error probability vanish as n grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel.

Entropy doi: 10.3390/e19070363

Authors: Andrey Garnaev Wade Trappe

Sharing of radio spectrum between different types of wireless systems (e.g., different service providers) is the foundation for making more efficient usage of spectrum. Cognitive radio technologies have spurred the design of spectrum servers that coordinate the sharing of spectrum between different wireless systems. These servers receive information regarding the needs of each system, and then provide instructions back to each system regarding the spectrum bands they may use. This sharing of information is complicated by the fact that these systems are often in competition with each other: each system desires to use as much of the spectrum as possible to support its users, and each system could learn about and harm the bands of the other system. Three problems arise in such a spectrum-sharing setting: (1) how to maintain reliable performance for the shared resource of each system (licensed spectrum); (2) whether to believe the resource requests announced by each agent; and (3) if they are not believed, how much effort should be devoted to inspecting spectrum so as to prevent possible malicious activity. Since this problem can arise for a variety of wireless systems, we present an abstract formulation in which the agents or the spectrum server introduce obfuscation in the resource assignment to maintain reliability. We derive a closed-form expression for the expected damage that can arise from possible malicious activity, and using this formula we find a tradeoff between the amount of extra decoys that must be used in order to support higher communication fidelity against potential interference, and the cost of maintaining this reliability. Then, we examine a scenario where a smart adversary may also use obfuscation itself, and formulate the scenario as a signaling game, which can be solved by applying a classical iterative forward-induction algorithm. For an important particular case, the game is solved in closed form, which gives conditions for deciding whether an agent can be trusted, or whether its request should be inspected and how intensely.

Entropy doi: 10.3390/e19070360

Authors: Khaled Almgren Minkyu Kim Jeongkyu Lee

Topological data analysis is a novel approach to extracting meaningful information from high-dimensional data and is robust to noise. It is based on topology, which aims to study the geometric shape of data. In order to apply topological data analysis, an algorithm called mapper is adopted. The output from mapper is a simplicial complex that represents a set of connected clusters of data points. In this paper, we explore the feasibility of topological data analysis for mining social network data by addressing the problem of image popularity. We randomly crawl images from Instagram and analyze the effects of social context and image content on an image’s popularity using mapper. Mapper clusters the images using each feature, and the ratio of popularity in each cluster is computed to determine the clusters with a high or low possibility of popularity. Then, the popularity of images is predicted to evaluate the accuracy of topological data analysis. This approach is further compared with traditional clustering algorithms, including k-means and hierarchical clustering, in terms of accuracy, and the results show that topological data analysis outperforms them. Moreover, topological data analysis provides meaningful information based on the connectivity between the clusters.

Entropy doi: 10.3390/e19070359

Authors: Omar Olvera-Guerrero Alfonso Prieto-Guerrero Gilberto Espinosa-Paredes

There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The current parameter used to assess BWR instability issues is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical systems that may even exhibit chaotic dynamics, which normally precludes the use of the DR when the BWR is working at a specific operating point during instability. In this work, a novel methodology based on an adaptive Shannon entropy estimator and on Noise-Assisted Empirical Mode Decomposition variants is presented. This methodology was developed for the real-time implementation of a stability monitor. It was applied to a set of signals stemming from several NPP reactors (Ringhals, Sweden; Forsmark, Sweden; and Laguna Verde, Mexico) under commercial operating conditions that experienced instability events, each of a different nature.

Entropy doi: 10.3390/e19070362

Authors: Jaber Kakar Aydin Sezgin

Recent advances in the characterization of fundamental limits on interference management in wireless networks, and the discovery of new communication schemes for handling interference, have led to a better understanding of the capacity of such networks. The benefits in terms of achievable rates of powerful schemes handling interference, such as interference alignment, are substantial. However, the main issue behind most of these results is the assumption of perfect channel state information at the transmitters (CSIT). In the absence of channel knowledge, the performance of various interference networks collapses to what is achievable by time division multiple access (TDMA). Robust interference management techniques are promising solutions to maintain high achievable rates at various levels of CSIT, ranging from delayed to imperfect CSIT. In this survey, we outline and study two main research perspectives on how to robustly handle interference when CSIT is imprecise, using examples of non-distributed and distributed networks, namely the broadcast channel and the X-channel. To quantify the performance of these schemes, we use the well-known (generalized) degrees of freedom (GDoF) metric as the pre-log factor of achievable rates. These perspectives maintain the capacity benefits at levels similar to those for perfect channel knowledge. The two perspectives are: first, scheme adaptation, which explicitly accounts for the level of channel knowledge, and, second, relay-aided infrastructure enlargement, which decreases channel knowledge dependency. The relaxation of CSIT requirements through these perspectives will ultimately lead to practical realizations of robust interference management techniques. The survey concludes with a discussion of open problems.

Entropy doi: 10.3390/e19070361

Authors: Artemy Kolchinsky Brendan Tracey

Mixture distributions arise in many parametric and non-parametric settings—for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components, and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove that this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians, and discuss their applications to the estimation of mutual information. We then use numerical simulations to demonstrate that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving the maximization/minimization of entropy and mutual information, such as MaxEnt and rate-distortion problems.
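The pairwise-distance estimator family has a simple closed form for one-dimensional Gaussian mixtures. The sketch below (our notation: weights w, means mu, variances s2) evaluates H_D = Σ_i w_i [H(p_i) − ln Σ_j w_j exp(−D(p_i‖p_j))]; plugging in the KL divergence yields the upper bound and the Bhattacharyya distance the lower bound, as the abstract states:

```python
import numpy as np

def gauss_entropy(s2):
    """Differential entropy of N(mu, s2)."""
    return 0.5 * np.log(2 * np.pi * np.e * s2)

def kl_gauss(m1, s1, m2, s2):
    """KL divergence between univariate Gaussians (variances s1, s2)."""
    return 0.5 * np.log(s2 / s1) + (s1 + (m1 - m2) ** 2) / (2 * s2) - 0.5

def bhattacharyya_gauss(m1, s1, m2, s2):
    """Bhattacharyya distance between univariate Gaussians."""
    return ((m1 - m2) ** 2) / (4 * (s1 + s2)) + 0.5 * np.log(
        (s1 + s2) / (2 * np.sqrt(s1 * s2)))

def pairwise_entropy_bound(w, mu, s2, dist):
    """Pairwise-distance entropy estimator for a 1-D Gaussian mixture:
    sum_i w_i [ H(p_i) - ln sum_j w_j exp(-dist(p_i, p_j)) ]."""
    w, mu, s2 = map(np.asarray, (w, mu, s2))
    est = 0.0
    for i in range(len(w)):
        inner = sum(w[j] * np.exp(-dist(mu[i], s2[i], mu[j], s2[j]))
                    for j in range(len(w)))
        est += w[i] * (gauss_entropy(s2[i]) - np.log(inner))
    return est
```

When all components coincide, every pairwise distance is zero and both bounds reduce exactly to the entropy of the single component, illustrating the exactness-under-clustering property claimed above.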

Entropy doi: 10.3390/e19070357

Authors: Clémentine Hauret Pierre Magain Judith Biernaux

Time is a parameter playing a central role in our most fundamental modelling of natural laws. Relativity theory shows that the comparison of times measured by different clocks depends on their relative motion and on the strength of the gravitational field in which they are embedded. In standard cosmology, the time parameter is the one measured by fundamental clocks (i.e., clocks at rest with respect to the expanding space). This proper time is assumed to flow at a constant rate throughout the whole history of the universe. We make the alternative hypothesis that the rate at which the cosmological time flows depends on the dynamical state of the universe. In thermodynamics, the arrow of time is strongly related to the second law, which states that the entropy of an isolated system will always increase with time or, at best, stay constant. Hence, we assume that the time measured by fundamental clocks is proportional to the entropy of the region of the universe that is causally connected to them. Under that simple assumption, we find it possible to build toy cosmological models that present an acceleration of their expansion without any need for dark energy while being spatially closed and finite, avoiding the need to deal with infinite values.

Entropy doi: 10.3390/e19070356

Authors: Andrea Puglisi Umberto Marini Bettolo Marconi

Many kinds of active particles, such as bacteria or active colloids, move in a thermostatted fluid by means of self-propulsion. Energy injected by such a non-equilibrium force is eventually dissipated as heat in the thermostat. Since thermal fluctuations are much faster and weaker than self-propulsion forces, they are often neglected, blurring the identification of dissipated heat in theoretical models. For the same reason, some freedom—or arbitrariness—appears when defining entropy production. Recently three different recipes to define heat and entropy production have been proposed for the same model where the role of self-propulsion is played by a Gaussian coloured noise. Here we compare and discuss the relation between such proposals and their physical meaning. One of these proposals takes into account the heat exchanged with a non-equilibrium active bath: such an “active heat” satisfies the original Clausius relation and can be experimentally verified.

Entropy doi: 10.3390/e19070354

Authors: Zifei Lin Wei Xu Jiaorui Li Wantao Jia Shuang Li

Time delays in economic policy and the memory property of a real economy system are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory properties of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution. Numerical simulations are performed to verify the method. The effects of the fractional order, time delay, economic control, and random excitation on the amplitude of the economic system are investigated. The results show that the time delay, the fractional order, and the intensity of the random excitation can all magnify the amplitude and increase the volatility of the economic system.

Entropy doi: 10.3390/e19070355

Authors: Marco Dalai

We review the use of binary hypothesis testing for the derivation of the sphere packing bound in channel coding, pointing out a key difference between the classical and the classical-quantum settings. In the first case, two ways of using binary hypothesis testing are known, which lead to the same bound written in different analytical expressions. The first method, historically, compares the output distributions induced by the codewords with an auxiliary fixed output distribution, and naturally leads to an expression using the Rényi divergence. The second method compares the given channel with an auxiliary one and leads to an expression using the Kullback–Leibler divergence. In the classical-quantum case, due to a fundamental difference in quantum binary hypothesis testing, these two approaches lead to two different bounds, the first being the “right” one. We discuss the details of this phenomenon, which suggests the question of whether auxiliary channels are used in the optimal way in the second approach, and whether recent results on the exact strong-converse exponent in classical-quantum channel coding might play a role in the considered problem.

Entropy doi: 10.3390/e19070168

Authors: Mohammed Almakki Sharadia Dey Sabyasachi Mondal Precious Sibanda

The entropy generation in unsteady three-dimensional axisymmetric magnetohydrodynamics (MHD) nanofluid flow over a non-linearly stretching sheet is investigated. The flow is subject to thermal radiation and a chemical reaction. The conservation equations are solved using the spectral quasi-linearization method. The novelty of the work is in the study of entropy generation in three-dimensional axisymmetric MHD nanofluid and the choice of the spectral quasi-linearization method as the solution method. The effects of Brownian motion and thermophoresis are also taken into account. The nanofluid particle volume fraction on the boundary is passively controlled. The results show that as the Hartmann number increases, both the Nusselt number and the Sherwood number decrease, whereas the skin friction increases. It is further shown that an increase in the thermal radiation parameter corresponds to a decrease in the Nusselt number. Moreover, entropy generation increases with respect to some physical parameters.

Entropy doi: 10.3390/e19070350

Authors: Lorenzo Caprini Luca Cerino Alessandro Sarracino Angelo Vulpiani

A simplified, but nontrivial, mechanical model—a gas of N particles of mass m in a box partitioned by n mobile adiabatic walls of mass M—interacting with two thermal baths at different temperatures is discussed in the framework of kinetic theory. Following an approach due to Smoluchowski, from an analysis of the particle–wall collisions we derive the values of the main thermodynamic quantities for the stationary non-equilibrium states. The results are compared with extensive numerical simulations; in the limit of large n, mN/M ≫ 1 and m/M ≪ 1, we find a good approximation of Fourier’s law.

Entropy doi: 10.3390/e19070351

Authors: Baogui Xin Li Liu Guisheng Hou Yuan Ma

By using a linear feedback control technique, we propose a chaos synchronization scheme for nonlinear fractional discrete dynamical systems. Then, we construct a novel 1-D fractional discrete income change system and a kind of novel 3-D fractional discrete system. By means of the stability principles of Caputo-like fractional discrete systems, we lastly design a controller to achieve chaos synchronization, and present some numerical simulations to illustrate and validate the synchronization scheme.

Entropy doi: 10.3390/e19070353

Authors: Lucas Kocia Yifei Huang Peter Love

The Gottesman–Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits of odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-d qudits that has the same time and space complexity as the Aaronson–Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd d and the Aaronson–Gottesman algorithm for qubits are likely due only to the fact that the Weyl–Heisenberg group is not in SU(d) for d = 2 and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits.

]]>Entropy doi: 10.3390/e19070347

Authors: Louis Kauffman

We give an exposition of iterant algebra, a generalization of matrix algebra that is motivated by the structure of measurement for discrete processes. We show how Clifford algebras and matrix algebras arise naturally from iterants, and we then use this point of view to discuss the Schrödinger and Dirac equations, Majorana fermions, and representations of the braid group and framed braids in relation to the structure of the Standard Model of physics.

]]>Entropy doi: 10.3390/e19070346

Authors: Irina-Maria Dragan Alexandru Isaic-Maniu

Studies on the structure of economic systems are most frequently carried out using the methods of informational statistics. These methods, often accompanied by a broad range of indicators (Shannon entropy, the Balassa coefficient, the Herfindahl specialty index, the Gini coefficient, the Theil index, etc.) around which a wide literature has been created over time, have a major disadvantage. Their weakness is the imposition of the system condition, which requires knowing all of the components of the system (as absolute values or as weights). This condition is difficult to satisfy in some situations, while in others such knowledge may be irrelevant, especially when the interest lies in structural changes in only some of the components of the economic system (whether we refer to the typology of economic activities—NACE—or of territorial units—the Nomenclature of Territorial Units for Statistics (NUTS)). This article presents a procedure for characterizing the structure of a system and for comparing its evolution over time in the case of incomplete information, thus eliminating the restriction existing in the classical methods. The proposed methodological alternative uses a parametric distribution with sub-unit values for the variable. The application refers to Gross Domestic Product values for the five of the 28 European Union countries with annual values of over 1000 billion Euros (Germany, Spain, France, Italy, and the United Kingdom) for the years 2003 and 2015. A form of the Wald sequential test is applied to measure changes in the structure of this group of countries between the years compared. The results of this application validate the proposed method.
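The "system condition" criticized above can be made concrete with a sketch of two classical indicators, both of which need the complete set of weights summing to one. The GDP shares below are made-up illustrative values, not the paper's Eurostat data.

```python
import math

def shannon_entropy(shares):
    """Shannon entropy (nats) of a complete set of weights summing to 1."""
    return -sum(p * math.log(p) for p in shares if p > 0)

def herfindahl(shares):
    """Herfindahl concentration index: sum of squared weights."""
    return sum(p ** 2 for p in shares)

# Hypothetical GDP shares of a 5-country group; the classical indicators
# break down if any component is unknown -- the restriction the article
# seeks to remove.
shares = [0.30, 0.25, 0.20, 0.15, 0.10]
H = shannon_entropy(shares)    # between 0 and log(5)
HHI = herfindahl(shares)       # between 1/5 (even) and 1 (concentrated)
```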

]]>Entropy doi: 10.3390/e19070349

Authors: Hayder Al-Hraishawi Gayan Baduge Rafael Schaefer

In this paper, a secure communication model for cognitive multi-user massive multiple-input multiple-output (MIMO) systems with underlay spectrum sharing is investigated. A secondary (cognitive) multi-user massive MIMO system is operated by using underlay spectrum sharing within a primary (licensed) multi-user massive MIMO system. A passive multi-antenna eavesdropper is assumed to be eavesdropping upon either the primary or secondary confidential transmissions. To this end, a physical layer security strategy is provisioned for the primary and secondary transmissions via artificial noise (AN) generation at the primary base-station (PBS) and zero-forcing precoders. Specifically, the precoders are constructed by using the channel estimates with pilot contamination. In order to degrade the interception of confidential transmissions at the eavesdropper, the AN sequences are transmitted at the PBS by exploiting the excess degrees-of-freedom offered by its massive antenna array and by using random AN shaping matrices. The channel estimates at the PBS and secondary base-station (SBS) are obtained by using non-orthogonal pilot sequences transmitted by the primary user nodes (PUs) and secondary user nodes (SUs), respectively. Hence, these channel estimates are affected by intra-cell pilot contamination. In this context, the detrimental effects of intra-cell pilot contamination and channel estimation errors for physical layer secure communication are investigated. For this system set-up, the average and asymptotic achievable secrecy rate expressions are derived in closed-form. Specifically, these performance metrics are studied for imperfect channel state information (CSI) and for perfect CSI, and thereby, the secrecy rate degradation due to inaccurate channel knowledge and intra-cell pilot contamination is quantified. Our analysis reveals that physical layer secure communication can be provisioned for both primary and secondary massive MIMO systems even in the presence of channel estimation errors and pilot contamination.

]]>Entropy doi: 10.3390/e19070348

Authors: Rong A Liping Pang Meng Liu Dongsheng Yang

The Carbon Dioxide Removal Assembly (CDRA) is one of the most important systems in the Environmental Control and Life Support System (ECLSS) of a manned spacecraft. With the development of adsorbents and CDRA technology, solid amine has attracted increasing attention due to its obvious advantages. However, a manned spacecraft operates far from the Earth, and its resources and energy are severely restricted. These limitations increase the design difficulty of a solid amine CDRA. The purpose of this paper is to seek optimal design parameters for the solid amine CDRA. Based on a preliminary structure of the solid amine CDRA, its heat and mass transfer models are built to reflect the features of the special solid amine adsorbent, a polyethylenepolyamine adsorbent. A multi-objective optimization of the design of the solid amine CDRA is then discussed. In this study, the cabin CO2 concentration, system power consumption and entropy production are chosen as the optimization objectives. The optimization variables consist of adsorption cycle time, solid amine loading mass, adsorption bed length, power consumption and system entropy production. The Improved Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to solve this multi-objective optimization and to obtain the optimal solution set. A design example of a solid amine CDRA in a manned space station is used to illustrate the optimization procedure. The optimal combinations of design parameters are located on the Pareto Optimal Front (POF), and Design 971 is finally selected as the best combination of design parameters. The optimal results indicate that multi-objective optimization plays a significant role in the design of a solid amine CDRA. The final optimal design parameters guarantee a cabin CO2 concentration within the specified range, and also satisfy the requirements of light weight and minimum energy consumption.
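The ranking step at the core of NSGA-II is Pareto dominance. A minimal sketch of extracting the first non-dominated front follows; the (CO2 concentration, power, entropy production) triples are invented placeholders, not the paper's candidate designs.

```python
def dominates(a, b):
    """a dominates b (minimization) if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset (first front) of the candidates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (CO2 concentration, power [W], entropy production) designs.
designs = [(0.5, 100, 2.0), (0.4, 120, 2.5), (0.6, 90, 1.8), (0.5, 110, 2.1)]
front = pareto_front(designs)  # the last design is dominated by the first
```

NSGA-II repeats this ranking (plus crowding-distance sorting) across generations; the final front is the POF from which a single design is picked.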

]]>Entropy doi: 10.3390/e19070344

Authors: Keiko Uohashi

In this paper, we study the construction of α -conformally equivalent statistical manifolds for a given symmetric cubic form on a Riemannian manifold. In particular, we describe a method to obtain α -conformally equivalent connections from the relation between tensors and the symmetric cubic form.

]]>Entropy doi: 10.3390/e19070345

Authors: Leonid Martyushev

A measure of time is related to the number of ways by which a human correlates the past and the future for some process. On this basis, a connection between time and entropy (information, Boltzmann–Gibbs, and thermodynamic) is established. This measure gives time such properties as universality, relativity, directionality, and non-uniformity. A number of issues of modern science related to finding the laws that describe changes in nature are discussed. Special emphasis is placed on the role of the evolutionary adaptation of an observer to the surrounding world.

]]>Entropy doi: 10.3390/e19070342

Authors: Yuxing Li Yaan Li Xiao Chen Jing Yu

In view of the problem that features of ship-radiated noise are difficult to extract accurately, a novel method based on variational mode decomposition (VMD), multi-scale permutation entropy (MPE) and a support vector machine (SVM) is proposed to extract the features of ship-radiated noise. In order to eliminate mode mixing and accurately extract the complexity of the intrinsic mode functions (IMFs), VMD is employed to decompose the three types of ship-radiated noise instead of Empirical Mode Decomposition (EMD) and its extended methods. Since permutation entropy (PE) can quantify complexity at only one scale, MPE is used to extract features at different scales. In this study, three types of ship-radiated noise signals are decomposed into a set of band-limited IMFs by the VMD method, and the intensity of each IMF is calculated. Then, the IMFs with the highest energy are selected for the extraction of their MPE. By analyzing the separability of the MPE at different scales, the optimal MPE of the highest-energy IMF is taken as the feature vector. Finally, the feature vectors are fed into the SVM classifier to classify and recognize different types of ships. The proposed method was applied to both simulated and actual ship-radiated noise signals. Compared with the PE of the highest-energy IMF obtained by EMD, ensemble EMD (EEMD) and VMD, the results show that the proposed method can effectively extract MPE features and realize the classification and recognition of ships.
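The MPE step can be sketched as follows; the embedding order m = 3 and the scale set are illustrative choices (the VMD decomposition and SVM stage are omitted, and the coarse-graining uses the common non-overlapping-mean scheme, which the paper may parameterize differently).

```python
import math

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of order m, in [0, 1]."""
    counts = {}
    for i in range(len(x) - m + 1):
        # ordinal pattern: ranking of the m consecutive samples
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))   # normalize by log(m!)

def coarse_grain(x, scale):
    """Non-overlapping mean over windows of length `scale`."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def mpe(x, m=3, scales=(1, 2, 3)):
    """Multi-scale PE: PE of each coarse-grained version of the signal."""
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]
```

A monotone series has a single ordinal pattern and hence PE 0, while an irregular series approaches 1; the vector of PE values over scales is what feeds the classifier.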

]]>Entropy doi: 10.3390/e19070341

Authors: Mario Martinelli

The present age, which can be called the Information Age, has a core technology constituted by bits transported by photons. Both concepts, bit and photon, originated in the past century: the concept of photon was introduced by Planck in 1900 when he advanced the solution of the blackbody spectrum, and bit is a term first used by Shannon in 1948 when he introduced the theorems that founded information theory. The connection between Planck and Shannon is not immediately apparent; nor is it obvious that they derived their basic results from the concept of entropy. Examination of other important scientists can shed light on Planck’s and Shannon’s work in these respects. Darwin and Fowler, who in 1922 published a couple of papers in which they reinterpreted Planck’s results, pointed out the centrality of the partition function to statistical mechanics and thermodynamics. The same roots have been more recently reconsidered by Jaynes, who extended the considerations advanced by Darwin and Fowler to information theory. This paper investigates how the concept of entropy was propagated in the past century in order to show how a simple intuition, born in 1824, during the first industrial revolution, in the mind of the young French engineer Carnot, is literally still enlightening the fourth industrial revolution and probably will continue to do so in the coming century.

]]>Entropy doi: 10.3390/e19070343

Authors: Lawrence Schulman

Establishing (or falsifying) the special state theory of quantum measurement is a program with both theoretical and experimental directions. The special state theory has only pure unitary time evolution, like the many worlds interpretation, but has only one world. How this can be accomplished requires both “special states” and significant modification of the usual assumptions about the arrow of time. All this is reviewed below. Experimentally, proposals for tests already exist; the problems are, first, the practical one of performing the experiments and, second, that of suggesting further experiments. On the theoretical level, many problems remain; among them are the impact of particle statistics on the availability of special states, finding a way to estimate their abundance, and the possibility of using a computer for this purpose. Regarding the arrow of time, there is an early proposal of J. A. Wheeler that may be implementable, with implications for cosmology.

]]>Entropy doi: 10.3390/e19070339

Authors: Claudio Cremaschini Massimo Tessarotto

Key aspects of the manifestly-covariant theory of quantum gravity (Cremaschini and Tessarotto 2015–2017) are investigated. These refer, first, to the establishment of the four-scalar, manifestly-covariant evolution quantum wave equation, denoted as covariant quantum gravity (CQG) wave equation, which advances the quantum state ψ associated with a prescribed background space-time. In this paper, the CQG-wave equation is proved to follow at once by means of a Hamilton–Jacobi quantization of the classical variational tensor field g ≡ g μ ν and its conjugate momentum, referred to as (canonical) g-quantization. The same equation is also shown to be variational and to follow from a synchronous variational principle identified here with the quantum Hamilton variational principle. The corresponding quantum hydrodynamic equations are then obtained upon introducing the Madelung representation for ψ , which provides an equivalent statistical interpretation of the CQG-wave equation. Finally, the quantum state ψ is proven to fulfill generalized Heisenberg inequalities, relating the statistical measurement errors of quantum observables. These are shown to be represented in terms of the standard deviations of the metric tensor g ≡ g μ ν and its quantum conjugate momentum operator.

]]>Entropy doi: 10.3390/e19070311

Authors: Alain Deville Yannick Deville

Blind Source Separation (BSS) is an active domain of Classical Information Processing, with well-identified methods and applications. The development of Quantum Information Processing has made possible the appearance of Blind Quantum Source Separation (BQSS), with a recent extension towards Blind Quantum Process Tomography (BQPT). This article investigates the use of several fundamental quantum concepts in the BQSS context and establishes properties already used without justification in that context. It mainly considers a pair of electron spins initially separately prepared in a pure state and then submitted to an undesired exchange coupling between these spins. Some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, upon BQSS solutions, are discussed. An unentanglement criterion is established for the state of an arbitrary qubit pair, expressed first with probability amplitudes and secondly with probabilities. The interest of using the concept of a random quantum state in the BQSS context is presented. It is stressed that the concept of statistical independence of the sources, widely used in classical BSS, should be used with care in BQSS, and possibly replaced by some disentanglement principle. It is shown that the coefficients of the development of any qubit pair pure state over the states of an orthonormal basis can be expressed with the probabilities of results in the measurements of well-chosen spin components.

]]>Entropy doi: 10.3390/e19070330

Authors: Xiaoyue Feng Yanchun Liang Xiaohu Shi Dong Xu Xu Wang Renchu Guan

Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. One common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting referred to as the rate of overfitting (RO) and a novel model, named AdaBELM, to reduce the overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperform classical ELM, decision tree, random forests, and AdaBoost on all three text-classification datasets; for example, it can achieve 62.2% higher accuracy than ELM. Therefore, the proposed model has good generalizability.

]]>Entropy doi: 10.3390/e19070335

Authors: Dominik Strzałka

The aim of this paper is to present some preliminary results on the non-extensive statistical properties of selected operating system counters related to hard drive behaviour. A number of experiments were carried out in order to generate workload and analyse the behaviour of computers during man–machine interaction. All analysed computers were personal computers running Windows operating systems. The research demonstrates how the concept of non-extensive statistical mechanics can be helpful in describing computer system behaviour, especially in the context of statistical properties with scaling phenomena, long-term dependencies and statistical self-similarity. The studies were made using the perfmon tool, which allows the user to trace operating system counters during processing.

]]>Entropy doi: 10.3390/e19070338

Authors: Jun-Lin Lin Laksamee Khomnotai

With a privacy-aware reputation system, an auction website allows the buyer in a transaction to hide his/her identity from the public for privacy protection. However, fraudsters can also take advantage of this buyer-anonymized function to hide the connections between themselves and their accomplices. Traditional fraudster detection methods become useless for detecting such fraudsters because these methods rely on accessing these connections to work effectively. To resolve this problem, we introduce two attributes to quantify the buyer-anonymized activities associated with each user and use them to reinforce the traditional methods. Experimental results on a dataset crawled from an auction website show that the proposed attributes effectively enhance the prediction accuracy for detecting fraudsters, particularly when the proportion of the buyer-anonymized activities in the dataset is large. Because many auction websites have adopted privacy-aware reputation systems, the two proposed attributes should be incorporated into their fraudster detection schemes to combat these fraudulent activities.

]]>Entropy doi: 10.3390/e19070336

Authors: Kazuho Watanabe

Kernel methods have been used for turning linear learning algorithms into nonlinear ones. These nonlinear algorithms measure distances between data points by the distance in the kernel-induced feature space. In lossy data compression, the optimal tradeoff between the number of quantized points and the incurred distortion is characterized by the rate-distortion function. However, the rate-distortion functions associated with distortion measures involving kernel feature mapping have yet to be analyzed. We consider two reconstruction schemes, reconstruction in input space and reconstruction in feature space, and provide bounds to the rate-distortion functions for these schemes. Comparison of the derived bounds to the quantizer performance obtained by the kernel K -means method suggests that the rate-distortion bounds for input space and feature space reconstructions are informative at low and high distortion levels, respectively.

]]>Entropy doi: 10.3390/e19070334

Authors: Bo Li Rui Du Wenjing Kang Gongliang Liu

The Internet of Things (IoT) is placing new demands on existing communication systems. The limited orthogonal resources do not meet the demands of the massive connectivity of future IoT systems, which require efficient multiple access. Interleave-division multiple access (IDMA) is a promising method for improving spectral efficiency and supporting massive connectivity in IoT networks. At any given time, not all sensors transmit information to the aggregation node; instead, each node occasionally transmits a short frame, e.g., time-controlled or event-driven. The sporadic nature of the uplink transmission, low data rates, and massive connectivity in IoT scenarios necessitate communication schemes with minimal control overhead. Therefore, sensor activity and data detection should be implemented on the receiver side. However, current chip-by-chip (CBC) iterative multi-user detection (MUD) assumes that sensor activity is precisely known at the receiver. In this paper, we propose three schemes to solve the MUD problem in a sporadic IDMA uplink transmission system. First, inspired by the observation of sensor sparsity, we incorporate compressed sensing (CS) into MUD in order to jointly perform activity and data detection. Second, since CS detection can provide reliable activity detection, we combine CS and CBC and propose a CS-CBC detector. In addition, a CBC-based MUD named CBC-AD is proposed as a comparable baseline scheme.

]]>Entropy doi: 10.3390/e19070337

Authors: Mikhail Sheremet Teodor Grosan Ioan Pop

Natural convection heat transfer combined with entropy generation in a square cavity filled with a nanofluid under the effect of a variable temperature distribution along the left vertical wall has been studied numerically. The governing equations, formulated in dimensionless non-primitive variables with corresponding boundary conditions and taking into account the Brownian diffusion and thermophoresis effects, have been solved by the finite difference method. Distributions of streamlines, isotherms and local entropy generation, as well as the Nusselt number, have been obtained for different values of the key parameters. It has been found that a growth of the amplitude of the temperature distribution along the left wall and an increase of the wave number lead to an increase in the average entropy generation, while for low Rayleigh numbers an increase in these parameters leads to a decrease in the average Bejan number.

]]>Entropy doi: 10.3390/e19070332

Authors: Chen Xu Chengke Hu Xiaoli Liu Sijing Wang

Based on the Markov model and the basic theory of information entropy, this paper puts forward a new method for optimizing the location of observation points in order to obtain more information from limited geological investigation. According to the existing data from observation points, the tunnel's geological lithology was classified, and the lithology distribution along the tunnel was determined using the Markov model. On the basis of information entropy theory, the distribution of information entropy along the axis of the tunnel was then obtained, so that different entropy values could be calculated for different rock classifications. Uncertainty grows as information entropy increases; the maximum entropy indicates maximum uncertainty, and thus determines the position of the new drilling hole, where the geological prediction is least accurate. An optimal distribution is then obtained by recalculating with the geological information from the new location. Taking the Bashiyi Daban water diversion tunnel in Xinjiang as a case study, the maximum information entropy of the geological conditions was analyzed by the proposed method, with 25 newly added geological observation points along the axis of the 30-km tunnel. The results proved the validity of the method. The method and results in this paper may be used not only to predict the geological conditions of underground engineering based on the investigated geological information, but also to optimize the distribution of geological observation points.
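The entropy-maximization idea can be sketched in a few lines. The 3-state lithology transition matrix and the 30-step discretization below are invented for illustration; they are not the paper's calibrated Markov model.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete lithology distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def propagate(p, T, steps):
    """Propagate a lithology distribution `steps` transitions along the tunnel."""
    for _ in range(steps):
        p = [sum(p[i] * T[i][j] for i in range(len(p))) for j in range(len(p))]
    return p

# Hypothetical 3-state transition matrix per unit distance along the axis.
T = [[0.80, 0.15, 0.05],
     [0.10, 0.80, 0.10],
     [0.05, 0.15, 0.80]]
p0 = [1.0, 0.0, 0.0]  # lithology known with certainty at the last borehole

# Entropy profile along 30 unit steps; uncertainty grows with distance.
profile = [entropy(propagate(p0, T, k)) for k in range(1, 31)]
new_hole = 1 + max(range(30), key=lambda k: profile[k])  # max-entropy position
```

The new borehole resolves the lithology at `new_hole`, after which the profile is recomputed from the updated observation set.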

]]>Entropy doi: 10.3390/e19070333

Authors: Jordan Horowitz Jeremy England

There are many functional contexts where it is desirable to maintain a mesoscopic system in a nonequilibrium state. However, such control requires an inherent energy dissipation. In this article, we unify and extend a number of works on the minimum energetic cost to maintain a mesoscopic system in a prescribed nonequilibrium distribution using ancillary control. For a variety of control mechanisms, we find that the minimum amount of energy dissipation necessary can be cast as an information-theoretic measure of distinguishability between the target nonequilibrium state and the underlying equilibrium distribution. This work offers quantitative insight into the intuitive idea that more energy is needed to maintain a system farther from equilibrium.
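The "information-theoretic measure of distinguishability" can be illustrated with the relative entropy (Kullback–Leibler divergence) between the target nonequilibrium distribution and the underlying equilibrium one. The 3-state distributions below are made-up illustrative values; which divergence applies to a given control mechanism is worked out in the article itself.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum_i p_i log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical 3-state mesoscopic system: Boltzmann equilibrium weights
# versus a driven target distribution concentrated on the first state.
equilibrium = [0.5, 0.3, 0.2]
target = [0.8, 0.15, 0.05]
cost = kl_divergence(target, equilibrium)  # grows as the target departs
                                           # further from equilibrium
```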

]]>Entropy doi: 10.3390/e19070328

Authors: Johannes Rauh Pradeep Banerjee Eckehard Olbrich Jürgen Jost Nils Bertschinger

We consider the problem of quantifying the information shared by a pair of random variables X 1 , X 2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f ( S ) for any function f. We show that our measure leads to a new nonnegative decomposition of the mutual information I ( S ; X 1 X 2 ) into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.

]]>Entropy doi: 10.3390/e19070329

Authors: Dong Seo Yun Chung

Recently, green networks have been considered one of the hottest topics in Information and Communication Technology (ICT), especially in mobile communication networks. In a green network, the energy saving of network nodes such as base stations (BSs), switches, and servers should be achieved efficiently. In this paper, we consider a heterogeneous network architecture in 5G networks with separated data and control planes, where a macro cell manages control signals and a small cell manages data traffic. We then propose an optimized handover scheme based on context information such as reference signal received power, speed of the user equipment (UE), traffic load, call admission control level, and data type. The main objective of the proposed optimal handover is to reduce either the number of handovers or the total energy consumption of the BSs. To this end, we develop optimization problems with either the minimization of the total number of handovers or the minimization of the energy consumption of the BSs as the objective function. The solution is obtained by particle swarm optimization, since the developed optimization problem is NP-hard. Performance analysis via simulation, based on various probability distributions of the characteristics of the UE and BSs, shows that the proposed optimized handover based on context information performs better than the previous call admission control-based handover scheme in terms of the number of handovers and total energy consumption. We also show that the proposed handover scheme can efficiently reduce either the number of handovers or the total energy consumption by applying either handover minimization or energy minimization, depending on the objective of the application.
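A minimal particle swarm optimization sketch follows, run on a stand-in convex cost. The inertia and acceleration coefficients are conventional defaults, not the paper's tuning, and the lambda is a placeholder for the real objective (handover count or BS energy under the context-information constraints).

```python
import random

def pso(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimization for a black-box cost function."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # per-particle best positions
    pbest_cost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = cost(xs[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = xs[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = xs[i][:], c
    return gbest, gbest_cost

# Stand-in convex cost; the handover/energy objective would replace this.
best, best_cost = pso(lambda x: sum(xi ** 2 for xi in x), dim=3)
```

PSO is attractive here precisely because the NP-hard objective is treated as a black box: only cost evaluations are needed, no gradients or problem structure.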

]]>