Entropy doi: 10.3390/e19100544

Authors: Shamik Gupta Stefano Ruffo

We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics aim to generate the canonical equilibrium distribution of a system at a desired temperature by employing a set of time-reversible, deterministic equations of motion. A signature of canonical equilibrium is a single-particle momentum distribution that is Gaussian. We find that the equilibrium properties of the system within the Nosé–Hoover dynamics coincide with those within the canonical ensemble. Moreover, starting from out-of-equilibrium initial conditions, the average kinetic energy of the system relaxes to its target value over a size-independent timescale. However, quite surprisingly, our results indicate that under the same conditions and with only long-range interactions present in the system, the momentum distribution relaxes to its Gaussian form in equilibrium over a timescale that diverges with the system size. On adding short-range interactions, the relaxation is found to occur over a timescale that has a much weaker dependence on system size, and this system-size dependence vanishes when only short-range interactions are present. An implication of such an ultra-slow relaxation in the presence of only long-range interactions is that macroscopic observables other than the average kinetic energy, when estimated within the Nosé–Hoover dynamics, may take an unusually long time to relax to their canonical equilibrium values. Our work underlines the crucial role that interactions play in deciding the equivalence between Nosé–Hoover and canonical equilibrium.

Entropy doi: 10.3390/e19100545

Authors: Chung Chan

The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion by offering a more structured achievability scheme and some simpler conjectures to prove.

Entropy doi: 10.3390/e19100543

Authors: Alejandro Chinea

In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under significant scrutiny, in large part due to problems of how to define and test the cognitive performance of animals. This paper presents evidence supporting complex cognition in cetaceans obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge. This theoretical framework explaining animal intelligence in neural computational terms is supported using a new mathematical model. Two pathways leading to higher levels of intelligence in animals are identified, each reflecting a trade-off either in energetic requirements or the number of neurons used. A description of the evolutionary pathway that led to increased cognitive capacities in cetacean brains is detailed and evidence supporting complex cognition in cetaceans is presented. This paper also provides an interpretation of the adaptive function of cetacean neuronal traits.

Entropy doi: 10.3390/e19100540

Authors: Juan Diaz Diego Mateos Carina Boyallian

In clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings are sometimes insufficient and take too much time. Tools coming from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method: the permutation Lempel–Ziv Complexity vs. Permutation Entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during respective follow-up processes of pharmacological changes in order to detect alterations that are not evident with the usual inspection method. The method allows for comparing between different states of the patients’ treatment and with a healthy control group, giving global information about the signal and supplementing the traditional method of visual inspection of EEG.
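One axis of the proposed map, the permutation entropy, can be sketched directly from its definition via Bandt–Pompe ordinal patterns. This is a minimal sketch; the embedding order and delay below are illustrative defaults, not the settings used in the study.

```python
from collections import Counter
from math import log, factorial

def permutation_entropy(x, order=3, delay=1):
    # Bandt-Pompe permutation entropy, normalized to [0, 1].
    counts = Counter()
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = [x[i + j * delay] for j in range(order)]
        # ordinal pattern: the argsort of the window values
        counts[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(order))
```

A monotonic signal produces a single ordinal pattern and hence zero entropy, while irregular signals approach 1 after normalization.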

Entropy doi: 10.3390/e19100542

Authors: Martin Gueuning Renaud Lambiotte Jean-Charles Delvenne

We consider the problem of diffusion on temporal networks, where the dynamics of each edge are modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s tendency to backtrack at each step (return to where it came from), as well as the resulting effect on the mixing rate of the process. As we show through empirical data, non-Poisson dynamics may significantly slow down diffusion due to backtracking, by a mechanism intrinsically different from the standard bus paradox and related temporal mechanisms. We conclude by discussing the implications of our work for the interpretation of results generated by null models of temporal networks.

Entropy doi: 10.3390/e19100541

Authors: Nibaldo Rodriguez Guillermo Cabrera Carolina Lagos Enrique Cabrera

The behavioural diagnostics of bearings play an essential role in the management of several rotating machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures under variable rotational speed. In this paper, we consider the Shannon entropy as an important fault signature pattern. To compute the entropy, we propose combining the stationary wavelet transform and singular value decomposition. The resulting feature extraction method, which we call stationary wavelet singular entropy (SWSE), aims to improve the accuracy of bearing failure diagnostics by finding a small number of high-quality fault signature patterns. The features extracted by the SWSE are then passed on to a kernel extreme learning machine (KELM) classifier. The proposed SWSE-KELM algorithm is evaluated using two bearing vibration signal databases obtained from Case Western Reserve University. We compare our SWSE feature extraction method to other well-known methods in the literature, such as stationary wavelet packet singular entropy (SWPSE) and decimated wavelet packet singular entropy (DWPSE). The experimental results show that the SWSE-KELM consistently outperforms both the SWPSE-KELM and DWPSE-KELM methods. Further, our SWSE method requires fewer features than the other two evaluated methods, which makes our SWSE-KELM algorithm simpler and faster.

Entropy doi: 10.3390/e19100539

Authors: Carlos Plata Antonio Prados

We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to a general class of models with conserved momentum and non-conserved energy. Our theoretical predictions, obtained within the first Sonine approximation, show an excellent agreement with the numerical results. Furthermore, we prove that the observed non-monotonic relaxation is consistent with the monotonic decay of the non-equilibrium entropy.

Entropy doi: 10.3390/e19100538

Authors: Guoqiang Xu Haochun Zhang Xiu Zhang Yan Jin

Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled on related positions following the TO design. However, owing to the integration and rotation of materials in tunable cells, they might lead to extra thermal losses as compared with the previous continuum design. This paper focuses on investigating the thermodynamic properties of tunable cells under related design parameters. The universal expression for the local entropy generation rate in such metamaterial systems is obtained considering the influence of rotation. A series of contrast schemes are established to describe the thermodynamic process and thermal energy distributions from the viewpoint of entropy analysis. Moreover, effects of design parameters on thermal dissipations and system irreversibility are investigated. In conclusion, more thermal dissipations and stronger thermodynamic processes occur in a system with larger conductivity ratios and rotation angles. This paper presents a detailed description of the thermodynamic properties of metamaterial tunable cells and provides reference for selecting appropriate design parameters on related positions to fabricate more efficient and energy-economical switchable TO devices.

Entropy doi: 10.3390/e19100536

Authors: Francisco Vega Reyes Antonio Lasanta

We analyze the transport properties of a low density ensemble of identical macroscopic particles immersed in an active fluid. The particles are modeled as inelastic hard spheres (granular gas). The non-homogeneous active fluid is modeled by means of a non-uniform stochastic thermostat. The theoretical results are validated with a numerical solution of the corresponding kinetic equation (direct simulation Monte Carlo method). We show a steady flow in the system that is accurately described by Navier-Stokes (NS) hydrodynamics, even for high inelasticity. Surprisingly, we find that the deviations from NS hydrodynamics for this flow are stronger as the inelasticity decreases. The active fluid action is modeled here with a non-uniform fluctuating volume force. This is a relevant result given that hydrodynamics of particles in complex environments, such as biological crowded environments, is still a question under intense debate.

Entropy doi: 10.3390/e19100535

Authors: Alessandro Bravetti

We give a short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics. Some relevant examples are provided along the way. We conclude by giving insights into possible future directions.

Entropy doi: 10.3390/e19100532

Authors: Mingtian Li Jihua Ma

We consider the sets of quasi-regular points in the countable symbolic space. We measure the sizes of the sets by Billingsley-Hausdorff dimension defined by Gibbs measures. It is shown that the dimensions of those sets, always bounded from below by the convergence exponent of the Gibbs measure, are given by a variational principle, which generalizes Li and Ma’s result and Bowen’s result.

Entropy doi: 10.3390/e19100534

Authors: Chinmaya Panigrahy Angel Garcia-Pedrero Ayan Seal Dionisio Rodríguez-Esparragón Nihar Mahato Consuelo Gonzalo-Martín

The Fractal Dimension (FD) of an image defines its roughness using a real number which is highly associated with the human perception of surface roughness. It has been applied successfully to many computer vision applications such as texture analysis, segmentation and classification. Several techniques can be found in the literature to estimate FD. One such technique is Differential Box Counting (DBC). Its performance is influenced by many parameters. In particular, the box height is directly related to the gray-level variations over the image grid, which adversely affects the performance of DBC. In this work, a new method for estimating the box height is proposed without changing the other parameters of DBC. The proposed box height has been determined empirically and depends only on the image size. All the experiments have been performed on a simulated Fractal Brownian Motion (FBM) database and the Brodatz database. It has been shown experimentally that the proposed box height improves the performance of DBC, Shifting DBC, Improved DBC and Improved Triangle DBC, bringing their estimates closer to the actual FD values of the simulated FBM images.
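For reference, a minimal sketch of classical DBC is given below, using the conventional box height h = s·G/M that the paper proposes to replace with an empirically determined, size-dependent height. The square 8-bit image, gray-level count, and box sizes are illustrative assumptions.

```python
import numpy as np

def dbc_fd(img, sizes=(2, 4, 8, 16)):
    # Classical Differential Box Counting estimate of fractal dimension.
    # Box height follows the conventional choice h = s * G / M; the paper
    # under discussion replaces this with an empirical, size-only height.
    M = img.shape[0]  # assumes a square grayscale image
    G = 256           # assumes 8-bit gray levels
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M
        n_boxes = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = img[i:i + s, j:j + s]
                # boxes of height h needed to span the block's gray range
                n_boxes += int(np.ceil((block.max() + 1) / h)
                               - np.ceil((block.min() + 1) / h)) + 1
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(n_boxes))
    # FD is the slope of log N_r against log(1/r)
    return float(np.polyfit(log_inv_r, log_N, 1)[0])
```

A perfectly flat image has FD 2, which the sketch reproduces; rougher gray-level surfaces push the estimate toward 3.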

Entropy doi: 10.3390/e19100523

Authors: Berik Koichubekov Viktor Riklefs Marina Sorokina Ilya Korshukov Lyudmila Turgunova Yelena Laryushina Riszhan Bakirova Gulmira Muldaeva Ernur Bekov Makhabbat Kultenova

Lagged Poincaré plots have been successful in characterizing abnormal cardiac function. However, current research practices do not favour any specific lag of Poincaré plots, thus complicating the comparison of results from different researchers in their analysis of the heart rate of healthy subjects and patients. We investigated the informative nature of lagged Poincaré plots in different states of the autonomic nervous system. This was tested in three models: different age groups, groups with different balances of autonomous regulation, and hypertensive patients. First, correlation analysis shows that for lag l = 6, SD1/SD2 has a weak (r = 0.33) correlation with linear parameters of heart rate variability (HRV); for l greater than 6 it displays even less correlation with linear parameters, but the changes in SD1/SD2 become statistically insignificant. Secondly, surrogate data tests show that the real SD1/SD2 is statistically different from its surrogate value, supporting the conclusion that the heart rhythm has nonlinear properties. Thirdly, the three models showed that for different functional states of the autonomic nervous system (ANS), the SD1/SD2 ratio varied only for lags l = 5 and 6. All of this allows us to give a cautious recommendation to use SD1/SD2 with lags 5 and 6 as a nonlinear characteristic of HRV. These data could serve as the basis for continuing research into the standardisation of nonlinear analytic methods.

Entropy doi: 10.3390/e19100533

Authors: Rui Tang Simon Fong Nilanjan Dey Raymond Wong Sabah Mohammed

Recently, a new algorithm named dynamic group optimization (DGO) has been proposed, which lends itself strongly to exploration and exploitation. Although DGO has demonstrated its efficacy in comparison to other classical optimization algorithms, it has two computational drawbacks. The first is related to the two mutation operators of DGO, which may decrease the diversity of the population and limit the search ability. The second is the homogeneity of the updated population information, which is selected only from companions in the same group; this may result in premature convergence and deteriorate the mutation operators. In order to deal with these two problems, a new hybridized algorithm is proposed in this paper, which combines the dynamic group optimization algorithm with the cross entropy method. The cross entropy method samples the problem space by generating candidate solutions from a probability distribution and then updates the distribution based on the better candidate solutions discovered. The cross entropy operator not only enlarges the promising search area but also ensures that the new solution takes all the surrounding useful information into consideration. The proposed algorithm is tested on 23 up-to-date benchmark functions; the experimental results verify that the proposed algorithm is more effective and efficient than other contemporary population-based swarm algorithms.
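The cross entropy component can be sketched in a few lines: sample candidates from a parametric distribution, then refit the distribution to the best samples. Below is a Gaussian variant on a standard benchmark function; all parameters are illustrative and this is not the hybrid DGO-CE algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_minimize(f, dim, n_samples=100, elite_frac=0.2, iters=50):
    # Cross entropy method: sample candidates from a Gaussian, refit the
    # Gaussian to the elite (best-scoring) samples, and repeat.
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    n_elite = int(n_samples * elite_frac)
    for _ in range(iters):
        pop = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = pop[np.argsort([f(x) for x in pop])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

sphere = lambda x: float((x ** 2).sum())  # a standard benchmark function
best = cross_entropy_minimize(sphere, dim=5)
```

The refitting step is what "updates the distribution based on the better candidate solutions discovered": the sampling region contracts around the elite set.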

Entropy doi: 10.3390/e19100490

Authors: Muhammad Qasim Zafar Hayat Khan Ilyas Khan Qasem Al-Mdallal

The entropy generation due to heat transfer and fluid friction in mixed convective peristaltic flow of a methanol–Al2O3 nanofluid is examined. Maxwell’s thermal conductivity model is used in the analysis. Velocity and temperature profiles are utilized in the computation of the entropy generation number. The effects of the involved physical parameters on velocity, temperature, entropy generation number, and Bejan number are discussed and explained graphically.

Entropy doi: 10.3390/e19100531

Authors: Ryan James James Crutchfield

Accurately determining dependency structure is critical to understanding a complex system’s organization. We recently showed that the transfer entropy fails in a key aspect of this—measuring information flow—due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that Shannon information measures (entropy and mutual information, in their conditional and multivariate forms) can fail to accurately ascertain multivariate dependencies due to their conflation of qualitatively different relations among variables. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful relationships among variables within joint probability distributions. We close by demonstrating that such distributions exist across an arbitrary set of variables.

Entropy doi: 10.3390/e19100530

Authors: Abdullah Makkeh Dirk Theis Raul Vicente

Bertschinger, Rauh, Olbrich, Jost, and Ay (Entropy, 2014) have proposed a definition of a decomposition of the mutual information MI(X : Y, Z) into shared, synergistic, and unique information by way of solving a convex optimization problem. In this paper, we discuss the solution of their convex program from theoretical and practical points of view.

Entropy doi: 10.3390/e19100529

Authors: Xin Li Bin Dai Zheng Ma

The model for a broadcast channel with confidential messages (BC-CM) plays an important role in the physical layer security of modern communication systems. In recent years, it has been shown that a noiseless feedback channel from the legitimate receiver to the transmitter increases the secrecy capacity region of the BC-CM. However, at present, the feedback coding scheme for the BC-CM only focuses on producing secret keys via noiseless feedback, and other usages of the feedback need to be further explored. In this paper, we propose a new feedback coding scheme for the BC-CM. The noiseless feedback in this new scheme is not only used to produce secret keys for the legitimate receiver and the transmitter but is also used to generate update information that allows both receivers (the legitimate receiver and the wiretapper) to improve their channel outputs. From a binary example, we show that this full utilization of noiseless feedback helps to increase the secrecy level of the previous feedback scheme for the BC-CM.

Entropy doi: 10.3390/e19100528

Authors: Reinaldo Arellano-Valle Javier Contreras-Reyes Milan Stehlík

The problem of measuring the disparity of a particular probability density function from a normal one has been addressed in several recent studies. The most used technique to deal with the problem has been exact expressions using information measures over particular distributions. In this paper, we consider a class of asymmetric distributions with a normal kernel, called Generalized Skew-Normal (GSN) distributions. We measure the degrees of disparity of these distributions from the normal distribution by using exact expressions for the GSN negentropy in terms of cumulants. Specifically, we focus on skew-normal and modified skew-normal distributions. Then, we establish the Kullback–Leibler divergences between each GSN distribution and the normal one in terms of their negentropies to develop hypothesis testing for normality. Finally, we apply this result to condition factor time series of anchovies off northern Chile.

Entropy doi: 10.3390/e19100527

Authors: Johannes Rauh Pradeep Banerjee Eckehard Olbrich Jürgen Jost Nils Bertschinger David Wolpert

Suppose we have a pair of information channels, κ1 and κ2, with a common input. The Blackwell order is a partial order over channels that compares κ1 and κ2 by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, κ1 is said to be Blackwell-inferior to κ2 if and only if κ1 can be constructed by garbling the output of κ2. A related partial order stipulates that κ2 is more capable than κ1 if the mutual information between the input and output is larger for κ2 than for κ1 for any distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where κ1 is less capable than κ2 but not Blackwell-inferior. We show that this may even happen when κ1 is constructed by coarse-graining the inputs of κ2. Such a coarse-graining is a special kind of “pre-garbling” of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel. This contradicts the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions.
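Output garbling, and the fact that a Blackwell-inferior channel is necessarily less capable, can be made concrete with row-stochastic matrices: garbling the output of κ2 means post-multiplying it by a stochastic matrix λ, and the data-processing inequality then bounds the garbled channel's mutual information for every input distribution. A small numerical sketch, with channel and garbling matrices chosen arbitrarily for illustration:

```python
import numpy as np

def mutual_information(p_x, K):
    # I(X;Y) in bits for input distribution p_x and row-stochastic channel K.
    p_xy = p_x[:, None] * K        # joint distribution of (input, output)
    p_y = p_xy.sum(axis=0)
    mask = p_xy > 0
    return float((p_xy[mask]
                  * np.log2(p_xy[mask] / np.outer(p_x, p_y)[mask])).sum())

kappa2 = np.array([[0.9, 0.1],     # an illustrative binary channel
                   [0.1, 0.9]])
lam = np.array([[0.8, 0.2],        # stochastic post-processing of the output
                [0.3, 0.7]])
kappa1 = kappa2 @ lam              # Blackwell-inferior by construction

p_x = np.array([0.5, 0.5])
mi1 = mutual_information(p_x, kappa1)
mi2 = mutual_information(p_x, kappa2)
```

Here mi1 < mi2, and the same inequality holds for every input distribution, which is exactly the "more capable" relation.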

Entropy doi: 10.3390/e19100526

Authors: Cao Zhao Ercai Chen Xiucheng Hong Xiaoyao Zhou

In this paper, using the notion of packing pressure, we establish a formula for the packing pressure of a factor map. We also give an application to conformal repellers.

Entropy doi: 10.3390/e19100525

Authors: Ahmed Elaiw Mohammed Alghamdi Nicola Bellomo

This paper presents a modeling approach, followed by entropy calculations of the dynamics of large systems of interacting active particles viewed as living—hence, complex—systems. Active particles are partitioned into functional subsystems; their state is modeled by a discrete scalar variable, while the state of the overall system is defined by a probability distribution function over the state of the particles. The aim of this paper consists of contributing to a further development of the mathematical kinetic theory of active particles.

Entropy doi: 10.3390/e19100524

Authors: Frank Critchley Paul Marriott

The Information Geometry of extended exponential families has received much recent attention in a variety of important applications, notably categorical data analysis, graphical modelling and, more specifically, log-linear modelling. The essential geometry here comes from the closure of an exponential family in a high-dimensional simplex. In parallel, there has been a great deal of interest in the purely Fisher–Riemannian structure of (extended) exponential families, most especially in the Markov chain Monte Carlo literature. These parallel developments raise challenges, addressed here, at a variety of levels: both theoretical and practical, and, relatedly, conceptual and methodological. Central to this endeavour, this paper makes explicit the underlying geometry of these two areas via an analysis of the limiting behaviour of the fundamental geodesics of Information Geometry, these being Amari’s (+1) and (0)-geodesics, respectively. Overall, a substantially more complete account of the Information Geometry of extended exponential families is provided than has hitherto been the case. We illustrate the importance and benefits of this novel formulation through applications.

Entropy doi: 10.3390/e19100522

Authors: Cong Sun Ke Liu Dahu Zheng Wenbao Ai

This paper considers a two-way relay network, where two legitimate users exchange messages through several cooperative relays in the presence of an eavesdropper, and the Channel State Information (CSI) of the eavesdropper is imperfectly known. The Amplify-and-Forward (AF) relay protocol is used. We design the relay beamforming weights to minimize the total relay transmit power, while requiring the Signal-to-Noise Ratios (SNRs) of the legitimate users to be higher than given thresholds and the achievable rate of the eavesdropper to be upper-bounded. Due to the imperfect CSI, a robust optimization problem is formulated. A novel iterative algorithm is proposed, in which the line search technique is applied and feasibility is preserved during iterations. In each iteration, two Quadratically-Constrained Quadratic Programming (QCQP) subproblems and a one-dimensional subproblem are optimally solved. The optimality property of the robust optimization problem is analyzed. Simulation results show that the proposed algorithm performs very close to the non-robust model with perfect CSI in terms of the obtained relay transmit power, and achieves a higher secrecy rate compared to existing work. Numerically, the proposed algorithm converges very quickly, and more than 85% of the problems are solved optimally.

Entropy doi: 10.3390/e19100521

Authors: Candelario Hernández-Gómez Rogelio Basurto-Flores Bibiana Obregón-Quintana Lev Guzmán-Vargas

In the present work, we quantify the irregularity of different European languages belonging to four linguistic families (Romance, Germanic, Uralic and Slavic) and an artificial language (Esperanto). We modified a well-known method to calculate the approximate and sample entropy of written texts. We find differences in the degree of irregularity between the families, and our method, which is based on the search for regularities in a sequence of symbols, consistently distinguishes between natural and synthetic randomized texts. Moreover, we extended our study to the case where multiple scales are accounted for, as in multiscale entropy analysis. Our results revealed that real texts have a non-trivial structure compared to the ones obtained from randomization procedures.
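For symbolic data such as text, sample entropy reduces to counting exact template matches of lengths m and m+1. A minimal sketch along those lines (not the authors' modified estimator, and with an illustrative template length) is:

```python
from math import log

def sample_entropy(seq, m=2):
    # Sample entropy for a symbolic sequence: a template of length m
    # "matches" another only if the symbols agree exactly.
    def count_matches(k):
        templates = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
        return sum(templates[i] == templates[j]
                   for i in range(len(templates))
                   for j in range(i + 1, len(templates)))
    b, a = count_matches(m), count_matches(m + 1)
    return -log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly periodic text scores near zero, a text with no repeated (m+1)-grams scores infinite, and natural language falls in between, with randomization pushing the value upward.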

Entropy doi: 10.3390/e19100506

Authors: Mohammad Abdollahzadeh Jamalabadi Mohammad Safaei Abdullah Alrashed Truong Nguyen Enio Bandarra Filho

Thermal loading by radiant heaters is used in building heating and hot structure design applications. In this research, the characteristics of the thermal radiative heating of an enclosure by a distinct heater are investigated from the point of view of the second law of thermodynamics. The governing equations of conservation of mass, momentum, and energy (fluid and solid) are solved by the finite volume method and the semi-implicit method for pressure linked equations (SIMPLE) algorithm. Radiant heaters are modeled by constant heat flux elements, and the lower wall is held at a constant temperature while the other boundaries are adiabatic. The thermal conductivity and viscosity of the fluid are temperature-dependent, which leads to complex partial differential equations with nonlinear coefficients. The parameter study is based on the amount of thermal load (represented by the heating number) as well as geometrical configuration parameters, such as the aspect ratio of the enclosure and the number of radiant heaters. The results present the effect of thermal and geometrical parameters on entropy generation and its distribution field. Furthermore, the effect of thermal radiative heating on both components of entropy generation (viscous dissipation and heat dissipation) is investigated.

Entropy doi: 10.3390/e19100520

Authors: Hossein Foroozand Steven Weijs

Over the past two decades, the Bootstrap AGGregatING (bagging) method has been widely used for improving simulation. The computational cost of this method scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The novel procedure proposed in this study is the Entropy Ensemble Filter (EEF), which uses the most informative training data sets in the ensemble rather than all ensemble members created by the bagging method. The results of this study indicate the efficiency of the proposed method in application to synthetic data simulation on a sinusoidal signal, a sawtooth signal, and a composite signal. The EEF method can reduce the computational time of simulation by around 50% on average while maintaining predictive performance at the same level as the conventional method, in which all of the ensemble models are used for simulation. The analysis of the error gradient (root mean square error of ensemble averages) shows that using the 40% most informative ensemble members of the set initially defined by the user appears to be most effective.
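A sketch of the filtering step, under the assumption, consistent with the description above, that "most informative" means the largest Shannon entropy of a bootstrap resample's empirical histogram. The ensemble size, bin count, and test signal are illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_ensemble_filter(data, n_boot=20, keep_frac=0.4, bins=10):
    # Draw bootstrap resamples (as in bagging), rank them by the Shannon
    # entropy of their empirical histogram, and keep the most informative
    # fraction; 40% is the fraction the study reports as most effective.
    resamples = [rng.choice(data, size=len(data), replace=True)
                 for _ in range(n_boot)]
    def shannon(sample):
        counts, _ = np.histogram(sample, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())
    ranked = sorted(resamples, key=shannon, reverse=True)
    return ranked[: max(1, int(keep_frac * n_boot))]

signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 500))  # sinusoidal test signal
kept = entropy_ensemble_filter(signal)
```

Only the kept resamples would then be used to train ensemble members, which is where the roughly 50% computational saving comes from.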

Entropy doi: 10.3390/e19100519

Authors: Dragutin Mihailović Gordan Mimić Paola Gualtieri Ilija Arsenić Carlo Gualtieri

Turbulence is often expressed in terms of either irregular or random fluid flows, without quantification. In this paper, a methodology to evaluate the randomness of turbulence using measures based on the Kolmogorov complexity (KC) is proposed. This methodology is applied to experimental data from a turbulent flow developing in a laboratory channel with a canopy of three different densities. The methodology is also compared with the traditional approach based on classical turbulence statistics.

Entropy doi: 10.3390/e19100518

Authors: Melvin Leok Jun Zhang

The divergence function in information geometry, and the discrete Lagrangian in discrete geometric mechanics each induce a differential geometric structure on the product manifold Q × Q. We aim to investigate the relationship between these two objects, and the fundamental role that duality, in the form of Legendre transforms, plays in both fields. By establishing an analogy between these two approaches, we will show how a fruitful cross-fertilization of techniques may arise from switching formulations based on the cotangent bundle T*Q (as in geometric mechanics) and the tangent bundle TQ (as in information geometry). In particular, we establish, through variational error analysis, that the divergence function agrees with the exact discrete Lagrangian up to third order if and only if Q is a Hessian manifold.

Entropy doi: 10.3390/e19100517

Authors: Giacomo Gradenigo Eric Bertin

Broadly distributed random variables with a power-law distribution f(m) ∼ m^(−(1+α)) are known to generate condensation effects. This means that, when the exponent α lies in a certain interval, the largest variable in a sum of N (independent and identically distributed) terms is, for large N, of the same order as the sum itself. In particular, when the distribution has infinite mean (0 < α < 1) one finds unconstrained condensation, whereas for α > 1 constrained condensation takes place upon fixing the total mass to a large enough value M = ∑_{i=1}^N m_i > M_c. In both cases, a standard indicator of the condensation phenomenon is the participation ratio Y_k = ⟨∑_i m_i^k / (∑_i m_i)^k⟩ (k > 1), which takes a finite value for N → ∞ when condensation occurs. To better understand the connection between constrained and unconstrained condensation, we study here the situation when the total mass is fixed to a superextensive value M ∼ N^(1+δ) (δ > 0), hence interpolating between the unconstrained condensation case (where the typical value of the total mass scales as M ∼ N^(1/α) for α < 1) and the extensive constrained mass. In particular, we show that for exponents α < 1 a condensate phase for values δ > δ_c = 1/α − 1 is separated from a homogeneous phase at δ < δ_c by a transition line, δ = δ_c, where a weak condensation phenomenon takes place. We focus on the evaluation of the participation ratio as a generic indicator of condensation, also recalling or presenting results in the standard cases of unconstrained mass and of fixed extensive mass.
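The participation ratio is easy to estimate numerically for unconstrained i.i.d. masses. The sketch below uses Pareto-distributed samples with illustrative exponents and sample counts; it contrasts an infinite-mean case, where Y_2 remains finite, with a light-tailed case, where Y_2 vanishes as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(m, k=2):
    # Y_k = < sum_i m_i^k / (sum_i m_i)^k >, averaged over realizations;
    # m has shape (realizations, N).
    m = np.asarray(m, dtype=float)
    return float(((m ** k).sum(axis=1) / m.sum(axis=1) ** k).mean())

N = 1000
# Heavy tail, alpha = 0.5 < 1: unconstrained condensation, Y_2 stays finite
y2_heavy = participation_ratio(rng.pareto(0.5, size=(200, N)) + 1.0)
# Light tail, alpha = 3 > 1 with extensive mass: Y_2 shrinks roughly as 1/N
y2_light = participation_ratio(rng.pareto(3.0, size=(200, N)) + 1.0)
```

In the heavy-tailed case the largest mass carries a finite fraction of the sum, so Y_2 stays of order one; in the light-tailed case no single term dominates.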

Entropy doi: 10.3390/e19100516

Authors: Ofelie De Wel Mario Lavanga Alexander Dorado Katrien Jansen Anneleen Dereymaeker Gunnar Naulaers Sabine Van Huffel

Automated analysis of electroencephalographic (EEG) data for the brain monitoring of preterm infants has gained attention in recent decades. In this study, we analyze the complexity of neonatal EEG, quantified using multiscale entropy. The aim of the current work is to investigate how EEG complexity evolves during electrocortical maturation and whether complexity features can be used to classify sleep stages. First, we developed a regression model that estimates the postmenstrual age (PMA) using a combination of complexity features. Then, these features were used to build a sleep stage classifier. The analysis was performed on a database consisting of 97 EEG recordings from 26 prematurely born infants, recorded between 27 and 42 weeks PMA. The results of the regression analysis revealed a significant positive correlation between EEG complexity and the infant’s age. Moreover, the PMA of the neonate could be estimated with a root mean squared error of 1.88 weeks. The sleep stage classifier was able to discriminate quiet sleep from non-quiet sleep with an area under the curve (AUC) of 90%. These results suggest that the complexity of the brain dynamics is a highly useful index for brain maturation quantification and neonatal sleep stage classification.
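Multiscale entropy is, in essence, coarse-graining followed by sample entropy at each scale. The stdlib-only sketch below uses generic parameters (m = 2, r = 0.2) and white noise as input; it is an illustration of the technique, not the study's pipeline:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): A, B = counts of matching templates of length m+1, m
    n = len(x)
    mu = sum(x) / n
    tol = r * (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    def matches(length):
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if all(abs(x[i + k] - x[j + k]) <= tol for k in range(length)):
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def coarse_grain(x, scale):
    # Non-overlapping averages of `scale` consecutive samples
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def multiscale_entropy(x, scales=(1, 2, 3)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]

rng = random.Random(1)
mse = multiscale_entropy([rng.gauss(0.0, 1.0) for _ in range(300)])
```

Real implementations typically use far longer recordings and vectorized template matching; this quadratic version is only meant to make the definition concrete.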

Entropy doi: 10.3390/e19100515

Authors: Pin-Hsun Lin Eduard Jorswieck

In this overview paper, we introduce an application of stochastic orders in wireless communications. In particular, we show how to use stochastic orders to investigate ergodic capacity results for fast fading Gaussian memoryless multiuser channels when only the statistics of the channel state information are known at the transmitters (CSIT). In general, the characterization of the capacity region of multiuser channels with only statistical CSIT is open. To attain our goal, in this work we resort to classifying the random channels through their probability distributions, by which we can obtain the capacity results. To be more precise, we derive sufficient conditions for attaining information-theoretic channel orders, such as degraded and very strong interference, by exploiting the usual stochastic order and the same marginal property. After that, we apply the developed scheme to Gaussian interference channels and Gaussian broadcast channels. We also extend the framework to channels with multiple antennas. Possible scenarios for channel enhancement under statistical CSIT are also discussed. Several practical examples, such as Rayleigh fading and Nakagami-m fading, illustrate the application of the derived results.
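The usual stochastic order invoked above can be checked numerically: X ≤_st Y holds iff F_X(t) ≥ F_Y(t) for all t. A minimal sketch, assuming Rayleigh fading amplitudes with hypothetical scale parameters:

```python
import math

def rayleigh_cdf(t, sigma):
    # CDF of a Rayleigh-distributed channel amplitude with scale sigma
    return 1.0 - math.exp(-t * t / (2.0 * sigma * sigma))

def usual_stochastic_order(cdf_x, cdf_y, grid):
    # X <=_st Y holds iff F_X(t) >= F_Y(t) for every t (checked on a grid)
    return all(cdf_x(t) >= cdf_y(t) for t in grid)

grid = [0.01 * i for i in range(1, 1001)]
weak = lambda t: rayleigh_cdf(t, 1.0)    # weaker fading channel
strong = lambda t: rayleigh_cdf(t, 2.0)  # stronger fading channel
dominated = usual_stochastic_order(weak, strong, grid)
```

A grid check like this is only a numerical surrogate for the analytic conditions derived in the paper.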

Entropy doi: 10.3390/e19100503

Authors: Jundong Wang Yao Yao

Fatigue damage is an irreversible process whose progression can be represented by an entropy increase, and it is well known that the second law of thermodynamics describes irreversible processes. Based on the concept of entropy, the second law provides the direction of change of a system. In the current study, a new entropy increment model is developed within the framework of continuum damage mechanics. The proposed model is applied to determine the entropy increment during the fatigue damage process. Based on the relationship between entropy and fatigue life, a new fatigue life prediction model with clear physical meaning is proposed. To verify the proposed model, eight groups of experiments were performed under different aging and loading conditions. The theoretical predictions show good agreement with the experimental data. It is noted that ε_th/ε_cr becomes larger and the residual fatigue life shorter both at higher aging temperatures and at higher strain amplitudes.

Entropy doi: 10.3390/e19100514

Authors: Yunfei Hou Feiyan Liu Jianbo Gao Changxiu Cheng Changqing Song

Financial time series analyses have played an important role in developing some of the fundamental economic theories. However, many published analyses of financial time series focus on the long-term average behavior of a market, and thus shed little light on the temporal evolution of a market, which from time to time may be interrupted by stock crashes and financial crises. Consequently, in terms of complexity science, it is still unknown whether market complexity decreases or increases during a stock crash. To answer this question, we have examined the temporal variation of permutation entropy (PE) in Chinese stock markets by computing PE from high-frequency composite indices of two stock markets: the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). We found that PE decreased markedly in two notable time windows, each encompassing a rapid market rise and then a few gigantic stock crashes. One window started in the middle of 2006, long before the 2008 global financial crisis, and continued up to early 2011. The other window was more recent, starting in the middle of 2014 and ending in the middle of 2016. Since both windows were at least one year long, and preceded stock crashes by at least half a year, the decrease in PE can serve as an invaluable warning sign for regulators and investors alike.
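Permutation entropy can be sketched in a few lines; the ordinal-pattern parameters below (order 3, delay 1) are illustrative, not those of the study:

```python
import math
import random
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    # Normalized PE: Shannon entropy of ordinal-pattern frequencies / log(order!)
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + j * delay] for j in range(order)]
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum(c / total * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

rng = random.Random(0)
pe_noise = permutation_entropy([rng.random() for _ in range(2000)])  # near 1
pe_ramp = permutation_entropy(list(range(2000)))  # fully ordered: 0
```

Low PE signals that a few ordinal patterns dominate the series, i.e., reduced complexity of the index dynamics.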

Entropy doi: 10.3390/e19100513

Authors: Gregg Jaeger

Werner Heisenberg introduced the notion of quantum potentia in order to accommodate the indeterminism associated with quantum measurement. Potentia captures the capacity of the system to be found to possess a property upon a corresponding sharp measurement in which it is actualized. The specific potentiae of the individual system are represented formally by the complex amplitudes in the measurement bases of the eigenstate in which it is prepared. All predictions for future values of system properties can be made by an experimenter using the probabilities which are the squared moduli of these amplitudes that are the diagonal elements of the density matrix description of the pure ensemble to which the system, so prepared, belongs. Heisenberg considered the change of the ensemble attribution following quantum measurement to be analogous to the classical change in Gibbs’ thermodynamics when measurement of the canonical ensemble enables a microcanonical ensemble description. This analogy, presented by Heisenberg as operating at the epistemic level, is analyzed here. It has led some to claim not only that the change of the state in measurement is classical mechanical, bringing its quantum character into question, but also that Heisenberg held this to be the case. Here, these claims are shown to be incorrect, because the analogy concerns the change of ensemble attribution by the experimenter upon learning the result of the measurement, not the actualization of the potentia responsible for the change of the individual system state which—in Heisenberg’s interpretation of quantum mechanics—is objective in nature and independent of the experimenter’s knowledge.

Entropy doi: 10.3390/e19100512

Authors: Yiduan Wang Shenzhou Zheng Wei Zhang Jun Wang

This paper investigates the complex behaviors and entropy properties of a novel random complex interacting stock price dynamics, which is established by combining a stochastic contact process and a compound Poisson process, capturing stock return fluctuations caused by the spread of investors’ attitudes and random jump fluctuations caused by the macroeconomic environment, respectively. To better understand the complex fluctuation behaviors of the proposed price dynamics, entropy analyses of random logarithmic price returns and the corresponding absolute returns of simulated datasets with different parameter sets are performed, including permutation entropy, fractional permutation entropy, sample entropy and fractional sample entropy. We find that a larger λ or γ leads to more complex dynamics, and that the absolute return series exhibit less complex dynamics than the return series. To verify the rationality of the proposed compound price model, the corresponding analyses of actual market datasets are also performed for comparison. The empirical results verify that the proposed price model can reproduce some important complex dynamics of actual stock markets to some extent.

Entropy doi: 10.3390/e19100511

Authors: Eun-jin Kim Lucille-Marie Tenkès Rainer Hollerbach Ovidiu Radulescu

Many systems in nature and laboratories are far from equilibrium and exhibit significant fluctuations, invalidating the key assumptions of small fluctuations and short memory time in or near equilibrium. A full knowledge of Probability Distribution Functions (PDFs), especially time-dependent PDFs, becomes essential in understanding far-from-equilibrium processes. We consider a stochastic logistic model with multiplicative noise, which has gamma distributions as stationary PDFs. We numerically solve the transient relaxation problem and show that as the strength of the stochastic noise increases, the time-dependent PDFs increasingly deviate from gamma distributions. For sufficiently strong noise, a transition occurs whereby the PDF never reaches a stationary state but instead forms a peak that becomes ever more narrowly concentrated at the origin. The addition of an arbitrarily small amount of additive noise regularizes these solutions and re-establishes the existence of stationary solutions. In addition to diagnostic quantities such as the mean value, standard deviation, skewness and kurtosis, the transitions between different solutions are analysed in terms of entropy and information length, i.e., the total number of statistically distinguishable states that a system passes through in time.
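A minimal sketch of such a stochastic logistic model, integrated with the Euler–Maruyama scheme; the drift, noise amplitudes and time step below are illustrative assumptions, not the paper's parameters:

```python
import random

def logistic_em(d_mult, d_add=0.0, x0=0.5, dt=1e-3, steps=5000, seed=0):
    # Euler-Maruyama for dx = x(1 - x) dt + sqrt(2 D) x dW + sqrt(2 D_add) dV
    rng = random.Random(seed)
    x = x0
    root_dt = dt ** 0.5
    for _ in range(steps):
        x += (x * (1.0 - x) * dt
              + (2.0 * d_mult) ** 0.5 * x * rng.gauss(0.0, root_dt)
              + (2.0 * d_add) ** 0.5 * rng.gauss(0.0, root_dt))
        x = max(x, 1e-12)  # crude clamp to keep the trajectory in the physical domain
    return x

# Weak multiplicative noise: the ensemble relaxes near the carrying capacity
ensemble = [logistic_em(0.05, seed=s) for s in range(50)]
mean_x = sum(ensemble) / len(ensemble)
```

Building a time-dependent PDF means histogramming many such trajectories at successive times; the transition described in the abstract appears as probability piling up near the origin when d_mult is large.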

Entropy doi: 10.3390/e19100509

Authors: Ehsan Gholamalizadeh Jae Chung

Energy and exergy analyses were carried out for a pilot parabolic solar dish-Stirling system set up at a site in Kerman City, located in a sunny desert area of Iran. Variations in energy and exergy efficiency were considered during the daytime hours of the average day of each month of the year. A maximum collector energy efficiency of 54% and a maximum total energy efficiency of 12.2% were predicted in July, while during the period between November and February the efficiency values were extremely low. The maximum collector exergy efficiency was 41.5% in July, while the maximum total exergy efficiency reached 13.2%. The energy losses of the main parts of the system, as percentages of the total losses, were also reported. Results showed that the major energy and exergy losses occurred in the receiver. The second biggest portion of energy losses occurred in the Stirling engine, while the portion of exergy loss in the concentrator was higher than in the Stirling engine. Finally, the performance of the Kerman pilot was compared to that of the EuroDish project.

Entropy doi: 10.3390/e19100510

Authors: Jian Jiao Xindong Sui Shushi Gu Shaohua Wu Qinyu Zhang

The Ka-band and higher Q/V-band channels can provide appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demand for high-data-rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert–Elliot channel, to capture a channel capacity that randomly switches between a good and a bad state. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDP) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that the proposed scheme can effectively improve the throughput.
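The two-state Gilbert–Elliot abstraction can be sketched by simulating a good/bad Markov chain; the transition probabilities and per-state rates below are hypothetical:

```python
import random

def gilbert_elliot_throughput(p_gb, p_bg, rate_good, rate_bad, steps, seed=0):
    # Two-state Markov channel: good <-> bad with transition probs p_gb, p_bg
    rng = random.Random(seed)
    state, total = "good", 0.0
    for _ in range(steps):
        total += rate_good if state == "good" else rate_bad
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"
        elif rng.random() < p_bg:
            state = "good"
    return total / steps

# Stationary fraction of time in "good" is p_bg / (p_gb + p_bg) = 0.8 here,
# so the long-run average rate is about 0.8 * 1.0 + 0.2 * 0.1 = 0.82
avg_rate = gilbert_elliot_throughput(0.05, 0.2, 1.0, 0.1, steps=20000)
```

The POMDP scheme in the paper goes further: the transmitter never observes the state directly and must act on a belief updated from delayed feedback.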

Entropy doi: 10.3390/e19100508

Authors: Jiping Yang Yijun Feng Wanhua Qiu

Yang and Qiu proposed, and recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived a representation of risky choices as an expected utility term plus a constant multiple of the Shannon entropy, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in a portfolio. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of the 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolio of the 7 (10) stocks selected by the EU-E decision model has almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.

Entropy doi: 10.3390/e19100476

Authors: Armando Fontalvo Jose Solano Cristian Pedraza Antonio Bula Arturo Gonzalez Quiroga Ricardo Vasquez Padilla

Low-grade heat sources such as solar thermal, geothermal, exhaust gases and industrial waste heat are suitable alternatives for power generation that can be exploited by means of small-scale Organic Rankine Cycles (ORC). This paper combines thermodynamic optimization and economic analysis to assess the performance of single- and dual-pressure ORCs operating with different organic fluids and targeting small-scale applications. Maximum power output is lower than 45 kW, while the temperature of the heat source varies in the range 100–200 °C. The studied working fluids, namely R1234yf, R1234ze(E) and R1234ze(Z), are selected based on environmental, safety and thermal performance criteria. The Levelized Cost of Electricity (LCOE) and Specific Investment Cost (SIC) for two operating conditions are presented: maximum power output and maximum thermal efficiency. Results showed that R1234ze(Z) achieves the highest net power output (up to 44 kW) when net power output is optimized. The regenerative ORC achieves the highest performance when thermal efficiency is optimized (up to 18%). The simple ORC is the most cost-effective among the studied cycle configurations, requiring an energy selling price of 0.3 USD/kWh to obtain a payback period of 8 years. According to the SIC results, the working fluid R1234ze(Z) exhibits great potential for the simple ORC when compared to conventional R245fa.

Entropy doi: 10.3390/e19090507

Authors: Giancarlo Franzese Ivan Latella J. Rubi

n/a

Entropy doi: 10.3390/e19090498

Authors: Samuel Adesanya Michael Fakoya

In the present work, entropy generation in the flow and heat transfer of a couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, asymmetrical slip conditions are imposed on the channel walls. The upper wall of the channel is subjected to a constant heat flux, while the lower wall is insulated. The equations governing the fluid flow are formulated, non-dimensionalized and solved using the Adomian decomposition method. The Adomian series solutions for the velocity and temperature fields are then used to compute the entropy generation rate and the inherent heat irreversibility in the flow domain. The effects of various fluid parameters are presented graphically and discussed extensively.

Entropy doi: 10.3390/e19090505

Authors: Yan Feng Xue-Qin Jiang Jia Hou Hui-Ming Wang Yi Yang

The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. Define the transmission rate as the ratio between the number of raw key bits obtained in the AD phase and the number of bits transmitted during AD. The unidirectional SKA, whose transmission rate is 0.5, can be realized by using the original two-way wiretap channel as the AD phase. In this paper, we establish an efficient bidirectional SKA whose transmission rate is nearly 1 by modifying the two-way wiretap channel and using the modified channel as the AD phase. The bidirectional SKA can be extended to multiple rounds of SKA with the same performance and transmission rate. For multiple rounds of bidirectional SKA, we provide the bit error rate (BER) performance of the main channel and the eavesdropper’s channel, and the secret-key capacity. It is shown that the BER of the main channel is lower than that of the eavesdropper’s channel, and we prove that the transmission rate approaches 1 as the number of rounds grows large. Moreover, the secret-key capacity C_s ranges from 0.04 to 0.1 as the channel error probability ranges from 0.01 to 0.15 in the binary symmetric channel (BSC). The secret-key capacity approaches 0.3 as the signal-to-noise ratio increases in the additive white Gaussian noise (AWGN) channel.

Entropy doi: 10.3390/e19090504

Authors: Shuangshuang Lin Zhigang Liu Keting Hu

In this paper, a new approach for fault detection and location of open switch faults in the closed-loop inverter fed vector controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on Choi–Williams distribution (CWD) as a statistical tool can effectively indicate the presence of transients and locations in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is appropriate for the transient changes detection of complex non-linear and non-stationary signals. Based on the analyses of currents in normal and fault conditions, SK based on CWD and WPESE are combined with the DC component method. SK based on CWD and WPESE are used for the fault detection, and the DC component method is used for the fault localization. This approach can diagnose the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform are carried out and the experimental results verify the feasibility and effectiveness of the diagnosis method.

Entropy doi: 10.3390/e19090501

Authors: Liangjun Yu Liangxiao Jiang Dianhong Wang Lungan Zhang

Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to be able to approximate the ground-truth attribute dependencies; meanwhile, the probability estimation in ODEs is effective, thus leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging all of a constrained class of classifiers. However, all one-dependence estimators in AODE have the same weights and are treated equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between the different root attribute value and the class. Our approach uses two different attribute value weighting measures: the Kullback–Leibler (KL) measure and the information gain (IG) measure, and thus two different versions are created, which are simply denoted by AVWAODE-KL and AVWAODE-IG, respectively. We experimentally tested them using a collection of 36 University of California at Irvine (UCI) datasets and found that they both achieved better performance than some other state-of-the-art Bayesian classifiers used for comparison.
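The information gain (IG) weighting measure used by AVWAODE-IG can be illustrated on toy data; the dataset and names below are hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of the empirical class distribution, in bits
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attr, labels):
    # IG(C; A) = H(C) - sum_v P(A = v) * H(C | A = v)
    n = len(labels)
    cond = 0.0
    for v in set(attr):
        sub = [c for a, c in zip(attr, labels) if a == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

# Toy data: the root attribute value perfectly predicts the class
attr = ["sunny", "sunny", "rain", "rain", "rain", "sunny"]
label = ["no", "no", "yes", "yes", "yes", "no"]
ig = information_gain(attr, label)  # 1.0 bit: a maximally informative root
```

In AVWAODE the analogous correlation is computed per root attribute value, so different ODEs in the ensemble receive different weights rather than the uniform weights of plain AODE.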

Entropy doi: 10.3390/e19090502

Authors: Gengxi Zhang Xiaoling Su Vijay P. Singh Olusola O. Ayantobo

Terrestrial vegetation dynamics are closely influenced by both hydrological processes and climate change. This study investigated the relationships between vegetation patterns and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled variables in the middle reaches of the Hei River basin. Based on the spatial distribution of mutual information, the whole study area was divided into five sub-regions. In each sub-region, nested statistical models were applied to model the NDVI on the grid and regional scales, respectively. Results showed that the annual average NDVI increased at a rate of 0.005 per year over the past 11 years. In the desert regions, the NDVI increased significantly with increasing precipitation and temperature, and a high accuracy of the NDVI retrieval model was obtained by coupling precipitation and temperature, especially in sub-region I. In the oasis regions, groundwater was also an important factor driving vegetation growth, and the rise of the groundwater level contributed to the growth of vegetation. However, the relationship was weaker in artificial oasis regions (sub-region III and sub-region V) due to the influence of human activities such as irrigation. The overall correlation coefficient between the observed NDVI and the modeled NDVI was 0.97. The outcomes of this study are suitable for ecosystem monitoring, especially in the realm of climate change. Further studies are necessary and should consider more factors, such as runoff and irrigation.

Entropy doi: 10.3390/e19090497

Authors: Pei Yang Yue Wu Liqiang Jin Hongwen Yang

This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is adopted from an M-ary two-dimensional constellation with equal probability for both the uplink, i.e., multiple access channel (MAC), and the downlink, i.e., broadcast channel (BC), scenarios. Based on successive interference cancellation (SIC) and the entropy power Gaussian approximation, it is shown that both the multi-user MAC and BC can be approximated by a bank of parallel channels, with the channel gains modified by an extra attenuation factor equal to the negative exponential of the capacity of the interfering users. With this result, the capacity of the MAC and BC with an arbitrary number of users and arbitrary constellations can be easily calculated, in sharp contrast with traditional Monte Carlo simulation, whose computational cost increases exponentially with the number of users. Further, the multi-user sum capacity under different power allocation strategies, including equal power allocation, equal capacity power allocation and maximum capacity power allocation, is also investigated. For equal capacity power allocation, a recursive relation for the power allocation solution is derived. For maximum capacity power allocation, the necessary condition for optimal power allocation is obtained, and an optimal algorithm for the power allocation optimization problem is proposed based on this necessary condition.

Entropy doi: 10.3390/e19090499

Authors: Eugenio Vogel Patricio Vargas Gonzalo Saravia Julio Valdes Antonio Ramirez-Pastor Paulo Centres

In the present paper, we discuss the interpretation of some of the results of thermodynamics in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements, in what is called the thermodynamic limit, but not all of the approximations valid under those conditions can be extended to all properties of objects with fewer than a thousand elements. The starting point is the Ising model in two dimensions (2D), where an analytic solution exists, which allows validating the numerical techniques used in the present article. From there on, we introduce several variations bearing in mind small systems, such as the nanoscopic or even subnanoscopic particles that are nowadays produced for several applications. Magnetization is the main property investigated, with two possible singular devices in mind. The size of the systems (number of magnetic sites) is decreased so as to appreciate the departure from the results valid in the thermodynamic limit; periodic boundary conditions are eliminated to approach the reality of small particles; 1D, 2D and 3D systems are examined to appreciate the differences established by dimensionality in this small world; upon diluting the lattices, the effect of coordination number (bonding) is also explored; and since the 2D Ising model is equivalent to the clock model with q = 2 degrees of freedom, we combine previous results with the supplementary degrees of freedom coming from the variation of q up to q = 20. Most of the previous results are numeric; however, for the case of a very small system, we obtain the exact partition function to compare with the conclusions coming from our numerical results.
Conclusions can be summarized in the following way: the laws of thermodynamics remain the same, but the interpretation of the results, averages and numerical treatments need special care for systems with less than about a thousand constituents, and this might need to be adapted for different properties or devices.
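For a genuinely small system, the exact partition function mentioned above can indeed be brute-forced. The sketch below does this for a 2×2 Ising lattice with periodic boundaries (each neighbour pair bonded twice); it is an illustration, not the authors' code:

```python
import math
from itertools import product

def ising_exact(n_sites, bonds, temp):
    # Brute-force Z and <|m|> over all 2^n spin configurations (J = 1)
    z = mag = 0.0
    for spins in product((-1, 1), repeat=n_sites):
        energy = -sum(spins[i] * spins[j] for i, j in bonds)
        w = math.exp(-energy / temp)
        z += w
        mag += abs(sum(spins)) / n_sites * w
    return z, mag / z

# 2x2 square lattice, periodic boundaries: each neighbour pair is doubly bonded
bonds = [(0, 1), (0, 1), (2, 3), (2, 3), (0, 2), (0, 2), (1, 3), (1, 3)]
_, m_low = ising_exact(4, bonds, temp=0.5)    # deep in the ordered regime
_, m_high = ising_exact(4, bonds, temp=50.0)  # near the random-spin limit
```

Note that ⟨|m|⟩ rather than ⟨m⟩ is averaged: in a finite system the magnetization reverses sign over time, which is exactly the kind of interpretive care the conclusions call for.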

Entropy doi: 10.3390/e19090500

Authors: Jonathan Schrock Alex McCaskey Kathleen Hamilton Travis Humble Neena Imam

A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
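The recurrent-network recall itself (before mapping to a quantum annealer) can be sketched as a Hopfield network trained with the Hebb learning rule; the patterns and sizes below are toy examples, not the ensembles tested in the paper:

```python
def hebbian_weights(patterns):
    # Hebb rule: W_ij = sum over patterns of p_i * p_j, zero self-coupling
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, sweeps=5):
    # Asynchronous sign updates, descending the Ising energy to a fixed point
    s = list(probe)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [[1, 1, -1, -1, 1, -1], [-1, 1, 1, -1, -1, 1]]
w = hebbian_weights(stored)
out = recall(w, [1, 1, -1, -1, -1, -1])  # stored[0] with one bit flipped
```

The Ising energy minimized by these updates is what the adiabatic quantum optimizer minimizes instead, with the probe encoded as a bias on the spins.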

Entropy doi: 10.3390/e19090352

Authors: Paulo Rotela Junior Luiz Rocha Giancarlo Aquila Pedro Balestrassi Rogério Peruchi Liviam Lacerda

Recently, different methods have been proposed for portfolio optimization and decision making on investment issues. This article presents a new method for portfolio formation based on Data Envelopment Analysis (DEA) and the entropy function. This new portfolio optimization method applies DEA in association with a model resulting from the insertion of the entropy function directly into the optimization procedure. First, the DEA model was applied to perform a pre-selection of the assets. Then, the assets rated as efficient were submitted to the proposed model, which results from inserting the entropy function into the simplified Sharpe portfolio optimization model. As a result, improved asset participation in the portfolio was obtained. In the DEA model, several variables were evaluated and a low value of beta was achieved, guaranteeing greater robustness of the portfolio. The entropy function provided not only greater diversity but also more feasible asset allocation. Additionally, the proposed method obtained a better portfolio performance, measured by the Sharpe ratio, than the comparative methods.
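The diversification role of the entropy term can be illustrated with the Shannon entropy of a weight vector (a schematic proxy only, not the paper's full DEA-entropy model):

```python
import math

def weight_entropy(weights):
    # Shannon entropy of a normalized portfolio weight vector (in nats);
    # higher entropy means allocation is spread more evenly across assets
    return -sum(w * math.log(w) for w in weights if w > 0)

concentrated = [0.97, 0.01, 0.01, 0.01]
diversified = [0.25, 0.25, 0.25, 0.25]
h_conc = weight_entropy(concentrated)
h_div = weight_entropy(diversified)  # log(4), maximal for four assets
```

Adding such a term to the Sharpe objective penalizes corner solutions that pile all weight into one or two assets, which is how the entropy insertion yields more feasible allocations.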

Entropy doi: 10.3390/e19090496

Authors: Jerry Gibson Preethi Mahadevan

We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results. These new expressions allow the log likelihood ratio to be of interest in applications beyond those of just spectral matching for speech.
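For Gaussian sources the entropy power reduces to the variance, so the log ratio of entropy powers equals twice the difference of the differential entropies; a quick numerical check of that identity (illustrative only, not the paper's speech-coding setting):

```python
import math

def gaussian_entropy(var):
    # Differential entropy of a Gaussian, h = 0.5 * ln(2*pi*e*var), in nats
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def entropy_power(h):
    # Entropy power N = exp(2h) / (2*pi*e); equals the variance for a Gaussian
    return math.exp(2.0 * h) / (2.0 * math.pi * math.e)

h1, h2 = gaussian_entropy(4.0), gaussian_entropy(1.0)
log_power_ratio = math.log(entropy_power(h1) / entropy_power(h2))
entropy_difference = 2.0 * (h1 - h2)  # the two quantities coincide
```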

Entropy doi: 10.3390/e19090495

Authors: Subhajit Majhi Patrick Mitran

We study a class of two-transmitter two-receiver dual-band Gaussian interference channels (GIC) which operates over the conventional microwave and the unconventional millimeter-wave (mm-wave) bands. This study is motivated by future 5G networks where additional spectrum in the mm-wave band complements transmission in the incumbent microwave band. The mm-wave band has a key modeling feature: due to severe path loss and relatively small wavelength, a transmitter must employ highly directional antenna arrays to reach its desired receiver. This feature causes the mm-wave channels to become highly directional, and thus can be used by a transmitter to transmit to its designated receiver or the other receiver. We consider two classes of such channels, where the underlying GIC in the microwave band has weak and strong interference, and obtain sufficient channel conditions under which the capacity is characterized. Moreover, we assess the impact of the additional mm-wave band spectrum on the performance, by characterizing the transmit power allocation for the direct and cross channels that maximizes the sum-rate of this dual-band channel. The solution reveals conditions under which different power allocations, such as allocating the power budget only to direct or only to cross channels, or sharing it among them, becomes optimal.
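The classical single-band baseline for such sum-rate power allocation is water-filling over parallel Gaussian channels; the sketch below shows only that baseline (the paper's dual-band allocation additionally accounts for the directional mm-wave cross channels):

```python
def waterfill(gains, power):
    # Water-filling over parallel Gaussian channels with unit noise:
    # maximize sum_i log(1 + g_i * p_i) subject to sum_i p_i = power
    order = sorted(range(len(gains)), key=lambda i: -gains[i])
    active = len(order)
    while active > 0:
        level = (power + sum(1.0 / gains[order[i]] for i in range(active))) / active
        if level >= 1.0 / gains[order[active - 1]]:
            break  # water level covers every active channel
        active -= 1
    alloc = [0.0] * len(gains)
    for i in range(active):
        alloc[order[i]] = level - 1.0 / gains[order[i]]
    return alloc

p = waterfill([2.0, 1.0, 0.1], power=1.0)  # weakest channel gets no power
```

The qualitative behaviour matches the abstract's finding: depending on the gains, the optimum may concentrate the budget on some channels and shut others off entirely.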

Entropy doi: 10.3390/e19090493

Authors: Steeve Zozor David Puertas-Centeno Jesús Dehesa

Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal bound from below. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality was already proposed by Bercher and Lutwak; it is a particular case of the general one, in which the three parameters are linked, allowing one to determine the sharp lower bound and the associated probability density of minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We also determine the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function.
Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential.

]]>Entropy doi: 10.3390/e19090492

Authors: Bjarne Andresen Christopher Essex

We investigate the importance of the time and length scales at play in our descriptions of Nature. What can we observe at the atomic scale, at the laboratory (human) scale, and at the galactic scale? Which variables make sense? For every scale we wish to understand we need a set of variables which are linked through closed equations, i.e., everything can meaningfully be described in terms of those variables without the need to investigate other scales. Examples from physics, chemistry, and evolution are presented.

]]>Entropy doi: 10.3390/e19090494

Authors: Michael Wibral Conor Finn Patricia Wollstadt Joseph Lizier Viola Priesemann

Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, until very recently, formal information-theoretic definitions were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems.

]]>Entropy doi: 10.3390/e19090489

Authors: Lianrong Zheng Weifeng Pan Yifan Li Daiyi Luo Qian Wang Guanzheng Liu

Obstructive sleep apnea (OSA) is a common sleep disorder that is often associated with reduced heart rate variability (HRV), indicating autonomic dysfunction. HRV is mainly composed of high-frequency components attributed to parasympathetic activity and low-frequency components attributed to sympathetic activity. Although time-domain and frequency-domain features of HRV have been used in sleep studies, the complex interaction between nonlinear independent frequency components and OSA is less well known. This study included 30 electrocardiogram recordings (20 from OSA patients and 10 from healthy subjects) with apnea or normal labels assigned to each 1-min segment. All segments were divided into three groups: an N-N group (normal segments of normal subjects), a P-N group (normal segments of OSA subjects) and a P-OSA group (apnea segments of OSA subjects). Frequency-domain indices and interaction indices were extracted from the segmented RR intervals. The frequency-domain indices included nuLF, nuHF, and the LF/HF ratio; the interaction indices included mutual information (MI) and transfer entropy (TE (H→L) and TE (L→H)). Our results demonstrated that the LF/HF ratio was significantly higher in the P-OSA group than in the N-N and P-N groups. MI was significantly larger in the P-OSA group than in the P-N group. TE (H→L) and TE (L→H) showed a significant decrease in the P-OSA group compared to the P-N and N-N groups. TE (H→L) was significantly negatively correlated with the LF/HF ratio in the P-N group (r = −0.789, p = 0.000) and the P-OSA group (r = −0.661, p = 0.002). Our results indicate that MI and TE are powerful tools for evaluating sympathovagal modulation in OSA. Moreover, sympathovagal modulation is more imbalanced in OSA patients during apnea events than during event-free periods.
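The mutual information index above can be illustrated with a simple plug-in estimator. The sketch below is a hypothetical, minimal version (the study's actual MI and TE estimators for the RR-interval frequency components are not specified here): it bins two sequences into uniform-width histograms and computes their mutual information in bits.

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """Plug-in estimate (in bits) of I(X; Y) from two equal-length
    sequences, after uniform-width binning of each."""
    def discretize(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0  # guard against constant input
        return [min(int((s - lo) / width), bins - 1) for s in v]
    xb, yb = discretize(x), discretize(y)
    n = len(xb)
    pxy = Counter(zip(xb, yb))          # joint histogram
    px, py = Counter(xb), Counter(yb)   # marginal histograms
    # I(X;Y) = sum p(i,j) log2[ p(i,j) / (p(i) p(j)) ]
    return sum((c / n) * math.log2(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())
```

As sanity checks, MI of a sequence with itself recovers the entropy of its binned values, while MI with a constant sequence is zero.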

]]>Entropy doi: 10.3390/e19090491

Authors: Linda Senigagliesi Marco Baldi Franco Chiaraluce

We propose and assess an on–off protocol for communication over wireless wiretap channels with security at the physical layer. By taking advantage of suitable cryptographic primitives, the protocol we propose allows two legitimate parties to exchange confidential messages with some chosen level of semantic security against passive eavesdroppers, and without needing either pre-shared secret keys or public keys. The proposed method leverages the noisy and fading nature of the channel and exploits coding and all-or-nothing transforms to achieve the desired level of semantic security. We show that the use of fake packets in place of skipped transmissions during low channel quality periods yields significant advantages in terms of time needed to complete transmission of a secret message. Numerical examples are provided considering coding and modulation schemes included in the WiMax standard, thus showing that the proposed approach is feasible even with existing practical devices.

]]>Entropy doi: 10.3390/e19090488

Authors: Mohit Kumar Ram Pachori U. Acharya

Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated in time, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpreting the changes in the ECG. Therefore, a method for the automatic diagnosis of MI from ECG beats using the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the ECG signals are segmented into beats. Then, FAWT is applied to each ECG beat, decomposing it into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest-performing one. We achieved the highest classification accuracy of 99.31% using the LS-SVM classifier. We also incorporated Wilcoxon and Bhattacharya ranking methods and observed no improvement in performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis.
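Sample entropy, the feature extracted from the FAWT subbands, has a standard definition that can be sketched directly. The following is a minimal illustrative implementation; the paper's exact parameter choices are not given here, so the defaults m = 2 and r = 0.2 × SD (a common convention) are assumptions.

```python
import math

def sample_entropy(signal, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    two subsequences similar for m samples stay similar for m + 1
    samples (self-matches excluded). Tolerance is r times the
    standard deviation of the signal."""
    n = len(signal)
    mean = sum(signal) / n
    tol = r * math.sqrt(sum((s - mean) ** 2 for s in signal) / n)

    def matches(length):
        # count template pairs within Chebyshev distance tol
        t = [signal[i:i + length] for i in range(n - length + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol)

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular signal (e.g., a sampled sine) yields a lower SampEn than an irregular one (e.g., a chaotic logistic-map series), which is why it discriminates between subband signals of differing complexity.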

]]>Entropy doi: 10.3390/e19090484

Authors: Anming Dong Haixia Zhang Minglei Shu Dongfeng Yuan

This paper considers power splitting (PS)-based simultaneous wireless information and power transfer (SWIPT) for multiple-input multiple-output (MIMO) interference channel networks where multiple transceiver pairs share the same frequency spectrum. As the PS model is adopted, an individual receiver splits the received signal into two parts for information decoding (ID) and energy harvesting (EH), respectively. Aiming to minimize the total transmit power, transmit precoders, receive filters and PS ratios are jointly designed under a predefined signal-to-interference-plus-noise ratio (SINR) and EH constraints. The formulated joint transceiver design and power splitting problem is non-convex and thus difficult to solve directly. In order to effectively obtain its solution, the feasibility conditions of the formulated non-convex problem are first analyzed. Based on the analysis, an iterative algorithm is proposed by alternatively optimizing the transmitters together with the power splitting factors and the receivers based on semidefinite programming (SDP) relaxation. Moreover, considering the prohibitive computational cost of the SDP for practical applications, a low-complexity suboptimal scheme is proposed by separately designing interference-suppressing transceivers based on interference alignment (IA) and optimizing the transmit power allocation together with splitting factors. The transmit power allocation and receive power splitting problem is then recast as a convex optimization problem and solved efficiently. To further reduce the computational complexity, a low-complexity scheme is proposed by calculating the transmit power allocation and receive PS ratios in closed-form. Simulation results show the effectiveness of the proposed schemes in achieving SWIPT for MIMO interference channel (IC) networks.

]]>Entropy doi: 10.3390/e19090479

Authors: Arnab Ghosh

In the spirit of Bose–Einstein condensation, we present a detailed account of the statistical description of condensation phenomena for a Fermi–Dirac gas, following the works of Born and Kothari. For bosons, the condensed phase below a certain critical temperature permits macroscopic occupation of the lowest-energy single-particle state; for fermions, due to the Pauli exclusion principle, the condensed phase occurs only in the form of singly occupied dense modes at the highest energy state. In spite of these rudimentary differences, our recent findings [Ghosh and Ray, 2017] identify the foregoing phenomenon as condensation-like coherence among fermions, in analogy to a Bose–Einstein condensate, which is collectively described by a coherent matter wave. To reach the above conclusion, we employ the close relationship between the statistical methods of bosonic and fermionic fields pioneered by Cahill and Glauber. In addition to our previous results, we describe in this mini-review how the highest momentum (energy) for individual fermions, a prerequisite for the condensation process, can be specified in terms of the natural length and energy scales of the problem. The existence of such condensed phases, which is of obvious significance in the context of elementary particles, has also been scrutinized.

]]>Entropy doi: 10.3390/e19090485

Authors: Joseph Smiga Jacob Taylor

The simplest cosmology—the Friedmann–Robertson–Walker–Lemaître (FRW) model—describes a spatially homogeneous and isotropic universe where the scale factor is the only dynamical parameter. Here we consider how quantized electromagnetic fields become entangled with the scale factor in a toy version of the FRW model. A system consisting of a photon, source, and detector is described in such a universe, and we find that the detection of a redshifted photon by the detector system constrains possible scale factor superpositions. Thus, measuring the redshift of the photon is equivalent to a weak measurement of the underlying cosmology. We also consider a potential optomechanical analogy system that would enable experimental exploration of these concepts. The analogy focuses on the effects of photon redshift measurement as a quantum back-action on metric variables, where the position of a movable mirror plays the role of the scale factor. By working in the rotating frame, an effective Hubble equation can be simulated with a simple free moving mirror.

]]>Entropy doi: 10.3390/e19090486

Authors: José-Luis Muñoz-Cobo Rafael Mendizábal Arturo Miquel Cesar Berna Alberto Escrivá

The determination of the probability distribution function (PDF) of uncertain input and model parameters in engineering application codes is an issue of importance for uncertainty quantification methods. One of the approaches that can be used for the PDF determination of input and model parameters is the application of methods based on the maximum entropy principle (MEP) and the maximum relative entropy (MREP). These methods determine the PDF that maximizes the information entropy when only partial information about the parameter distribution is known, such as some moments of the distribution and its support. In addition, this paper shows the application of the MREP to update the PDF when the parameter must fulfill some technical specifications (TS) imposed by the regulations. Three computer programs have been developed: GEDIPA, which provides the parameter PDF using empirical distribution function (EDF) methods; UNTHERCO, which performs the Monte Carlo sampling on the parameter distribution; and DCP, which updates the PDF considering the TS and the MREP. Finally, the paper displays several applications and examples for the determination of the PDF applying the MEP and the MREP, and the influence of several factors on the PDF.
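As a minimal illustration of the maximum entropy principle applied here, consider the textbook problem (not taken from the paper) of finding the MaxEnt distribution on a finite support given only its mean: the solution is an exponential family, p_k ∝ exp(−λ x_k), with λ fixed by the mean constraint. A bisection sketch:

```python
import math

def maxent_discrete(support, target_mean, tol=1e-12):
    """Maximum-entropy distribution on a finite support with a given
    mean: p_k proportional to exp(-lam * x_k); lam is found by
    bisection, since the mean is decreasing in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:
            lo = mid  # need a larger lam to pull the mean down
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# When the constrained mean equals the unconstrained one (a fair die,
# mean 3.5), the MaxEnt solution reduces to the uniform distribution.
```

With a higher target mean (e.g., 4.5 on a die), the solution tilts exponentially toward the larger faces while still satisfying the moment constraint, which is exactly the structure the MEP assigns when only partial information is known.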

]]>Entropy doi: 10.3390/e19090483

Authors: Isabella Gallino

In contrast to pure metals and most non-glass-forming alloys, metallic glass-formers are moderately strong liquids in terms of fragility. The notion of fragility of an undercooled liquid reflects the sensitivity of the viscosity of the liquid to temperature changes and describes the degree of departure of the liquid kinetics from the Arrhenius equation. In general, the fragility of metallic glass-formers increases with the complexity of the alloy, with differences between the alloy families, e.g., Pd-based alloys being more fragile than Zr-based alloys, which are more fragile than Mg-based alloys. Here, experimental data are assessed for 15 bulk metallic glass-formers, including the novel and technologically important systems based on Ni-Cr-Nb-P-B, Fe-Mo-Ni-Cr-P-C-B, and Au-Ag-Pd-Cu-Si. The data for the equilibrium viscosity are analyzed using the Vogel–Fulcher–Tammann (VFT) equation, the Mauro–Yue–Ellison–Gupta–Allan (MYEGA) equation, and the Adam–Gibbs approach based on specific heat capacity data. An overall larger trend of the excess specific heat is experimentally observed for the more fragile supercooled liquids than for the stronger liquids. Moreover, the stronger the glass, the higher the free enthalpy barrier to cooperative rearrangements, suggesting the same microscopic origin and rigorously connecting the kinetic and thermodynamic aspects of fragility.

]]>Entropy doi: 10.3390/e19090480

Authors: Andrea Grigorescu Holger Boche Rafael Schaefer

Robust biometric authentication is studied from an information theoretic perspective. Compound sources are used to account for uncertainty in the knowledge of the source statistics and are further used to model certain attack classes. It is shown that authentication is robust against source uncertainty and a special class of attacks under the strong secrecy condition. A single-letter characterization of the privacy secrecy capacity region is derived for the generated and chosen secret key model. Furthermore, the question is studied whether small variations of the compound source lead to large losses of the privacy secrecy capacity region. It is shown that biometric authentication is robust in the sense that its privacy secrecy capacity region depends continuously on the compound source.

]]>Entropy doi: 10.3390/e19090482

Authors: Hyenkyun Woo

In image and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether or not the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this article, we study the domain of the beta-divergence and its connection to the Bregman-divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman-divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman-divergence, we can reformulate various variational models appearing in image processing problems into a unified framework, namely the Bregman variational model. This model has a strong advantage over the beta-divergence-based model due to the dual structure of the Bregman-divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that usually appears in synthetic aperture radar image processing.
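For reference, the scalar beta-divergence studied above can be written down directly; the sketch below uses the standard parametrization for beta outside {0, 1}, whose limits recover the generalized Kullback-Leibler (beta → 1) and Itakura-Saito (beta → 0) divergences.

```python
def beta_divergence(x, y, beta):
    """Scalar beta-divergence d_beta(x, y) for beta not in {0, 1}.
    beta = 2 recovers half the squared Euclidean distance."""
    return (x ** beta / (beta * (beta - 1))
            + y ** beta / beta
            - x * y ** (beta - 1) / (beta - 1))
```

Note that for beta = 2 the expression is defined for negative x as well, consistent with the domain extension discussed above: beta_divergence(-1.0, 1.0, 2) equals (−1 − 1)²/2 = 2, and d_beta(x, x) = 0 for any admissible x.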

]]>Entropy doi: 10.3390/e19090481

Authors: Muhammad Bhatti Mohsen Sheikholeslami Ahmed Zeeshan

A theoretical and mathematical model is presented to determine the entropy generation of electro-kinetically modulated peristaltic propulsion in magnetized nanofluid flow through a microchannel with Joule heating. The mathematical modeling is based on the energy, momentum, continuity, and entropy equations in the Cartesian coordinate system. The effects of viscous dissipation, heat absorption, magnetic field, and electrokinetic body force are also taken into account. The electrical potential terms are modeled by means of the Poisson–Boltzmann equation, the ionic Nernst–Planck equation, and the Debye length approximation. A perturbation method has been applied to solve the coupled nonlinear partial differential equations, and a series solution is obtained up to second order. The physical behavior of all the governing parameters is discussed for the pressure rise, velocity profile, entropy profile, and temperature profile.

]]>Entropy doi: 10.3390/e19090478

Authors: Feifan Zhang Huayong Zhang Tousheng Huang Tianxiang Meng Shengnan Ma

Wind-induced vegetation patterns were proposed a long time ago, but a dynamic vegetation-sand relationship has only recently been established. In this research, we transformed the continuous vegetation-sand model into a discrete model. Fixed points and their stability were then analyzed. Bifurcation analyses were carried out around the fixed point, including Neimark-Sacker and Turing bifurcations, and we mapped the parameter space for both bifurcations. Based on the bifurcation conditions, simulations were carried out around the bifurcation points. The simulation results showed that Neimark-Sacker and Turing bifurcations can induce the self-organization of complex vegetation patterns, among which labyrinth and striped patterns are the key results that can also be produced by the continuous model. Under the coupled effects of the two bifurcations, the simulation results show that vegetation patterns can still self-organize, but the pattern type changes. The patterns can be of Turing type, Neimark-Sacker type, or some other special type; the difference may depend on the relative intensity of each bifurcation. The calculation of entropy may help in understanding the variation of pattern types.

]]>Entropy doi: 10.3390/e19090442

Authors: George Thomas Manik Banik Sibasish Ghosh

We study coupled quantum systems as the working media of thermodynamic machines. Under a suitable phase-space transformation, the coupled systems can be expressed as a composition of independent subsystems. We find that for the coupled systems, the figures of merit, that is, the efficiency for the engine and the coefficient of performance for the refrigerator, are bounded (both from above and from below) by the corresponding figures of merit of the independent subsystems. We also show that the optimum work extractable from a coupled system is upper-bounded by the optimum work obtained from the uncoupled system, thereby showing that quantum correlations do not help in optimal work extraction. Further, we study two explicit examples: coupled spin- 1 / 2 systems and coupled quantum oscillators with analogous interactions. Interestingly, for a particular kind of interaction, the efficiency of the coupled oscillators outperforms that of the coupled spin- 1 / 2 systems when they work as heat engines. However, for the same interaction, the coefficient of performance behaves in the reverse manner when the systems work as refrigerators. Thus, the same coupling can cause opposite effects in the figures of merit of the heat engine and the refrigerator.

]]>Entropy doi: 10.3390/e19090477

Authors: Linh Ma Jaehyung Park Jiseung Nam HoYong Ryu Jinsul Kim

Dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) is an advanced technology in video streaming designed to deal with the uncertainty of network states. However, this technology has one drawback: as the network states frequently and continuously change, the quality of a video stream fluctuates along with the network changes, which might reduce the quality of service. In recent years, many researchers have proposed adaptive streaming algorithms to reduce such fluctuations. However, these algorithms only consider the current state of the network and thus might produce inaccurate estimates of video quality in the near term. Therefore, in this paper, we propose a method using fuzzy logic and a moving-average technique to reduce mobile video quality fluctuation in Dynamic Adaptive Streaming over HTTP (DASH). First, we calculate the moving averages of the bandwidth and buffer values over a given period. On the basis of the differences between the real and average values, we propose a fuzzy logic system to deduce the video quality representation for the next request. In addition, we use the entropy rate of the bandwidth measurement sequence to measure the predictability/stability of our method. The experimental results show that our proposed method reduces video quality fluctuation and improves bandwidth utilization by 40% compared to existing methods.
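The smoothing idea can be illustrated with a toy rate-selection rule. The sketch below is a simplified stand-in: it replaces the paper's fuzzy-logic system with a plain threshold on the bandwidth moving average, and the bitrate ladder, window length, and safety margin are hypothetical values. It shows why averaging suppresses quality switches caused by transient bandwidth dips.

```python
def select_representation(bandwidth_history, ladder, window=5, margin=0.9):
    """Pick the highest bitrate not exceeding margin times the
    moving-average bandwidth over the last `window` samples.
    ladder: available representation bitrates, ascending (kbit/s)."""
    recent = bandwidth_history[-window:]
    avg = sum(recent) / len(recent)
    chosen = ladder[0]  # fall back to the lowest representation
    for rate in ladder:
        if rate <= margin * avg:
            chosen = rate
    return chosen
```

With a steady ~2 Mbit/s history the rule holds the 1200 kbit/s representation even when the most recent sample briefly dips to 450 kbit/s, whereas a purely instantaneous rule would drop to the lowest rung and cause a visible quality fluctuation.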

]]>Entropy doi: 10.3390/e19090235

Authors: Emmanuel Abbe Jiange Li Mokshay Madiman

In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for Z / 3 Z , the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (i.e., MSTD) sets in additive combinatorics. We also investigate polar codes for q-ary input channels using non-canonical kernels to construct the generator matrix and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction.
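The Z / 3 Z claim is easy to check numerically: for i.i.d. X, Y on Z / 3 Z, one can enumerate the distributions of X + Y and X − Y (mod 3) and compare their Shannon entropies. A minimal sketch:

```python
import math

def entropy_bits(p):
    """Shannon entropy of a probability vector, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def sum_diff_entropies(p, q=3):
    """Entropies (in bits) of X + Y and X - Y (mod q)
    for i.i.d. X, Y distributed according to p."""
    s = [0.0] * q
    d = [0.0] * q
    for i, pi in enumerate(p):
        for j, pj in enumerate(p):
            s[(i + j) % q] += pi * pj
            d[(i - j) % q] += pi * pj
    return entropy_bits(s), entropy_bits(d)
```

For any distribution p on Z / 3 Z, the result above says the difference entropy is never smaller than the sum entropy; with the uniform distribution both equal log2 3.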

]]>Entropy doi: 10.3390/e19090475

Authors: Francesco Villecco Arcangelo Pellegrino

In this paper, the problem of evaluating the uncertainties that originate in the complex design process of a new system is analyzed, paying particular attention to multibody mechanical systems. To this end, the Wiener-Shannon axioms are extended to non-probabilistic events, and a theory of information for non-repetitive events is used as a measure of the reliability of data. The selection of the solutions consistent with the values of the design constraints is performed by analyzing the complexity of the relation matrix and using the idea of information in the metric space. By comparing the alternatives in terms of the amount of entropy resulting from the various distributions, this method is capable of finding the optimal solution that can be obtained with the available resources. In the paper, the algorithmic steps of the proposed method are discussed and an illustrative numerical example is provided.

]]>Entropy doi: 10.3390/e19090474

Authors: Tycho Tax Pedro Mediano Murray Shanahan

In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.

]]>Entropy doi: 10.3390/e19090473

Authors: Mario Ciampini Paolo Mataloni Mauro Paternostro

Quantum networks are natural scenarios for the communication of information among distributed parties, and the arena of promising schemes for distributed quantum computation. Measurement-based quantum computing is a prominent example of how quantum networking, embodied by the generation of a special class of multipartite states called cluster states, can be used to achieve a powerful paradigm for quantum information processing. Here we analyze randomly generated cluster states in order to address the emergence of correlations as a function of the density of edges in a given underlying graph. We find that the most widespread multipartite entanglement does not correspond to the highest number of edges in the cluster. We extend the analysis to higher dimensions, finding similar results, which suggest the establishment of small-world structures in the entanglement sharing of randomised cluster states, which can be exploited in engineering more efficient quantum information carriers.

]]>Entropy doi: 10.3390/e19090469

Authors: Alok Maity Pinaki Chaudhury Suman Banik

Biochemical networks having similar functional pathways are often correlated due to cross-talk among the homologous proteins in the different networks. Using a stochastic framework, we address the functional significance of the cross-talk between two pathways. A theoretical analysis of generic MAPK pathways reveals that cross-talk is responsible for developing coordinated fluctuations between the pathways. The extent of correlation, evaluated in terms of an information-theoretic measure, provides directionality to the net information propagation. Stochastic time series suggest that the cross-talk generates synchronisation within a cell. In addition, the cross-interaction develops correlations between two different phosphorylated kinases expressed in each cell in a population of genetically identical cells. Depending on the number of inputs and outputs, we identify signal integration and signal bifurcation motifs that arise due to inter-pathway connectivity in the composite network. Analysis using partial information decomposition, an extended formalism of multivariate information calculation, also quantifies the net synergy in the information propagation through the branched pathways. Under this formalism, a signature of synergy or redundancy is observed due to the architectural difference in the branched pathways.

]]>Entropy doi: 10.3390/e19090472

Authors: Feng Chen Yi Gao Michael Galperin

Recent developments in nanoscale experimental techniques made it possible to utilize single molecule junctions as devices for electronics and energy transfer with quantum coherence playing an important role in their thermoelectric characteristics. Theoretical studies on the efficiency of nanoscale devices usually employ rate (Pauli) equations, which do not account for quantum coherence. Therefore, the question whether quantum coherence could improve the efficiency of a molecular device cannot be fully addressed within such considerations. Here, we employ a nonequilibrium Green function approach to study the effects of quantum coherence and dephasing on the thermoelectric performance of molecular heat engines. Within a generic bichromophoric donor-bridge-acceptor junction model, we show that quantum coherence may increase efficiency compared to quasi-classical (rate equation) predictions and that pure dephasing and dissipation destroy this effect.

]]>Entropy doi: 10.3390/e19090471

Authors: Yiming Fan Ling-Li Zeng Hui Shen Jian Qin Fuquan Li Dewen Hu

Imaging connectomics based on graph theory has become an effective and unique methodological framework for studying the functional connectivity patterns of the developing and aging brain. Normal brain development is characterized by continuous and significant network evolution through infancy, childhood, and adolescence, following specific maturational patterns. Normal aging is related to the disruption of some resting-state brain networks, which is associated with certain cognitive decline. It is a big challenge to design an integral metric to track connectome evolution patterns across the lifespan, which is key to understanding the principles of network organization in the human brain. In this study, we first defined a brain network eigen-entropy (NEE) based on the energy probability (EP) of each brain node. Next, we used the NEE to characterize the lifespan orderness trajectory of the whole-brain functional connectivity of 173 healthy individuals ranging in age from 7 to 85 years. The results revealed that, during the lifespan, the whole-brain NEE exhibited a significant non-linear decrease and the EP distribution shifted from concentration to wide dispersion, implying an enhancement of the orderness of the functional connectome with age. Furthermore, brain regions with significant EP changes from the flourishing period (7–20 years) to the youth period (23–38 years) were mainly located in the right prefrontal cortex and basal ganglia, and were involved in emotion regulation and executive function in coordination with the action of the sensory system, implying that self-awareness and voluntary control performance changed significantly during neurodevelopment. However, the changes from the youth period to middle age (40–59 years) were located in the mesial temporal lobe and caudate, which are associated with long-term memory, implying that the memory of the human brain begins to decline with age during this period.
Overall, the findings suggested that the human connectome shifted from a relatively anatomical driven state to an orderly organized state with lower entropy.

]]>Entropy doi: 10.3390/e19090470

Authors: Yan Jin Juan Du Zhiyuan Li Hongwu Zhang

Several fundamental concepts with respect to the second-law analysis (SLA) of the turbulent flows in gas turbines are discussed in this study. Entropy and exergy equations for compressible/incompressible flows in a rotating/non-rotating frame have been derived. The exergy transformation efficiency of a gas turbine, as well as the exergy transformation number for a single process step, have been proposed. The exergy transformation number indicates the overall performance of a single process in a gas turbine, including the local irreversible losses in it and its contribution to the exergy obtained from the combustion chamber. A more general formula for calculating local entropy generation rate densities is suggested. A test case of a compressor cascade has been employed to demonstrate the application of the developed concepts.

]]>Entropy doi: 10.3390/e19090468

Authors: Chol Kang Michelangelo Naim Vezha Boboeva Alessandro Treves

We study latching dynamics in the adaptive Potts model network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C and the number of stored memory patterns p. We find narrow regions, or bands in phase space, where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; while away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps as to span a transition matrix between states with few asymmetrical entries and limited entropy; while above, they tend to become random, with large entropy and bi-directional transition frequencies, but indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the p–S plane, and sublinearly in p–C. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space.

]]>Entropy doi: 10.3390/e19090467

Authors: Avihay Sadeh-Shirazi Uria Basher Haim Permuter

Let (S_{1,i}, S_{2,i}) ∼ i.i.d. p(s_1, s_2), i = 1, 2, …, be a memoryless, correlated partial side information sequence. In this work, we study channel coding and source coding problems where the partial side information (S_1, S_2) is available at the encoder and the decoder, respectively, and, additionally, either the encoder’s or the decoder’s side information is increased by a limited-rate description of the other’s partial side information. We derive six special cases of channel coding and source coding problems and we characterize the capacity and the rate-distortion functions for the different cases. We present a duality between the channel capacity and the rate-distortion cases we study. In order to find numerical solutions for our channel capacity and rate-distortion problems, we use the Blahut-Arimoto algorithm and convex optimization tools. Finally, we provide several examples corresponding to the channel capacity and the rate-distortion cases we presented.

]]>Entropy doi: 10.3390/e19090465

Authors: Maria Bertotti Giovanni Modanese

Why does the Maxwell-Boltzmann energy distribution for an ideal classical gas have an exponentially thin tail at high energies, while the Kaniadakis distribution for a relativistic gas has a power-law fat tail? We argue that a crucial role is played by the kinematics of the binary collisions. In the classical case the probability of an energy exchange far from the average (i.e., close to 0% or 100%) is quite large, while in the extreme relativistic case it is small. We compare these properties with the concept of “saving propensity”, employed in econophysics to define the fraction of their money that individuals put at stake in economic interactions.
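
The contrast between the two tails can be made concrete with the Kaniadakis κ-exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which replaces the ordinary Boltzmann factor and recovers it as κ → 0. The sketch below is our own illustration with an arbitrary κ, not a computation from the paper:

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k x)^(1/k).
    Recovers the ordinary exponential in the limit kappa -> 0."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

# Tail comparison: the Maxwell-Boltzmann factor exp(-E) decays exponentially,
# while the Kaniadakis factor exp_k(-E) decays as a power law ~ E^(-1/kappa).
kappa = 0.5
for E in (5.0, 10.0, 20.0):
    mb = math.exp(-E)
    kan = kappa_exp(-E, kappa)
    print(f"E={E:5.1f}  exp(-E)={mb:.3e}  exp_k(-E)={kan:.3e}  ratio={kan / mb:.3e}")
```

The growing ratio at large E is the "fat tail" the abstract refers to: high-energy states are far more probable in the relativistic (Kaniadakis) case.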

]]>Entropy doi: 10.3390/e19090466

Authors: Kab-Mun Cha Nitish Thakor Hyun-Chool Shin

In this paper, we propose novel quantitative electroencephalogram (qEEG) measures by exploiting three critical and distinct phases (isoelectric, fast progression, and slow progression) of qEEG time evolution. Critical time points where the phase transition occurs are calculated. Most conventional measures have two major disadvantages. Firstly, to obtain a meaningful time evolution over the raw electroencephalogram (EEG), these measures require baseline EEG activities before the subject’s injury. Secondly, conventional qEEG measures need at least 2∼3 h of recorded EEG signals to predict meaningful long-term neurological outcomes. Unlike the conventional qEEG measures, the proposed measures do not require the baseline EEG information before injury and, furthermore, can be calculated with only 20∼30 min of EEG data after cardiopulmonary resuscitation (CPR).

]]>Entropy doi: 10.3390/e19090464

Authors: Ibai Mugica Steven Roy Sébastien Poncet Jonathan Bouchard Hakim Nesreddine

This paper analyzes the energetic and exergy performance of an active magnetic regenerative refrigerator using water-based Al2O3 nanofluids as heat transfer fluids. A 1D numerical model has been extensively used to quantify the exergy performance of a system composed of a parallel-plate regenerator, magnetic source, pump, heat exchangers and control valves. Al2O3-water nanofluids are modelled using the CoolProp library and appropriate correlations, accounting for temperature-dependent properties. The results are discussed in terms of the coefficient of performance, the exergy efficiency, and the cooling power as a function of the nanoparticle volume fraction and blowing time for a given geometrical configuration. It is shown that while the heat transfer between the fluid and solid is enhanced, it is accompanied by smaller temperature gradients within the fluid and larger pressure drops when increasing the nanoparticle concentration. This leads, in all configurations, to lower performance compared to the base case with pure liquid water.
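
For readers unfamiliar with the exergy efficiency of a refrigerator, it is the ratio of the actual coefficient of performance (COP) to the reversible (Carnot) limit between the same two reservoirs. A minimal sketch with made-up operating temperatures, not the paper's data:

```python
def cop_carnot(t_cold, t_hot):
    """Ideal (Carnot) COP of a refrigerator working between t_cold and t_hot [K]."""
    return t_cold / (t_hot - t_cold)

def exergy_efficiency(cop, t_cold, t_hot):
    """Second-law (exergy) efficiency: actual COP over the reversible limit."""
    return cop / cop_carnot(t_cold, t_hot)

# Hypothetical operating point for a cooling stage (illustrative numbers only)
t_cold, t_hot = 285.0, 300.0   # K
cop = 3.0                      # actual coefficient of performance
print(cop_carnot(t_cold, t_hot))              # 19.0
print(exergy_efficiency(cop, t_cold, t_hot))  # well below 1, as for any real device
```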

]]>Entropy doi: 10.3390/e19090463

Authors: Renat Sibatov Vadim Shulezhko Vyacheslav Svetukhin

Anomalous advection-diffusion in two-dimensional semiconductor systems with coexisting energetic and structural disorder is described in the framework of a generalized model of multiple trapping on a comb-like structure. The basic equations of the model contain fractional-order derivatives. To validate the model, we compare analytical solutions with results of a Monte Carlo simulation of phonon-assisted tunneling in two-dimensional patterns of a porous nanoparticle agglomerate and a phase-separated bulk heterojunction. To elucidate the role of directed percolation, we calculate transient current curves of the time-of-flight experiment and the evolution of the mean squared displacement averaged over medium realizations. The variations of the anomalous advection-diffusion parameters as functions of electric field intensity and levels of energetic and structural disorder are presented.

]]>Entropy doi: 10.3390/e19090462

Authors: Shuai Chang Jialun Li Xiaomei Fu Liang Zhang

Energy harvesting (EH) has attracted a lot of attention in studies of cooperative communication networks for its capability of transferring energy from sources to relays. In this paper, we study the secrecy capacity of a cooperative compressed sensing amplify and forward (CCS-AF) wireless network in the presence of eavesdroppers based on an energy harvesting protocol. In this model, the source nodes send their information to the relays simultaneously, and then the relays perform EH from the received radio-frequency signals based on the power splitting-based relaying (PSR) protocol. The energy harvested by the relays will be used to amplify and forward the received information to the destination. The impacts of some key parameters, such as the power splitting ratio, energy conversion efficiency, relay location, and the number of relays, on the system secrecy capacity are analyzed through a group of experiments. Simulation results reveal that under certain conditions, the proposed EH relaying scheme can achieve higher secrecy capacity than traditional relaying strategies while consuming equal or even less power.
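
As a rough illustration of the quantities involved: under PSR, a fraction ρ of the received RF power is diverted to harvesting (with conversion efficiency η), and the secrecy capacity is the non-negative rate advantage of the destination link over the eavesdropper link. The sketch below uses our own simplified model and hypothetical parameter values, not the paper's CCS-AF simulation setup:

```python
import math

def harvested_power(p_source, channel_gain, rho, eta):
    """Power harvested at the relay under the power-splitting (PSR) protocol:
    a fraction rho of the received RF power, converted with efficiency eta.
    (Simplified illustrative model; parameter names are ours, not the paper's.)"""
    return eta * rho * p_source * channel_gain

def secrecy_capacity(snr_dest, snr_eve):
    """Secrecy capacity [bits/s/Hz]: destination rate minus eavesdropper rate,
    floored at zero."""
    return max(0.0, math.log2(1.0 + snr_dest) - math.log2(1.0 + snr_eve))

p_r = harvested_power(p_source=1.0, channel_gain=0.8, rho=0.3, eta=0.7)
print(p_r)                        # about 0.168 (power available for forwarding)
print(secrecy_capacity(15.0, 3.0))
print(secrecy_capacity(1.0, 3.0)) # eavesdropper stronger: secrecy capacity is 0
```

Raising ρ harvests more power for forwarding but leaves less signal power for information decoding; this trade-off is what makes the power splitting ratio a key parameter in the abstract.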

]]>Entropy doi: 10.3390/e19090461

Authors: Alyssa Adams Angelica Berner Paul Davies Sara Walker

A major conceptual step forward in understanding the logical architecture of living systems was advanced by von Neumann with his universal constructor, a physical device capable of self-reproduction. A necessary condition for a universal constructor to exist is that the laws of physics permit physical universality, such that any transformation (consistent with the laws of physics and availability of resources) can be caused to occur. While physical universality has been demonstrated in simple cellular automata models, so far these have not displayed a requisite feature of life—namely open-ended evolution—the explanation of which was also a prime motivator in von Neumann’s formulation of a universal constructor. Current examples of physical universality rely on reversible dynamical laws, whereas it is well-known that living processes are dissipative. Here we show that physical universality and open-ended dynamics should both be possible in irreversible dynamical systems if one entertains the possibility of state-dependent laws. We demonstrate with simple toy models how the accessibility of state space can yield open-ended trajectories, defined as trajectories that do not repeat within the expected Poincaré recurrence time and are not reproducible by an isolated system. We discuss implications for physical universality, or an approximation to it, as a foundational framework for developing a physics for life.

]]>Entropy doi: 10.3390/e19090430

Authors: Tudorel Andrei Bogdan Oancea Peter Richmond Gurjeet Dhesi Claudiu Herteliu

This paper identifies the salient factors that characterize the inequality of the income distribution for Romania. Data analysis is rigorously carried out using sophisticated techniques borrowed from classical statistics (the Theil index). A decomposition of the inequalities measured by the Theil index is also performed. This study relies on an exhaustive dataset (11.1 million records for 2014) for the total personal gross income of Romanian citizens.
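
The Theil T index and its exact decomposition into within-group and between-group terms can be sketched in a few lines. This is a generic illustration with made-up incomes and groups, not the paper's Romanian microdata:

```python
import math

def theil(incomes):
    """Theil T index: (1/n) * sum of (x_i/mu) * ln(x_i/mu) over all incomes."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((x / mu) * math.log(x / mu) for x in incomes) / n

def theil_decomposition(groups):
    """Exact decomposition of Theil T into within- and between-group components.
    `groups` is a list of income lists (e.g., one list per region)."""
    all_incomes = [x for g in groups for x in g]
    total_income = sum(all_incomes)
    n = len(all_incomes)
    within = between = 0.0
    for g in groups:
        share = sum(g) / total_income   # income share of the group
        within += share * theil(g)
        between += share * math.log((sum(g) / total_income) / (len(g) / n))
    return within, between

groups = [[1.0, 2.0], [3.0, 4.0]]       # two hypothetical groups
w, b = theil_decomposition(groups)
print(w, b, w + b, theil([1.0, 2.0, 3.0, 4.0]))   # w + b equals the overall index
```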

]]>Entropy doi: 10.3390/e19090460

Authors: Luis Estrada Abel Torres Leonardo Sarlabous Raimon Jané

Fixed sample entropy (fSampEn) is a robust technique that allows the evaluation of inspiratory effort in diaphragm electromyography (EMGdi) signals, and has potential utility in sleep studies. To appropriately estimate respiratory effort, fSampEn requires the adjustment of several parameters. The aims of the present study were to evaluate the influence of the embedding dimension m, the tolerance value r, the size of the moving window, and the sampling frequency, and to establish recommendations for estimating the respiratory activity when using the fSampEn on surface EMGdi recorded for different inspiratory efforts. Values of m equal to 1 and r ranging from 0.1 to 0.64, and m equal to 2 and r ranging from 0.13 to 0.45, were found to be suitable for evaluating respiratory activity. fSampEn was less affected by window size than classical amplitude parameters. Finally, variations in sampling frequency could influence fSampEn results. In conclusion, the findings suggest the potential utility of fSampEn for estimating muscle respiratory effort in further sleep studies.
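
As background, fSampEn applies sample entropy with a tolerance r that is held fixed in absolute terms across analysis windows, rather than rescaled to each window's standard deviation. A naive O(n²) sketch of the underlying sample entropy, with illustrative synthetic signals in place of real EMGdi recordings:

```python
import math
import random

def sampen(signal, m=1, r=0.2):
    """Sample entropy with a fixed (absolute) tolerance r, as in fSampEn:
    -ln(A/B), where B and A count template matches of length m and m+1
    under the Chebyshev distance, self-matches excluded."""
    n = len(signal)

    def count_matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(signal[i + k] - signal[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b)

random.seed(1)
regular = [float(i % 2) for i in range(100)]        # repetitive, low-complexity segment
irregular = [random.random() for _ in range(100)]   # high-variability segment
print(sampen(regular), sampen(irregular))           # irregular activity scores higher
```

With r fixed, the same entropy scale applies to every moving window, which is what lets fSampEn track inspiratory effort across a whole recording.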

]]>Entropy doi: 10.3390/e19090456

Authors: Keyan Ghazi-Zahedi Carlotta Langer Nihat Ay

There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden of the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that it is more reliable in cases in which there is no synergistic information, compared to previous results. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems.

]]>Entropy doi: 10.3390/e19090457

Authors: Constantino Tsallis

The Boltzmann–Gibbs (BG) entropy and its associated statistical mechanics were generalized, three decades ago, on the basis of the nonadditive entropy S_q (q ∈ R), which recovers the BG entropy in the q → 1 limit. The optimization of S_q under appropriate simple constraints straightforwardly yields the so-called q-exponential and q-Gaussian distributions, respectively generalizing the exponential and Gaussian ones, recovered for q = 1. These generalized functions ubiquitously emerge in complex systems, especially as economic and financial stylized features. These include price returns and volumes distributions, inter-occurrence times, characterization of wealth distributions and associated inequalities, among others. Here, we briefly review the basic concepts of this q-statistical generalization and focus on its rapidly growing applications in economics and finance.
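
The q-exponential in question is exp_q(x) = [1 + (1−q)x]^(1/(1−q)) (set to zero when the bracket is non-positive), and the q-Gaussian is its composition with −βx². A minimal sketch showing the fat tail for q > 1, with arbitrary illustrative values:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q) x]_+^(1/(1-q)); ordinary exp for q = 1."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """Unnormalised q-Gaussian: the q-exponential evaluated at -beta * x^2."""
    return q_exp(-beta * x * x, q)

# q > 1 produces the fat (power-law) tails seen in price-return distributions
print(q_exp(-10.0, 1.0), q_exp(-10.0, 1.5))   # exponential vs power-law decay
```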

]]>Entropy doi: 10.3390/e19090459

Authors: Francisco De Sousa Lima

Through Monte Carlo simulations, we studied the critical properties of kinetic models of continuous opinion dynamics on (3, 4, 6, 4) and (3^4, 6) Archimedean lattices. We obtain p_c and the critical exponent ratios from extensive Monte Carlo studies and finite-size scaling. The calculated critical points and Binder cumulants are p_c = 0.085(6) and O_4* = 0.605(9) for the (3, 4, 6, 4) lattice, and p_c = 0.146(5) and O_4* = 0.606(3) for the (3^4, 6) lattice, while the exponent ratios β/ν, γ/ν and 1/ν are, respectively, 0.126(1), 1.50(7), and 0.90(5) for (3, 4, 6, 4), and 0.125(3), 1.54(6), and 0.99(3) for (3^4, 6). Our new results agree with the majority-vote model on previously studied regular lattices and disagree with the Ising model on the square lattice.
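
For reference, the quantity quoted as O_4* is the fourth-order Binder cumulant U_4 = 1 − ⟨m⁴⟩/(3⟨m²⟩²), whose size-independent crossing locates the critical point. A minimal sketch of its estimation from order-parameter samples (a deterministic toy sample, not simulation data):

```python
def binder_cumulant(magnetizations):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2),
    computed from a list of order-parameter (magnetization) samples."""
    n = len(magnetizations)
    m2 = sum(m * m for m in magnetizations) / n
    m4 = sum(m ** 4 for m in magnetizations) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Deeply ordered phase: |m| concentrated at 1 gives the known value U4 = 2/3
ordered = [1.0, -1.0] * 500
print(binder_cumulant(ordered))   # 2/3; a Gaussian (disordered) m would give ~0
```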

]]>Entropy doi: 10.3390/e19090458

Authors: Fabrizio Tamburini Mariafelicia Laurentis Ignazio Licata Bo Thidé

Background: The Hawking–Perry–Strominger (HPS) work states a new, controversial idea about the black hole (BH) information paradox, in which BHs maximally entropize and encode information in their event horizon area, with no “hair” other than angular momentum, mass, and electric charge thought to reveal information outside, in a unique quantum gravity (QG) vacuum state. New conservation laws of gravitation and electromagnetism appear to generate different QG vacua, preserving more information in soft photon/graviton hair implants. We find that BH photon hair implants can encode orbital angular momentum (OAM) and vorticity of the electromagnetic (EM) field. Methods: Numerical simulations are used to plot an EM field with OAM emitted by a set of dipolar currents, together with the soft photon field they induce. The analytical results confirm that the soft photon hair implant carries OAM and vorticity. Results: A set of charges and currents generating real EM fields with precise values of OAM induces a “curly”, twisted soft-hair implant on the BH, with vorticity and OAM increased by one unit with respect to the initial real field. Conclusions: Soft photon implants can be spatially shaped ad hoc, encoding structured and densely organized information on the event horizon.

]]>Entropy doi: 10.3390/e19090455

Authors: Martin Tamm

In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise and in particular strong versions (which are not compatible with time symmetric microscopic laws) and weak versions (which can be compatible with time symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not.

]]>Entropy doi: 10.3390/e19090443

Authors: Yikun Wei Zhengdao Wang Yuehong Qian

Entropy generation in two-dimensional Rayleigh-Bénard convection at different Prandtl numbers (Pr) is investigated in the present paper by using the lattice Boltzmann method. The major concern of the present paper is to explore the effects of Pr on the detailed local distributions of entropy generation due to frictional and heat transfer irreversibility, and on the overall entropy generation in the whole flow field. The results of this work indicate that the significant viscous entropy generation rates (Su) gradually expand to bulk contributions of the cavity with increasing Pr; that the thermal entropy generation rates (Sθ) and total entropy generation rates (S) mainly concentrate where the temperature gradient is steepest; that the entropy generation in the flow is dominated by heat transfer irreversibility; and that, for the same Rayleigh number, the amplitudes of Su, Sθ and S decrease with increasing Pr. It is found that the amplitudes of the horizontally averaged viscous, thermal and total entropy generation rates decrease with increasing Pr. The probability density functions of Su, Sθ and S also develop a much thinner tail with increasing Pr, while the tails for large entropy generation values still fit a log-normal curve well. The distribution and the departure from log-normality become more robust with decreasing Pr.
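
The local thermal entropy generation rate underlying such analyses is S''' = k(∇T)²/T². A one-dimensional conduction sketch with illustrative values (not the paper's Rayleigh-Bénard setup), where the numerically integrated total can be checked against the exact result kΔT²/(L·T₁T₂) for a linear profile:

```python
def local_entropy_generation(k, dT_dx, T):
    """Local thermal entropy generation rate for 1-D conduction:
    S''' = k * (dT/dx)^2 / T^2  [W/(m^3 K)]."""
    return k * dT_dx**2 / T**2

# Linear temperature profile between two plates: midpoint-rule total vs exact
k, L = 0.6, 0.01            # W/(m K), m  (illustrative values)
T1, T2 = 300.0, 330.0       # K
N = 10000
dx = L / N
grad = (T2 - T1) / L        # constant gradient for a linear profile
total = sum(local_entropy_generation(k, grad, T1 + grad * (i + 0.5) * dx) * dx
            for i in range(N))
exact = k * (T2 - T1) ** 2 / (L * T1 * T2)
print(total, exact)         # the two agree to numerical precision
```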

]]>Entropy doi: 10.3390/e19090454

Authors: Jung Lee Jaekyum Kim Byeoungdo Kim Dongweon Yoon Jun Choi

In this paper, we propose a deep neural network (DNN)-based automatic modulation classification (AMC) for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN) channels, classification accuracy degrades for fading channels, where the amplitude and phase of the channel gain change in time. The key contributions of this paper are in two phases. First, we analyze the effectiveness of a variety of statistical features for the AMC task in fading channels. We reveal that the features that are shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on the DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. The fully connected feedforward network with four hidden layers is trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers significant performance gain over the existing AMC methods in fading channels.

]]>Entropy doi: 10.3390/e19090453

Authors: Markus Lips

This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing the indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as reference base. The mean differences of the enterprise’s allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation methods has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition to making the merits of the maximum entropy principle accessible for the allocation of indirect costs.

]]>Entropy doi: 10.3390/e19090452

Authors: Serkan Akogul Murat Erisoglu

Many methods have been proposed in the literature to determine the number of clusters in cluster analysis, which has a broad range of applications in sciences such as physics, chemistry, biology, engineering, and economics. The aim of this paper is to determine the number of clusters of a dataset in model-based clustering by using an Analytic Hierarchy Process (AHP). In this study, the AHP model has been created using the information criteria Akaike’s Information Criterion (AIC), Approximate Weight of Evidence (AWE), Bayesian Information Criterion (BIC), Classification Likelihood Criterion (CLC), and Kullback Information Criterion (KIC). The performance of the proposed approach has been tested on common real and synthetic datasets. The proposed approach, based on combining these information criteria, has produced accurate results, more accurate than those obtained from the individual information criteria alone.
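
Two of the criteria combined by the AHP model are easy to state: AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L, each minimized over candidate models. The sketch below uses hypothetical log-likelihood values (made up for illustration, not the paper's fits) for 1-D Gaussian mixtures with g components and 3g − 1 free parameters:

```python
import math

def aic(num_params, log_likelihood):
    """Akaike's Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2.0 * num_params - 2.0 * log_likelihood

def bic(num_params, n_obs, log_likelihood):
    """Bayesian Information Criterion: k ln n - 2 ln L (lower is better)."""
    return num_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of 1-D Gaussian mixtures with g = 1..4 components;
# the log-likelihoods below are invented for illustration.
n = 500
log_liks = {1: -1400.0, 2: -1210.0, 3: -1195.0, 4: -1190.0}
best_aic = min(log_liks, key=lambda g: aic(3 * g - 1, log_liks[g]))
best_bic = min(log_liks, key=lambda g: bic(3 * g - 1, n, log_liks[g]))
print(best_aic, best_bic)   # the criteria can disagree, motivating the AHP step
```

Here BIC, with its stronger ln n penalty, selects fewer components than AIC; reconciling such disagreements among criteria is precisely what the AHP-based approach addresses.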

]]>Entropy doi: 10.3390/e19090451

Authors: Giuseppe Pica Eugenio Piasini Daniel Chicharro Stefano Panzeri

In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to describe the relationships among all PID lattices. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system’s entropy, hence it characterizes completely all the interdependencies in the system.

]]>Entropy doi: 10.3390/e19090450

Authors: Joo Ho Choi Un Gu Kang Byung Mun Lee

There has been growing interest in sleep management recently, and sleep care services using mobile or wearable devices are under development. However, devices with a single sensor have limitations in analyzing various sleep states. If Internet of Things (IoT) technology, which collects information from multiple sensors and analyzes it in an integrated manner, can be used, then various sleep states can be measured more accurately. Therefore, in this paper, we propose a Smart Model for Sleep Care to provide a service that measures and analyzes the sleep state using various sensors. In this model, we designed and implemented a Sleep Information Gathering Protocol to transmit the information measured between physical sensors and sleep sensors. Experiments were conducted to compare the throughput and power consumption of this new protocol with those of the protocols used in the existing service: we achieved about twice the throughput and a 20% reduction in power consumption, which confirms the effectiveness of the proposed protocol. We judge that this protocol is meaningful as it can be applied to a Smart Model for Sleep Care that incorporates IoT technology and allows expanded sleep care if used together with services for treating sleep disorders.

]]>Entropy doi: 10.3390/e19090449

Authors: Shingo Kukita Yasusada Nambu

We consider entanglement harvesting in de Sitter space using a model of multiple qubit detectors. We obtain the formula of the entanglement negativity for this system. Applying the obtained formula, we find that it is possible to access the entanglement on super-horizon scales if a sufficiently large number of detectors is prepared. This result indicates that the effect of multipartite entanglement is crucial for the detection of large-scale entanglement in de Sitter space.

]]>