Entropy doi: 10.3390/e20020131

Authors: Lucas Lacasa Bartolome Luque Ignacio Gómez Octavio Miramontes

We show how the cross-disciplinary transfer of techniques from dynamical systems theory to number theory can be a fruitful avenue for research. We illustrate this idea by exploring, from a nonlinear and symbolic dynamics viewpoint, certain patterns emerging in residue sequences generated from the prime number sequence. We show that the sequence formed by the residues of the primes modulo k is maximally chaotic and, while lacking forbidden patterns, unexpectedly displays a non-trivial spectrum of Rényi entropies, which suggests that every block of size m > 1, while admissible, occurs with different probability. This non-uniform distribution of blocks for m > 1 contrasts with Dirichlet’s theorem, which guarantees equiprobability for m = 1. We then explore, in a similar fashion, the sequence of prime gap residues. We numerically find that this sequence is again chaotic (positive Kolmogorov–Sinai entropy); however, chaos is weaker, as forbidden patterns emerge for every block of size m > 1. We relate the onset of these forbidden patterns to the divisibility properties of integers, and estimate the densities of gap block residues via the Hardy–Littlewood k-tuple conjecture. We use this estimation to argue that the admissible blocks are non-uniformly distributed, which supports the fact that the spectrum of Rényi entropies is again non-trivial in this case. We complete our analysis by applying the chaos game to these symbolic sequences, and comparing the Iterated Function System (IFS) attractors found for the experimental sequences with appropriate null models.
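The non-uniformity of residue blocks is easy to observe numerically. The sketch below (an illustration, not the authors' code) sieves the primes up to 200,000, reduces them modulo k = 4, and compares the single-residue counts (nearly equiprobable, per Dirichlet) with the counts of blocks of size m = 2, whose Rényi entropy falls below the uniform value of 2 bits:

```python
import math
from collections import Counter

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def renyi_entropy(counts, alpha):
    """Renyi entropy (bits) of an empirical distribution; alpha != 1."""
    total = sum(counts)
    return math.log2(sum((c / total) ** alpha for c in counts)) / (1 - alpha)

k, m = 4, 2
residues = [p % k for p in primes_up_to(200_000) if p > k]

# m = 1: the two residue classes 1 and 3 are (nearly) equiprobable
singles = Counter(residues)

# m = 2: all four blocks are admissible, but occur with different frequencies
blocks = Counter(tuple(residues[i:i + m]) for i in range(len(residues) - m + 1))

print(singles)   # roughly equal counts
print(blocks)    # e.g. the block (1, 3) is far more common than (1, 1)
print(renyi_entropy(list(blocks.values()), 2.0))  # below 2 bits: non-uniform
```

The repulsion of repeated residues in consecutive primes is what drives the block counts apart even though each single residue class is equidistributed.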

Entropy doi: 10.3390/e20020130

Authors: Yue Jing Zeyu Li Liming Liu Shengzi Lu

The paper deals with the matching of solar refrigeration systems, namely the solar/natural gas-driven absorption chiller (SNGDAC), the solar vapor compression–absorption integrated refrigeration system with parallel configuration (SVCAIRSPC), and the solar absorption-subcooled compression hybrid cooling system (SASCHCS), to building cooling loads on the basis of exergoeconomics. Three types of building cooling are considered: type 1 is the single-story building, type 2 includes two-story and three-story buildings, and type 3 covers multi-story buildings. In addition, two Chinese cities, Guangzhou and Turpan, are taken into account. The product cost flow rate is employed as the primary decision variable. The results show that SNGDAC is a suitable solution for type 1 buildings in Turpan, owing to its negligible natural gas consumption and lowest product cost flow rate. SVCAIRSPC is more applicable to type 2 buildings in Turpan because of the higher actual cooling capacity of its absorption subsystem and its lower fuel and product cost flow rates. Additionally, SASCHCS shows the most extensive cost-effectiveness: its exergy destruction and product cost flow rate are both the lowest when used in all types of buildings in Guangzhou or in type 3 buildings in Turpan. This paper is helpful for promoting the application of solar cooling.

Entropy doi: 10.3390/e20020129

Authors: Dagmar Markechová Batool Mosapour Abolfazl Ebrahimzadeh

In this paper we propose, using the logical entropy function, a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. Fundamental characteristics of these quantities are established, and the results regarding the logical entropy are subsequently used to define the logical mutual information of experiments in the studied case. In addition, we define the logical cross entropy and logical divergence for the examined situation and prove basic properties of the suggested quantities. To illustrate the results, we provide several numerical examples.

Entropy doi: 10.3390/e20020128

Authors: Ewa Korczak-Kubiak Anna Loranty Ryszard Pawlak

In the paper, we consider local aspects of the entropy of nonautonomous dynamical systems. For this purpose, we introduce the notion of an (asymptotical) focal entropy point. The notion of entropy arose from practical needs concerning thermodynamics and the problem of information flow, and it is connected with the complexity of a system. The definition adopted in the paper specifies the notions that express the complexity of a system around certain points (the complexity of the system is the same as its complexity around these points); moreover, the complexity of a system around such points does not depend on the behavior of the system in other parts of its domain. Any periodic system “acting” in the closed unit interval has an asymptotical focal entropy point, which justifies the wide interest in these issues. In the paper, we examine the problems of the distortions of a system and the approximation of an autonomous system by a nonautonomous one, in the context of having an (asymptotical) focal entropy point. It is shown that even a slight modification of a system may lead to the arising of the respective focal entropy points.

Entropy doi: 10.3390/e20020127

Authors: Fuyuan Liao Gladys L. Y. Cheing Weiyan Ren Sanjiv Jain Yih-Kuen Jan

Diabetic foot ulcer (DFU) is a common complication of diabetes mellitus. Tissue ischemia caused by an impaired vasodilatory response to plantar pressure is thought to be a major factor in the development of DFUs, and it has been assessed using various measures of skin blood flow (SBF) in the time or frequency domain. These measures, however, are incapable of characterizing the nonlinear dynamics of SBF, which are an indicator of pathologic alterations of microcirculation in the diabetic foot. This study recruited 18 type 2 diabetics with peripheral neuropathy and eight healthy controls. SBF at the first metatarsal head in response to locally applied pressure and heating was measured using laser Doppler flowmetry. A multiscale entropy algorithm was utilized to quantify the degree of regularity of the SBF responses. The results showed that during reactive hyperemia and the thermally induced biphasic response, the degree of regularity of SBF in diabetics underwent only small changes compared to baseline and significantly differed from that in controls at multiple scales (p < 0.05). On the other hand, the transition of the degree of regularity of SBF in diabetics distinctively differed from that in controls (p < 0.05). These findings indicate that multiscale entropy can provide a more comprehensive assessment of impaired microvascular reactivity in the diabetic foot than entropy measures based on only a single scale, which strengthens the use of plantar SBF dynamics to assess the risk for DFU.
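The abstract does not specify the multiscale entropy variant; a common formulation (Costa-style) coarse-grains the signal at each scale and computes the sample entropy of each coarse-grained series. A minimal sketch under that assumption, with the template length m and tolerance r as free parameters:

```python
import math
import random

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` (coarse-graining)."""
    return [sum(signal[i * scale:(i + 1) * scale]) / scale
            for i in range(len(signal) // scale)]

def sample_entropy(signal, m=2, r=0.15):
    """Sample entropy -ln(A/B): B = pairs of m-length templates matching
    within tolerance r, A = the same for templates of length m + 1."""
    def matches(mm):
        n, count = len(signal), 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(signal[i + t] - signal[j + t]) for t in range(mm)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# Entropy of unit-variance white noise across scales; r is kept fixed at
# 0.15 x the standard deviation of the original series.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]
mse_curve = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 3)]
print(mse_curve)
```

In the study's setting, the MSE curve would be computed per SBF response window and compared between groups at each scale.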

Entropy doi: 10.3390/e20020126

Authors: Luca Bergamasco Matteo Alberghini Matteo Fasano Annalisa Cardellini Eliodoro Chiavazzo Pietro Asinari

In this work, we derive different systems of mesoscopic moment equations for the heat-conduction problem and analyze the basic features that they must satisfy. We discuss two- and three-equation systems, showing that the resulting mesoscopic equation from two-equation systems is of the telegraphist’s type and complies with the Cattaneo equation in the Extended Irreversible Thermodynamics framework. The solution of the proposed systems is analyzed, and it is shown that it accounts for two modes: a slow diffusive mode and a fast advective mode. This additional advective mode makes the systems suitable for heat transfer phenomena on fast time scales, such as high-frequency pulses and heat transfer in small-scale devices. We finally show that, if proper initial conditions are provided, the advective mode disappears, and the solution of the system tends asymptotically to the transient solution of the classical parabolic heat-conduction equation.

Entropy doi: 10.3390/e20020125

Authors: Andrea Lamorgese Roberto Mauri

We simulate the diffusion-driven dissolution or growth of a single-component liquid drop embedded in a continuous phase of a binary liquid. Our theoretical approach follows a diffuse-interface model of partially miscible ternary liquid mixtures that incorporates the non-random, two-liquid (NRTL) equation as a submodel for the enthalpic (so-called excess) component of the Gibbs energy of mixing, while its nonlocal part is represented by a square-gradient (Cahn-Hilliard-type) assumption. The governing equations of this phase-field ternary mixture model are simulated in 2D. We show that, for a single-component drop embedded in a continuous phase of a binary liquid (highly miscible with one component of the continuous phase but essentially immiscible with the other), the size of the drop can either shrink to zero or reach a stationary value, depending on whether the global composition of the mixture lies in the one-phase region or in the unstable range of the phase diagram.

Entropy doi: 10.3390/e20020124

Authors: Li Li Zhen Wang Junwei Lu Yuxia Li

In this paper, the synchronization problem of fractional-order complex-valued neural networks with discrete and distributed delays is investigated. Based on the adaptive control and Lyapunov function theory, some sufficient conditions are derived to ensure the states of two fractional-order complex-valued neural networks with discrete and distributed delays achieve complete synchronization rapidly. Finally, numerical simulations are given to illustrate the effectiveness and feasibility of the theoretical results.

Entropy doi: 10.3390/e20020123

Authors: Paweł Bialas Jerzy Łuczka

We consider a paradigmatic model of a quantum Brownian particle coupled to a thermostat consisting of harmonic oscillators. In the framework of a generalized Langevin equation, the memory (damping) kernel is assumed to take the form of exponentially-decaying oscillations. We discuss a quantum counterpart of the energy equipartition theorem for a free Brownian particle in a thermal equilibrium state. We conclude that the average kinetic energy of the Brownian particle equals the thermally-averaged kinetic energy per degree of freedom of the oscillators of the environment, additionally averaged over all possible oscillator frequencies, distributed according to a probability density in which the details of the particle-environment interaction enter via the parameters of the damping kernel.

Entropy doi: 10.3390/e20020122

Authors: Fabio Leoni Yair Shokef

We study two-dimensional triangular-network models, which have degenerate ground states composed of straight or randomly-zigzagging stripes and thus sub-extensive residual entropy. We show that attraction is responsible for the inversion of the stable phase by changing the entropy of fluctuations around the ground-state configurations. By using a real-space shell-expansion method, we compute the exact expression of the entropy for harmonic interactions, while for repulsive harmonic interactions we obtain the entropy arising from a limited subset of the system by numerical integration. We compare these results with a three-dimensional triangular-network model, which shows the same attraction-mediated selection mechanism of the stable phase, and conclude that this effect is general with respect to the dimensionality of the system.

Entropy doi: 10.3390/e20020121

Authors: Simon Rabanser Lukas Neumann Markus Haltmeier

The development of accurate and efficient image reconstruction algorithms is a central aspect of quantitative photoacoustic tomography (QPAT). In this paper, we address these issues for multi-source QPAT using the radiative transfer equation (RTE) as an accurate model for light transport. The tissue parameters are jointly reconstructed from the acoustic data measured for each of the applied sources. We develop stochastic proximal gradient methods for multi-source QPAT, which are more efficient than standard proximal gradient methods, in which a single iterative update has complexity proportional to the number of applied sources. Additionally, we introduce a completely new formulation of QPAT as a multilinear (MULL) inverse problem, which avoids explicitly solving the RTE. The MULL formulation of QPAT is again addressed with stochastic proximal gradient methods. Numerical results for both approaches are presented. Besides the introduction of stochastic proximal gradient algorithms to QPAT, we consider the new MULL formulation of QPAT to be the main contribution of this paper.

Entropy doi: 10.3390/e20020120

Authors: Jui Fang Ning-Fang Chang Po-Hsiang Tsui

Ultrasound B-mode imaging based on log-compressed envelope data has been widely applied to examine hepatic steatosis. Modeling the raw backscattered signals returned from the liver parenchyma with statistical distributions can provide additional information to assist in hepatic steatosis diagnosis. Since raw data are not always available in modern ultrasound systems, information entropy, a widely known non-model-based approach, may allow ultrasound backscattering analysis using B-scans for assessing hepatic steatosis. In this study, we explored the feasibility of ultrasound entropy imaging constructed from log-compressed backscattered envelopes for assessing hepatic steatosis. Different stages of hepatic steatosis were induced in male Wistar rats fed a methionine- and choline-deficient diet for 0 (normal control), 1, 1.5, and 2 weeks (n = 48; 12 rats in each group). In vivo scanning of rat livers was performed using a commercial ultrasound machine (Model 3000, Terason, Burlington, MA, USA) equipped with a 7-MHz linear array transducer (Model 10L5, Terason) for ultrasound B-mode and entropy imaging based on uncompressed (HE image) and log-compressed envelopes (HB image), which were subsequently compared with histopathological examinations. Receiver operating characteristic (ROC) curve analysis and areas under the ROC curves (AUC) were used to assess diagnostic performance. The results showed that ultrasound entropy imaging can be used to assess hepatic steatosis. The AUCs obtained from HE imaging for diagnosing different steatosis stages were 0.93 (≥mild), 0.89 (≥moderate), and 0.89 (≥severe). HB imaging produced AUCs ranging from 0.74 (≥mild) to 0.84 (≥severe), provided that a sufficiently large number of bins was used to reconstruct the signal histogram for estimating entropy. The results indicate that entropy imaging enables ultrasound parametric imaging based on log-compressed envelope signals, with great potential for diagnosing hepatic steatosis.
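The per-window computation behind entropy imaging is standard Shannon entropy over a histogram of envelope amplitudes; HE and HB imaging differ only in whether the uncompressed or log-compressed envelope is fed in. An illustrative sketch (window size and bin count are free parameters here, not the study's values):

```python
import math
import random

def window_entropy(window, bins=64):
    """Shannon entropy (bits) of the amplitude histogram of one window."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0   # guard against a constant window
    counts = [0] * bins
    for x in window:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A structureless (uniform) amplitude distribution is maximally disordered;
# a peaked (Gaussian) one yields lower entropy.
random.seed(1)
flat = [random.random() for _ in range(4096)]
peaked = [random.gauss(0.0, 1.0) for _ in range(4096)]
print(window_entropy(flat), window_entropy(peaked))
```

A parametric entropy image is then formed by sliding such a window across the B-scan and mapping each window to its entropy value.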

Entropy doi: 10.3390/e20020119

Authors: Gabriel Spanghero Cyro Albuquerque Tiago Lazzaretti Fernandes Arnaldo Hernandez Carlos Keutenedjian Mady

The first and second laws of thermodynamics were applied to the human body in order to evaluate the quality of energy conversion during muscle activity. Such an implementation represents an important issue in the exergy analysis of the body, because of the difficulty, noted in the literature, of evaluating the power performed in some activities. Hence, to have the performed work as an input to the exergy model, two types of exercise were evaluated: weight lifting and aerobic exercise on a stationary bicycle. To this aim, we studied the aerobic and anaerobic reactions in the muscle cells, aiming at predicting metabolic efficiency and muscle efficiency during exercise. Physiological data such as oxygen consumption, carbon dioxide production, skin and internal temperatures, and performed power were measured. Results indicated that the exergy efficiency was around 4% for weight lifting, whereas it could reach values as high as 30% for aerobic exercise. It was also shown that the stationary bicycle is a more adequate test for first correlations between exergy and performance indices.

Entropy doi: 10.3390/e20020118

Authors: Carlos Badillo-Ruiz Miguel Olivares-Robles Pablo Ruiz-Ortega

In this work, the influences of the Thomson effect and the geometry of the p-type segmented leg on the performance of a segmented thermoelectric microcooler (STEMC) were examined. The effects of geometry and the material configuration of the p-type segmented leg on the cooling power (Qc) and coefficient of performance (COP) were investigated. The influence of the cross-sectional area ratio of the two joined segments on the device performance was also evaluated. We analyzed a one-dimensional p-type segmented leg model composed of two different semiconductor materials, Bi2Te3 and (Bi0.5Sb0.5)2Te3. Considering the three most common p-type leg geometries, we studied both single-material systems (using the same material for both segments) and segmented systems (using different materials for each segment). The COP, Qc, and temperature profile were evaluated for each of the modeled geometric configurations under a fixed temperature gradient of ΔT = 30 K. The performance of the STEMC was evaluated using two models, namely the constant-properties material (CPM) and temperature-dependent properties material (TDPM) models, the latter accounting for the temperature dependence of the thermal conductivity κ(T), electrical conductivity σ(T), and Seebeck coefficient α(T). We considered the influence of the Thomson effect on the COP and Qc using the TDPM model. The results revealed the optimal material configurations for use in each segment of the p-type leg. According to the proposed geometric models, the optimal leg geometry and electrical current for maximum performance were determined. After consideration of the Thomson effect, the STEMC system was found to deliver a maximum cooling power 5.10% higher than that of the single-material system. The results showed that the inverse system (where the material with the higher Seebeck coefficient is used for the first segment) delivered a higher performance than the direct system, with improvements in the COP and Qc of 6.67% and 29.25%, respectively. Finally, analysis of the relationship between the areas of the STEMC segments demonstrated that increasing the cross-sectional area of the second segment led to improvements in the COP and Qc of 16.67% and 8.03%, respectively.

Entropy doi: 10.3390/e20020117

Authors: Lu Chen Vijay Singh Kangdi Huang

Frequency analysis of hydrometeorological extremes plays an important role in the design of hydraulic structures. A multitude of distributions have been employed for hydrological frequency analysis, and more than one distribution is often found to be adequate. Current methods for selecting the best-fitted distribution are not very objective. Using different kinds of constraints, entropy theory was employed in this study to derive five generalized distributions for frequency analysis: the generalized gamma (GG) distribution, the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B), and Halphen type inverse B (Hal-IB) distributions. The Bayesian technique was employed to objectively select the optimal distribution. The selection method was tested using simulation as well as extreme daily and hourly rainfall data from the Mississippi. The results showed that the Bayesian technique was able to select the best-fitted distribution, thus providing a new way of selecting models for frequency analysis of hydrometeorological extremes.

Entropy doi: 10.3390/e20020116

Authors: Chen Hu Haoshen Lin Zhenhua Li Bing He Gang Liu

In this paper, a distributed Bayesian filter design was studied for nonlinear dynamics and measurement mappings based on the Kullback–Leibler divergence. In a distributed structure, nonlinear filtering becomes a challenging problem, since each sensor cannot access the global measurement likelihood function over the whole network, and some sensors have weak observability of the state. To solve the problem in a sensor network, the distributed Bayesian filtering problem was converted into an optimization problem via a maximum a posteriori method. The global cost function over the whole network was decomposed into a sum of local cost functions, each of which can be solved by an individual sensor. With the help of the Kullback–Leibler divergence, the global estimate was approximated in each sensor by communicating with its neighbors. Based on the proposed distributed Bayesian filter structure, a distributed cubature Kalman filter (DCKF) was proposed. Finally, a cooperative space object tracking problem was studied for illustration. The simulation results demonstrated that the proposed algorithm can handle varying communication topologies and the weak observability of some sensors.
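The abstract does not spell out the fusion rule. For scalar Gaussian posteriors, a standard choice in KL-divergence-based distributed filtering is the weighted Kullback–Leibler average, which has a closed form: precision-weighted averaging of the neighbors' estimates. A sketch under that assumption:

```python
def kl_average_gaussian(means, variances, weights=None):
    """Weighted KL average of scalar Gaussians: fuse by averaging the
    precisions (1/variance) and the precision-weighted means. Unlike naive
    measurement fusion, this does not double-count shared information."""
    n = len(means)
    w = weights or [1.0 / n] * n
    precision = sum(wi / v for wi, v in zip(w, variances))
    mean = sum(wi * m / v for wi, m, v in zip(w, means, variances)) / precision
    return mean, 1.0 / precision

# Two equally confident sensors: the fused mean sits between them and the
# fused variance stays at the common value.
print(kl_average_gaussian([0.0, 2.0], [1.0, 1.0]))
```

In a network, each sensor would iterate this fusion with its neighbors (consensus), driving all local estimates toward a common global one.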

Entropy doi: 10.3390/e20020115

Authors: Balázs Király György Szabó

The effect of entropy at low noises is investigated in five-strategy logit-rule-driven spatial evolutionary potential games exhibiting two-fold or three-fold degenerate ground states. The non-zero elements of the payoff matrix define two subsystems which are equivalent to an Ising or a three-state Potts model depending on whether the players are constrained to use only the first two or the last three strategies. Due to the equivalence of these models to spin systems, we can use the concepts and methods of statistical physics when studying the phase transitions. We argue that the greater entropy content of the Ising phase plays an important role in its stabilization when the magnitude of the Potts component is equal to or slightly greater than the strength of the Ising component. If the noise is increased in these systems, then the presence of the higher entropy state can cause a kind of social dilemma in which the players’ average income is reduced in the stable Ising phase following a first-order phase transition.

Entropy doi: 10.3390/e20020114

Authors: Xiaowen Zhang Xingzhao Liu

In this paper, the problem of cognitive radar (CR) waveform optimization for target detection and estimation in multiple-extended-target situations is investigated. The problem is analyzed in the presence of signal-dependent interference as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address it, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo on the premise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of the TIR and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, an information-theoretic approach is also considered, and the relationship between the waveforms designed under the two criteria is discussed. Unlike most existing works, which consider only a single target with temporally correlated characteristics, this method designs waveforms for multiple extended targets. Simulation results demonstrate that, compared with the linear frequency modulated (LFM) signal, waveforms designed under the maximum detection probability and maximum mutual information (MI) criteria make radar echoes contain more multiple-target information and thereby improve radar performance.

Entropy doi: 10.3390/e20020113

Authors: Piero Quarati Antonio Scarfone Giorgio Kaniadakis

The negative contribution of entropy (negentropy) of a non-chaotic system, representing its potential for work, is a source of energy that can be transferred to an internal or inserted subsystem. In this case, the system loses order and its entropy increases, while the subsystem gains energy and can perform processes that otherwise would not happen, such as, for instance, the nuclear fusion of inserted deuterons in a liquid metal matrix, among many others. The roles of positive and negative contributions of free energy and entropy are explored, together with their constraints. The energy available to an inserted subsystem during a transition from a non-equilibrium state to the equilibrium chaotic state, when the interaction between particles (elements of the system) is switched off, is evaluated. A few examples are given concerning some non-ideal systems, and a possible application to the nuclear reaction screening problem is mentioned.

Entropy doi: 10.3390/e20020112

Authors: Jiandong Duan Xinyu Qiu Wentao Ma Xuan Tian Di Shang

In recent years, with the deepening of China’s electricity sales-side reform and the gradual opening of the electricity market, forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and how to evaluate the results scientifically are still key research topics. In this paper, we propose a novel prediction scheme based on the least-square support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast electricity consumption (EC). Firstly, the electricity-consumption characteristics of various industries are analyzed to determine the factors that mainly affect changes in electricity demand, such as the gross domestic product (GDP) and temperature. Secondly, given the small-sample nature of the available data, the LSSVM model is employed as the prediction model; to optimize its parameters, we use the local similarity function MCC as the evaluation criterion. Thirdly, we employ K-fold cross-validation and grid-search methods to improve the learning ability. In the experiments, we used EC data from Shaanxi Province, China, to evaluate the proposed prediction scheme, and the results show that it outperforms the method based on the traditional LSSVM model.
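The maximum correntropy criterion used here as the evaluation function is the average Gaussian-kernel similarity between predictions and targets; unlike the squared error, it bounds the influence of any single gross error, which matters with small, noisy samples. An illustrative comparison (the kernel width sigma is a free parameter):

```python
import math

def correntropy(errors, sigma=1.0):
    """Maximum correntropy criterion: average Gaussian-kernel similarity
    between predictions and targets. Higher is better; gross errors are
    bounded by the kernel instead of being squared."""
    return sum(math.exp(-e * e / (2 * sigma * sigma)) for e in errors) / len(errors)

def mse(errors):
    """Mean squared error, for comparison."""
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.05, 0.0]
outlier = [0.1, -0.2, 0.05, 50.0]   # one gross error
# MSE is dominated by the outlier; the correntropy score degrades gracefully.
print(mse(clean), mse(outlier))
print(correntropy(clean), correntropy(outlier))
```

In a grid search over LSSVM hyperparameters, one would pick the candidate maximizing the cross-validated correntropy rather than minimizing the MSE.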

Entropy doi: 10.3390/e20020111

Authors: Yanina Shkel Sergio Verdú

In this work we relax the usual separability assumption made in the rate-distortion literature and propose f-separable distortion measures, which are well suited to modeling non-linear penalties. The main insight behind f-separable distortion measures is to define an n-letter distortion measure as an f-mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f-separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f-separable distortion measures and the subadditive distortion measure previously proposed in the literature.

Entropy doi: 10.3390/e20020110

Authors: Yosra Marnissi Emilie Chouzenoux Amel Benazza-Benyahia Jean-Christophe Pesquet

In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or derived from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies, stemming either from the prior or the likelihood, in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of a signal corrupted by mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler, since the new conditional distribution no longer contains highly heterogeneous correlations.
Thus, the computational cost of each iteration of the Gibbs sampler is significantly reduced while ensuring good mixing properties.

Entropy doi: 10.3390/e20020109

Authors: Peter Egolf Kolumban Hutter

The extended thermodynamics of Tsallis is reviewed in detail and applied to turbulence. It is based on a generalization of the exponential and logarithmic functions with a parameter q. By applying this nonequilibrium thermodynamics, the Boltzmann-Gibbs thermodynamic approach of Kraichnan to 2-d turbulence is generalized. This physical modeling implies fractional calculus methods, obeying anomalous diffusion, described by Lévy statistics with q < 5/3 (subdiffusion), q = 5/3 (normal or Brownian diffusion) and q > 5/3 (superdiffusion). The generalized energy spectrum of Kraichnan, occurring at small wave numbers k, now reveals the more general and precise result k^-q. For q = 5/3 this corresponds to the Kolmogorov-Obukhov energy spectrum, and for q > 5/3 to turbulence with intermittency. The enstrophy spectrum, occurring at large wave numbers k, leads to a k^-3q power law, suggesting that large wave-number eddies are in thermodynamic equilibrium, which is characterized by q = 1, finally resulting in Kraichnan’s correct k^-3 enstrophy spectrum. The theory reveals in a natural manner a generalized temperature of turbulence, which in the non-equilibrium energy transfer domain decreases with wave number and shows an energy equipartition law with a constant generalized temperature in the equilibrium enstrophy transfer domain. The article contains numerous new results; some are stated in the form of eight new (proven) propositions.
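The generalization underlying Tsallis' extended thermodynamics is the pair of q-deformed logarithm and exponential, ln_q(x) = (x^(1-q) - 1)/(1 - q) and exp_q(x) = [1 + (1 - q)x]^(1/(1-q)) (with the usual cutoff at zero), which recover ln and exp as q -> 1. A minimal sketch:

```python
import math

def q_log(x, q):
    """Tsallis q-logarithm; reduces to ln(x) as q -> 1."""
    return math.log(x) if q == 1 else (x ** (1 - q) - 1) / (1 - q)

def q_exp(x, q):
    """Tsallis q-exponential, the inverse of q_log (with the usual cutoff
    at zero when 1 + (1 - q) * x is not positive)."""
    if q == 1:
        return math.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

# q_exp inverts q_log, and both approach exp/ln near q = 1.
print(q_exp(q_log(2.0, 5 / 3), 5 / 3))    # ~2.0 (exact inverse, up to rounding)
print(q_log(2.0, 1.0001), math.log(2.0))  # nearly equal
```

For q > 1 the q-exponential has the power-law tail that, in the spectra above, replaces the exponential decay of Boltzmann-Gibbs statistics.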

Entropy doi: 10.3390/e20020108

Authors: Yang Cao Liyan Xie Yao Xie Huan Xu

Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal: the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically, up to a log-log factor, as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real data examples validate our theory.
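The flavor of a non-anticipating plug-in scheme can be seen in a toy one-dimensional case: detecting a mean shift in a unit-variance Gaussian stream, with the unknown post-change mean in the likelihood ratio replaced by a running estimate built only from past samples. This is an illustration of the idea, not the paper's mirror-descent construction:

```python
def adaptive_cusum(xs, threshold=10.0):
    """CUSUM with a non-anticipating estimator: the log-likelihood ratio of
    N(mu, 1) vs N(0, 1) is mu*x - mu^2/2; the unknown mu is replaced by the
    running mean of the samples seen since the statistic last touched zero."""
    s, mu, n = 0.0, 0.0, 0
    for t, x in enumerate(xs):
        s = max(0.0, s + mu * x - mu * mu / 2.0)  # mu uses only past data
        if s > threshold:
            return t          # declare a change at time t
        if s == 0.0:
            mu, n = x, 1      # restart the estimation window
        else:
            n += 1
            mu += (x - mu) / n
    return None

# No change: the statistic never fires. A shift of 3 at time 50: it fires
# a few samples after the change.
print(adaptive_cusum([0.0] * 100))              # None
print(adaptive_cusum([0.0] * 50 + [3.0] * 50))  # 53
```

Because the estimate of mu is never allowed to peek at the current sample, the running sum remains a valid likelihood-ratio martingale under the null, which is what the average-run-length guarantees rest on.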

Entropy doi: 10.3390/e20020107

Authors: Jiwon Kang

This paper considers the problem of testing for parameter change in random coefficient integer-valued autoregressive models. To overcome the size distortions of the existing estimate-based cumulative sum (CUSUM) test, we suggest an estimating-function-based test and a residual-based CUSUM test. More specifically, we employ the estimating function of the conditional least squares estimator. Under regularity conditions and the null hypothesis, we derive their respective limiting distributions. Simulation results demonstrate the validity of the proposed tests. A real data analysis is performed on the polio incidence data.
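The generic form of a residual-based CUSUM statistic is the maximal standardized cumulative sum of centered residuals; under the no-change null it behaves like the supremum of a Brownian bridge, with an asymptotic 5% critical value of about 1.358. A sketch of that generic form (the model-specific residuals and limit theory are the paper's contribution; here the input is just a residual series):

```python
import math

def residual_cusum(residuals):
    """Residual-based CUSUM statistic max_k |S_k| / (sigma * sqrt(n)),
    with S_k the cumulative sum of mean-centered residuals and sigma
    their empirical standard deviation."""
    n = len(residuals)
    rbar = sum(residuals) / n
    sigma = math.sqrt(sum((r - rbar) ** 2 for r in residuals) / n)
    s, peak = 0.0, 0.0
    for r in residuals:
        s += r - rbar
        peak = max(peak, abs(s))
    return peak / (sigma * math.sqrt(n))

print(residual_cusum([1.0, -1.0] * 50))         # 0.1: well below 1.358, no change
print(residual_cusum([0.0] * 50 + [1.0] * 50))  # 5.0: clear level shift
```

The test rejects the no-change hypothesis when the statistic exceeds the critical value; the argmax of |S_k| then serves as the change-point estimate.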

]]>Entropy doi: 10.3390/e20020106

Authors: Renata Rychtáriková Jan Korbel Petr Macháček Dalibor Štys

We introduce novel information-entropic variables—a Point Divergence Gain ( Ω α ( l → m ) ), a Point Divergence Gain Entropy ( I α ), and a Point Divergence Gain Entropy Density ( P α )—which are derived from the Rényi entropy and describe spatio-temporal changes between two consecutive discrete multidimensional distributions. The behavior of Ω α ( l → m ) is simulated for typical distributions and, together with I α and P α , applied to the analysis and characterization of series of multidimensional datasets of computer-based and real images.
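Since the new variables build on the Rényi entropy, a minimal Python sketch of that base quantity may help fix notation (this is generic code, not the authors' implementation of Ω, I or P):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = log2(sum_i p_i^alpha) / (1 - alpha), in bits;
    the limit alpha -> 1 recovers the Shannon entropy."""
    if alpha == 1.0:
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

# for a uniform distribution every order alpha gives log2(n) bits
uniform = [0.25] * 4
assert all(abs(renyi_entropy(uniform, a) - 2.0) < 1e-12 for a in (0.5, 1.0, 2.0))
```

For non-uniform distributions the entropy decreases with increasing order alpha, which is what makes the spectrum over alpha informative.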

]]>Entropy doi: 10.3390/e20020105

Authors: Nicolas Gisin

In Bohmian mechanics, particles follow continuous trajectories, hence two-time position correlations are well defined. However, Bohmian mechanics predicts the violation of Bell inequalities. Motivated by this fact, we investigate position measurements in Bohmian mechanics by coupling the particles to macroscopic pointers. This explains the violation of Bell inequalities despite the existence of two-time position correlations. We relate this fact to the so-called surrealistic trajectories that, in our model, correspond to slowly moving pointers. Next, we emphasize that Bohmian mechanics, which does not distinguish between microscopic and macroscopic systems, implies that the weirdness of quantum physics also shows up at the macro-scale. Finally, we discuss the fact that Bohmian mechanics is attractive to philosophers but not so much to physicists and argue that the Bohmian community is responsible for the latter.

]]>Entropy doi: 10.3390/e20020104

Authors: Jie Hu Shaobo Li Yong Yao Liya Yu Guanci Yang Jianjun Hu

Many text mining tasks such as text retrieval, text summarization, and text comparison depend on the extraction of representative keywords from the main text. Most existing keyword extraction algorithms are based on a discrete bag-of-words representation of the text. In this paper, we propose a patent keyword extraction algorithm (PKEA) based on the distributed Skip-gram model for patent classification. We also develop a set of quantitative performance measures for keyword extraction evaluation, based on information gain and on cross-validation with Support Vector Machine (SVM) classification, which are valuable when human-annotated keywords are not available. We used a standard benchmark dataset and a homemade patent dataset to evaluate the performance of PKEA. Our patent dataset includes 2500 patents from five distinct technological fields related to autonomous cars (GPS systems, lidar systems, object recognition systems, radar systems, and vehicle control systems). We compared our method with Frequency, Term Frequency-Inverse Document Frequency (TF-IDF), TextRank and Rapid Automatic Keyword Extraction (RAKE). The experimental results show that our proposed algorithm provides a promising way to extract keywords from patent texts for patent classification.

]]>Entropy doi: 10.3390/e20020103

Authors: Jürn Schmelzer Timur Tropin

A critical analysis of possible (including some newly proposed) definitions of the vitreous state and the glass transition is performed and an overview of kinetic criteria of vitrification is presented. On the basis of these results, recent controversial discussions on the possible values of the residual entropy of glasses are reviewed. Our conclusion is that the treatment of vitrification as a process of continuously breaking ergodicity with entropy loss and a residual entropy tending to zero in the limit of zero absolute temperature is in disagreement with the absolute majority of experimental and theoretical investigations of this process and the nature of the vitreous state. This conclusion is illustrated by model computations. In addition to the main conclusion derived from these computations, they are employed as a test for several suggestions concerning the behavior of thermodynamic coefficients in the glass transition range. Further, a brief review is given on possible ways of resolving the Kauzmann paradox and its implications with respect to the validity of the third law of thermodynamics. It is shown that neither in its primary formulations nor in its consequences does the Kauzmann paradox result in contradictions with any basic laws of nature. Such contradictions are excluded by either crystallization (not associated with a pseudospinodal as suggested by Kauzmann) or a conventional (and not an ideal) glass transition. Some further so far widely unexplored directions of research on the interplay between crystallization and glass transition are anticipated, in which entropy may play—beyond the topics widely discussed and reviewed here—a major role.

]]>Entropy doi: 10.3390/e20020102

Authors: Zhuocheng Xiao Binxu Wang Andrew Sornborger Louis Tao

Coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains—SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers.

]]>Entropy doi: 10.3390/e20020101

Authors: Ashraf Al-Quran Nasruddin Hassan

This paper introduces a novel soft computing technique, called the complex neutrosophic soft expert relation (CNSER), to evaluate the degree of interaction between two hybrid models called complex neutrosophic soft expert sets (CNSESs). CNSESs are used to represent two-dimensional data that are imprecise, uncertain, incomplete and indeterminate. Moreover, it has a mechanism to incorporate the parameter set and the opinions of all experts in one model, thus making it highly suitable for use in decision-making problems where the time factor plays a key role in determining the final decision. The complex neutrosophic soft expert set and complex neutrosophic soft expert relation are both defined. Utilizing the properties of CNSER introduced, an empirical study is conducted on the relationship between the variability of the currency exchange rate and Malaysian exports and the time frame (phase) of the interaction between these two variables. This study is supported further by an algorithm to determine the type and the degree of this relationship. A comparison between different existing relations and CNSER to show the superiority of our proposed CNSER is provided. Then, the notion of the inverse, complement and composition of CNSERs along with some related theorems and properties are introduced. Finally, we define the symmetry, transitivity and reflexivity of CNSERs, as well as the equivalence relation and equivalence classes on CNSESs. Some interesting properties are also obtained.

]]>Entropy doi: 10.3390/e20020100

Authors: Elaheh Rabiei Enrique Droguett Mohammad Modarres

A fully adaptive particle filtering algorithm is proposed in this paper which is capable of updating both state process models and measurement models separately and simultaneously. The approach is a significant step toward more realistic online monitoring or damage tracking. The majority of the existing methods for Bayes filtering are based on predefined and fixed state process and measurement models. Simultaneous estimation of both state and model parameters has gained attention in recent literature. Some work has been done on updating the state process model; however, few studies exist regarding updates of the measurement model. In most real-world applications, the correlation between measurements and the hidden state of damage is not defined in advance and, therefore, presuming an offline fixed measurement model is not promising. The proposed approach is based on optimizing relative entropy or Kullback–Leibler divergence through a particle filtering algorithm. The proposed algorithm is successfully applied to a case study of online fatigue damage estimation in composite materials.

]]>Entropy doi: 10.3390/e20020099

Authors: Dinu Coltuc Mihai Datcu Daniela Coltuc

This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions. Feature vectors for simple transforms (circular translations in horizontal, vertical and diagonal directions, and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subjected to some common image processing. While the direct computation of the NCD fails to detect image similarity even in the case of simple median and moving average filtering in 3 × 3 windows, for certain transforms and compressors the proposed approach appears to provide robustness at similarity detection against smoothing, lossy compression, contrast enhancement, noise addition and some robustness against geometrical transforms (scaling, cropping and rotation).
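The base quantity is easy to reproduce; a minimal Python sketch of the NCD with zlib standing in for the compressor (the paper evaluates several compressors and builds feature vectors from NCDs to translated and rotated versions, which this sketch omits):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance NCD(x, y) =
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C = compressed size."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

periodic = b"abcdefgh" * 200
noisy = bytes((i * 37 + 11) % 256 for i in range(1600))
# a sequence is much closer to itself than to an unrelated one
assert ncd(periodic, periodic) < ncd(periodic, noisy)
```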

]]>Entropy doi: 10.3390/e20020098

Authors: Jordi Piñero Ricard Solé

Life evolved on our planet by means of a combination of Darwinian selection and innovations leading to higher levels of complexity. The emergence and selection of replicating entities is a central problem in prebiotic evolution. Theoretical models have shown how populations of different types of replicating entities exclude or coexist with other classes of replicators. Models are typically kinetic, based on standard replicator equations. On the other hand, the presence of thermodynamical constraints for these systems remains an open question. This is largely due to the lack of a general theory of statistical methods for systems far from equilibrium. Nonetheless, a first approach to this problem has been put forward in a series of novel developments falling under the rubric of the extended second law of thermodynamics. The work presented here is twofold: firstly, we review this theoretical framework and provide a brief description of the three fundamental replicator types in prebiotic evolution: parabolic, Malthusian and hyperbolic. Secondly, we employ these techniques to explore how replicators are constrained by thermodynamics. Finally, we discuss where further research should be focused.

]]>Entropy doi: 10.3390/e20020097

Authors: Diego Marcondes Adilson Simonis Junior Barrera

This paper uses a classical approach to feature selection: minimization of a cost function applied on estimated joint distributions. However, in this new formulation, the optimization search space is extended. The original search space is the Boolean lattice of features sets (BLFS), while the extended one is a collection of Boolean lattices of ordered pairs (CBLOP), that is (features, associated value), indexed by the elements of the BLFS. In this approach, we may not only select the features that are most related to a variable Y, but also select the values of the features that most influence the variable or that are most prone to have a specific value of Y. A local formulation of Shannon’s mutual information, which generalizes Shannon’s original definition, is applied to a CBLOP to generate a multiple resolution scale for characterizing variable dependence, the Local Lift Dependence Scale (LLDS). The main contribution of this paper is to define and apply the LLDS to analyse local properties of joint distributions that are neglected by the classical Shannon’s global measure in order to select features. This approach is applied to select features based on the dependence between: i—the performance of students on university entrance exams and on courses of their first semester in the university; ii—a congressional representative’s party and their votes on different matters; iii—the cover type of terrains and several terrain properties.

]]>Entropy doi: 10.3390/e20020096

Authors: José Pallero María Fernández-Muñiz Ana Cernea Óscar Álvarez-Machancoses Luis Pedruelo-González Sylvain Bonvalot Juan Fernández-Martínez

Most inverse problems in industry (and particularly in geophysical exploration) are highly underdetermined: the number of model parameters is too high to achieve accurate data predictions, and the sampling of the data space is scarce, incomplete and always affected by different kinds of noise. Additionally, the physics of the forward problem is a simplification of reality. As a result, the inverse problem solution is not unique; that is, there are different inverse solutions (called equivalent) that are compatible with the prior information and fit the observed data within similar error bounds. In the case of nonlinear inverse problems, these equivalent models are located in disconnected flat curvilinear valleys of the cost-function topography. The uncertainty analysis consists of obtaining a representation of this complex topography via different sampling methodologies. In this paper, we focus on the use of a particle swarm optimization (PSO) algorithm to sample the region of equivalence in nonlinear inverse problems. Although this methodology has a general purpose, we show its application to the uncertainty assessment of the solution of a geophysical problem concerning gravity inversion in sedimentary basins, showing that it is possible to efficiently perform this task in a sampling-while-optimizing mode. In particular, we explain how to use and analyze the geophysical models sampled by exploratory PSO family members to infer different descriptors of nonlinear uncertainty.

]]>Entropy doi: 10.3390/e20020095

Authors: Giovanni Santonastaso Armando Di Nardo Michele Di Natale Carlo Giudicianni Roberto Greco

Robustness of water distribution networks is related to their connectivity and topological structure, which also affect their reliability. Flow entropy, based on Shannon’s informational entropy, has been proposed as a measure of network redundancy and adopted as a proxy of reliability in optimal network design procedures. In this paper, the scaling properties of flow entropy of water distribution networks with their size and other topological metrics are studied. To this aim, flow entropy, maximum flow entropy, link density and average path length have been evaluated for a set of 22 networks, both real and synthetic, with different size and topology. The obtained results led to the identification of suitable scaling laws of flow entropy and maximum flow entropy with water distribution network size, in the form of power laws. The obtained relationships allow comparing the flow entropy of water distribution networks with different size, and provide an easy tool to define the maximum achievable entropy of a specific water distribution network. An example of application of the obtained relationships to the design of a water distribution network is provided, showing how, with a constrained multi-objective optimization procedure, a tradeoff between network cost and robustness is easily identified.

]]>Entropy doi: 10.3390/e20020094

Authors: Lara Ortiz-Martin Pablo Picazo-Sanchez Pedro Peris-Lopez Juan Tapiador

The proliferation of wearable and implantable medical devices has given rise to an interest in developing security schemes suitable for these systems and the environment in which they operate. One area that has received much attention lately is the use of (human) biological signals as the basis for biometric authentication, identification and the generation of cryptographic keys. The heart signal (e.g., as recorded in an electrocardiogram) has been used by several researchers in the last few years. Specifically, the so-called Inter-Pulse Intervals (IPIs), i.e., the times between consecutive heartbeats, have been repeatedly pointed out as a potentially good source of entropy and are at the core of various recent authentication protocols. In this work, we report the results of a large-scale statistical study to determine whether such an assumption is (or not) upheld. For this, we have analyzed 19 public datasets of heart signals from the Physionet repository, spanning electrocardiograms from 1353 subjects sampled at different frequencies and with lengths that vary between a few minutes and several hours. We believe this is the largest dataset on this topic analyzed in the literature. We have then applied a standard battery of randomness tests to the extracted IPIs. Under the algorithms described in this paper and after analyzing these 19 public ECG datasets, our results raise doubts about the use of IPI values as a good source of randomness for cryptographic purposes. This has repercussions both in the security of some of the protocols proposed up to now and also in the design of future IPI-based schemes.
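To make the methodology concrete, a toy Python sketch (not the authors' pipeline) that extracts bits from IPIs and applies the NIST frequency (monobit) test, one member of the standard battery used in such studies:

```python
import math

def ipi_bits(r_peak_times_ms, n_bits=4):
    """Extract the n least significant bits of each inter-pulse interval (ms),
    a common heuristic in IPI-based key-generation schemes."""
    ipis = [b - a for a, b in zip(r_peak_times_ms, r_peak_times_ms[1:])]
    return [(ipi >> k) & 1 for ipi in ipis for k in range(n_bits)]

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p = erfc(|S| / sqrt(2 n)),
    where S is the +1/-1 sum of the bit sequence."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

assert monobit_p_value([0, 1] * 500) == 1.0  # perfectly balanced sequence passes
assert monobit_p_value([1] * 1000) < 0.01    # constant sequence fails
```

A p-value below the chosen significance level (typically 0.01) flags the bit stream as non-random for this particular test.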

]]>Entropy doi: 10.3390/e20020093

Authors: Axel Groniewsky Attila Imre

The shape of the working fluid’s temperature-entropy saturation boundary has a strong influence, not only on the process parameters and efficiency of the Organic Rankine Cycle, but also on the design (the layout) of the equipment. In this paper, working fluids are modelled by the Redlich-Kwong equation of state. It is demonstrated that a limiting isochoric heat capacity might exist between dry and wet fluids. With the Redlich-Kwong equation of state, this limit can be predicted with good accuracy for several fluids, including alkanes.

]]>Entropy doi: 10.3390/e20020092

Authors: Imran Khan Mohammad Zafar Mohammad Jan Jaime Lloret Mohammed Basheri Dhananjay Singh

Uplink and Downlink channel estimation in massive Multiple Input Multiple Output (MIMO) systems is an intricate issue because of the increasing channel matrix dimensions. The channel feedback overhead using traditional codebook schemes is very large, which consumes more bandwidth and decreases the overall system efficiency. The purpose of this paper is to decrease the channel estimation overhead by taking advantage of sparse attributes and also to optimize the Energy Efficiency (EE) of the system. To cope with this issue, we propose a novel approach by using Compressed-Sensing (CS), Block Iterative-Support-Detection (Block-ISD), Angle-of-Departure (AoD) and Structured Compressive Sampling Matching Pursuit (S-CoSaMP) algorithms to reduce the channel estimation overhead and compare them with the traditional algorithms. The CS uses temporal-correlation of time-varying channels to produce a Differential-Channel Impulse Response (DCIR) between two CIRs that are adjacent in time-slots. The DCIR has greater sparsity than the conventional CIRs and can thus be easily compressed. The Block-ISD uses spatial-correlation of the channels to obtain the block-sparsity which results in lower pilot-overhead. AoD quantizes the channels whose path-AoD variation is slower than that of the path-gains, and such information is utilized for reducing the overhead. S-CoSaMP deploys structured-sparsity to obtain reliable Channel-State-Information (CSI). MATLAB simulation results show that the proposed CS based algorithms reduce the feedback and pilot-overhead by a significant percentage and also improve the system capacity as compared with the traditional algorithms. Moreover, the EE level increases with increasing Base Station (BS) density, UE density and lowering hardware impairments level.

]]>Entropy doi: 10.3390/e20020091

Authors: Jozef Strečka

The mixed spin-1/2 and spin-S Ising model on the Union Jack (centered square) lattice with four different three-spin (triplet) interactions and the uniaxial single-ion anisotropy is exactly solved by establishing a rigorous mapping equivalence with the corresponding zero-field (symmetric) eight-vertex model on a dual square lattice. A rigorous proof of the aforementioned exact mapping equivalence is provided by two independent approaches exploiting either a graph-theoretical or spin representation of the zero-field eight-vertex model. The influence of the interaction anisotropy as well as the uniaxial single-ion anisotropy on phase transitions and critical phenomena is examined in particular. It is shown that the considered model exhibits a strong-universal critical behaviour with constant critical exponents when considering the isotropic model with four equal triplet interactions or the anisotropic model with one triplet interaction differing from the other three. The anisotropic models with two different triplet interactions, which are pairwise equal to each other, contrarily exhibit a weak-universal critical behaviour with critical exponents continuously varying with a relative strength of the triplet interactions as well as the uniaxial single-ion anisotropy. It is evidenced that the variations of critical exponents of the mixed-spin Ising models with the integer-valued spins S differ fundamentally from their counterparts with the half-odd-integer spins S.

]]>Entropy doi: 10.3390/e20020090

Authors: Harkaitz Eguiraun Oskar Casquero Iciar Martinez

The present study investigates the suitability of a machine vision-based method to detect deviations in the Shannon entropy (SE) of a European seabass (Dicentrarchus labrax) biological system fed with different selenium:mercury (Se:Hg) molar ratios. Four groups of fish were fed during 14 days with commercial feed (control) and with the same feed spiked with 0.5, 5 and 10 mg of MeHg per kg, giving Se:Hg molar ratios of 29.5 (control-C1); 6.6, 0.8 and 0.4 (C2, C3 and C4). The basal SE of C1 and C2 (Se:Hg &gt; 1) tended to increase during the experimental period, while that of C3 and C4 (Se:Hg &lt; 1) tended to decrease. In addition, the differences in the SE of the four systems in response to a stochastic event minus that of the respective basal states were less pronounced in the systems fed with Se:Hg molar ratios lower than one (C3 and C4). These results indicate that the SE may be a suitable indicator for the prediction of seafood safety and fish health (i.e., reflecting the Se:Hg molar ratio and not the Hg concentration alone) before pathological symptoms appear. We hope that this work can serve as a first step for further investigations to confirm and validate the present results prior to their potential implementation in practical settings.

]]>Entropy doi: 10.3390/e20020089

Authors: Shengwei Huang Chengzhou Li Tianyu Tan Peng Fu Ligang Wang Yongping Yang

To maximize the system-level heat integration, three retrofit concepts of waste heat recovery via organic Rankine cycle (ORC), in-depth boiler-turbine integration, and coupling of both are proposed, analyzed and comprehensively compared in terms of thermodynamic and economic performances. For thermodynamic analysis, exergy analysis is employed with grand composite curves illustrated to identify how the systems are fundamentally and quantitatively improved, and to highlight key processes for system improvement. For economic analysis, annual revenue and investment payback period are calculated based on the estimation of capital investment of each component to identify the economic feasibility and competitiveness of each retrofit concept proposed. The results show that the in-depth boiler-turbine integration achieves a better temperature match of heat flows involved for different fluids and multi-stage air preheating, thus a significant improvement of power output (23.99 MW), which is much larger than that of the system with only ORC (6.49 MW). This is mainly due to the limitation of the ultra-low temperature (from 135 to 75 °C) heat available from the flue gas for ORC. The thermodynamic improvement is mostly contributed by the reduction of exergy destruction within the boiler subsystem, which is eventually converted to mechanical power; the exergy destruction within the turbine system is almost unchanged for the three concepts. The selection of ORC working fluids is performed to maximize the power output. Due to the low-grade heat source, the cycle with R11 offers the largest additional net power generation but is not significantly better than the other preselected working fluids. Economically, the in-depth boiler-turbine integration is the most economically competitive solution with a payback period of only 0.78 years. The ORC concept is less attractive as a sole application due to a long payback time (2.26 years). However, by coupling both concepts, a net power output of 26.51 MW and a payback time of almost one year are achieved, which may promote the large-scale production and deployment of ORC with a cost reduction and competitiveness enhancement.

]]>Entropy doi: 10.3390/e20020088

Authors: Yan Li Shaoyi Xu

Licensed Assisted Access (LAA) is considered one of the latest groundbreaking innovations to provide high performance in future 5G. Coexistence schemes such as Listen Before Talk (LBT) and Carrier Sensing and Adaptive Transmission (CSAT) have been proven to be good methods to share spectrums, and they are WiFi friendly. In this paper, a modified LBT-based CSAT scheme is proposed which can effectively reduce the collision at the moment when Long Term Evolution (LTE) starts to transmit data in CSAT mode. To make full use of the valuable spectrum resources, the throughput of both LAA and WiFi systems should be improved. Thus, a two-layer Coalition-Auction Game-based Transaction (CAGT) mechanism is proposed in this paper to optimize the performance of the two systems. In the first layer, a coalition among Access Points (APs) is built to balance the WiFi stations and maximize the WiFi throughput. The main idea of the devised coalition forming is to merge the light-loaded APs with heavy-loaded APs into a coalition; consequently, the data of the overloaded APs can be offloaded to the light-loaded APs. Next, an auction game between the LAA and WiFi systems is used to gain a win–win strategy, in which the LAA Base Station (BS) is the auctioneer and AP coalitions are bidders. Thus, the throughput of both systems is improved. Simulation results demonstrate that the proposed scheme can improve the performance of both systems effectively.

]]>Entropy doi: 10.3390/e20020087

Authors: Francisco Vazquez-Araujo Adriana Dapena María Souto-Salorio Paula Castro

The computation of a set constituted by few vertices to define a virtual backbone supporting information interchange is a problem that arises in many areas when analysing networks of different natures, like wireless, brain, or social networks. Recent papers propose obtaining such a set of vertices by computing the connected dominating set (CDS) of a graph. In recent works, the CDS has been obtained by considering that all vertices exhibit similar characteristics. However, that assumption is not valid for complex networks in which their vertices can play different roles. Therefore, we propose finding the CDS by taking into account several metrics which measure the importance of each network vertex, e.g., error probability, entropy, or entropy variation (EV).

]]>Entropy doi: 10.3390/e20020086

Authors: Guanghui Xu Yasser Shekofteh Akif Akgül Chunbiao Li Shirin Panahi

In this paper, we introduce a new chaotic system that is used for an engineering application, namely signal encryption. It has some interesting features, and it was successfully implemented and manufactured as a real circuit serving as a random number generator. In addition, we provide a parameter estimation method to extract chaotic model parameters from the real data of the chaotic circuit. The parameter estimation method is based on the attractor distribution modeling in the state space, which is compatible with the chaotic system characteristics. Here, a Gaussian mixture model (GMM) is used as a main part of cost function computations in the parameter estimation method. To optimize the cost function, we also apply two recent efficient optimization methods: WOA (Whale Optimization Algorithm), and MVO (Multi-Verse Optimizer) algorithms. The results show the success of the parameter estimation procedure.

]]>Entropy doi: 10.3390/e20020085

Authors: Łukasz Dębowski

As we discuss, a stationary stochastic process is nonergodic when a random persistent topic can be detected in the infinite random text sampled from the process, whereas we call the process strongly nonergodic when an infinite sequence of independent random bits, called probabilistic facts, is needed to describe this topic completely. Replacing probabilistic facts with an algorithmically random sequence of bits, called algorithmic facts, we adapt this property back to ergodic processes. Subsequently, we call a process perigraphic if the number of algorithmic facts which can be inferred from a finite text sampled from the process grows like a power of the text length. We present a simple example of such a process. Moreover, we demonstrate an assertion which we call the theorem about facts and words. This proposition states that the number of probabilistic or algorithmic facts which can be inferred from a text drawn from a process must be roughly smaller than the number of distinct word-like strings detected in this text by means of the Prediction by Partial Matching (PPM) compression algorithm. We also observe that the number of word-like strings for a sample of plays by Shakespeare follows an empirical stepwise power law, in stark contrast to Markov processes. Hence, we suppose that natural language considered as a process is not only non-Markov but also perigraphic.

]]>Entropy doi: 10.3390/e20020084

Authors: Fabio Lingua Andrea Richaud Vittorio Penna

Motivated by the importance of entanglement and correlation indicators in the analysis of quantum systems, we study the equilibrium and the bipartite residual entropy in a two-species Bose–Hubbard dimer when the spatial phase separation of the two species takes place. We consider both the zero and non-zero-temperature regime. We present different kinds of residual entropies (each one associated with a different way of partitioning the system), and we show that they strictly depend on the specific quantum phase characterizing the two species (supermixed, mixed or demixed) even at finite temperature. To provide a deeper physical insight into the zero-temperature scenario, we apply the fully-analytical variational approach based on su(2) coherent states and provide a considerably good approximation of the entanglement entropy. Finally, we show that the effectiveness of bipartite residual entropy as a critical indicator at non-zero temperature is unchanged when considering a restricted combination of energy eigenstates.

]]>Entropy doi: 10.3390/e20020083

Authors: José Isidro Pedro Fernández de Córdoba

The holographic principle sets an upper bound on the total (Boltzmann) entropy content of the Universe at around 10^123 k_B (k_B being Boltzmann’s constant). In this work we point out the existence of a remarkable duality between nonrelativistic quantum mechanics on the one hand, and Newtonian cosmology on the other. Specifically, nonrelativistic quantum mechanics has a quantum probability fluid that exactly mimics the behaviour of the cosmological fluid, the latter considered in the Newtonian approximation. One proves that the equations governing the cosmological fluid (the Euler equation and the continuity equation) become the very equations that govern the quantum probability fluid after applying the Madelung transformation to the Schroedinger wavefunction. Under the assumption that gravitational equipotential surfaces can be identified with isoentropic surfaces, this model allows for a simple computation of the gravitational entropy of a Newtonian Universe. In a first approximation, we model the cosmological fluid as the quantum probability fluid of free Schroedinger waves. We find that this model Universe saturates the holographic bound. As a second approximation, we include the Hubble expansion of the galaxies. The corresponding Schroedinger waves lead to a value of the entropy lying three orders of magnitude below the holographic bound. Current work on a fully relativistic extension of our present model can be expected to yield results in even better agreement with empirical estimates of the entropy of the Universe.
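The Madelung transformation invoked above can be sketched in its standard textbook form (this is the general construction, not the paper's specific cosmological application): writing the wavefunction in polar form turns the Schroedinger equation into a pair of fluid equations.

```latex
\psi = \sqrt{\rho}\, e^{iS/\hbar}
\quad\Longrightarrow\quad
\begin{cases}
\partial_t \rho + \nabla\!\cdot\!\left(\rho\,\dfrac{\nabla S}{m}\right) = 0
  & \text{(continuity)}\\[2ex]
\partial_t S + \dfrac{(\nabla S)^2}{2m} + V
  - \dfrac{\hbar^2}{2m}\,\dfrac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0
  & \text{(Hamilton--Jacobi with quantum potential)}
\end{cases}
```

Taking the gradient of the second equation and identifying the velocity field v = ∇S/m yields the Euler equation referred to in the abstract.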

]]>Entropy doi: 10.3390/e20020082

Authors: David Looney Tricia Adjei Danilo Mandic

Approximate and sample entropy (AE and SE) provide robust measures of the deterministic or stochastic content of a time series (regularity), as well as the degree of structural richness (complexity), through operations at multiple data scales. Despite the success of the univariate algorithms, multivariate sample entropy (mSE) algorithms are still in their infancy and have considerable shortcomings. Not only are existing mSE algorithms unable to analyse within- and cross-channel dynamics, they can counter-intuitively interpret increased correlation between variates as decreased regularity. To this end, we first revisit the embedding of multivariate delay vectors (DVs), critical to ensuring physically meaningful and accurate analysis. We next propose a novel mSE algorithm and demonstrate its improved performance over existing work, for synthetic data and for classifying wake and sleep states from real-world physiological data. It is furthermore revealed that, unlike other tools, such as the correlation of phase synchrony, synchronized regularity dynamics are uniquely identified via mSE analysis. In addition, a model for the operation of this novel algorithm in the presence of white Gaussian noise is presented, which, in contrast to the existing algorithms, reveals for the first time that increasing correlation between different variates reduces entropy.

]]>Entropy doi: 10.3390/e20020072

Authors: Luis Quezada Eduardo Nahmad-Achar

We show that the entropy of entanglement is sensitive to the coherent quantum phase transition between the normal and super-radiant regions of a system of a finite number of three-level atoms interacting, in the dipolar approximation, with a one-mode electromagnetic field. The atoms are treated as semi-distinguishable using different cooperation numbers and representations of SU(3), variables which are relevant to the sensitivity of the entropy to the transition. The results are computed for all three possible configurations (Ξ, Λ and V) of the three-level atoms.

]]>Entropy doi: 10.3390/e20020051

Authors: Oliver M. Cliff Mikhail Prokopenko Robert Fitch

The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems.
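For context on the transfer entropy measure that the scoring function builds on, a minimal plug-in estimator for discrete series with history length 1 might look as follows. This is an illustrative sketch only: the function name and histogram-based estimation are assumptions, and the paper itself works with continuous Lorenz and Rössler dynamics under a more careful formulation.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy from x to y (history length 1):
    sum over (y_{t+1}, y_t, x_t) of p * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                      # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]         # p(y_{t+1} | y_t, x_t)
        p_cond_marg = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_{t+1} | y_t)
        te += p_joint * log2(p_cond_full / p_cond_marg)
    return te
```

For binary series where y copies x with a one-step delay, the estimate is close to 1 bit in the causal direction and close to 0 in the reverse direction.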

]]>Entropy doi: 10.3390/e20020043

Authors: Sara Gasparini Maurizio Campolo Cosimo Ieracitano Nadia Mammone Edoardo Ferlazzo Chiara Sueri Giovanbattista Tripodi Umberto Aguglia Francesco Morabito

The use of a deep neural network scheme is proposed to help clinicians solve a difficult diagnostic problem in neurology. The proposed multilayer architecture includes a feature engineering step (based on a time-frequency transformation), a double compressing stage trained by unsupervised learning, and a classification stage trained by supervised learning. After fine-tuning, the deep network is able to discriminate well between patients and controls, with around 90% sensitivity and specificity. This deep model gives better classification performance than some other standard discriminative learning algorithms. As clinical problems call for explainable decisions, an effort has been made to qualitatively justify the classification results. The main novelty of this paper is to give an entropic interpretation of how the deep scheme works and reaches its final decision.

]]>Entropy doi: 10.3390/e20020068

Authors: Huining Xu Hengzhen Li Yiqiu Tan Linbing Wang Yue Hou

The thermodynamic behavior of asphalt mixtures is critical to engineers since it directly relates to the damage in asphalt mixtures. However, most current research on the freeze-thaw damage of asphalt mixtures focuses on the bulk body at the macroscale and lacks a fundamental understanding of the thermodynamic behaviors of asphalt mixtures from the microscale perspective. In this paper, to identify the important thermodynamic behaviors of asphalt mixtures under freeze-thaw loading cycles, information entropy theory, an X-ray computerized tomography (CT) scanner and digital image processing technology are employed. The voids, the average size of the voids, the connected porosity, and the void number are extracted from the scanned images. Based on the experiments and the CT-scanned images, the information entropy evolution of the asphalt mixtures under different freeze-thaw cycles is calculated and the relationship between the change of information entropy and the pore structure characteristics is established. Then, the influences of different freezing and thawing conditions on the thermodynamic behaviors of asphalt mixtures are compared. The combination of information entropy theory and CT scanning proposed in this paper provides an innovative approach to investigate the thermodynamic behaviors of asphalt mixtures and a new way to analyze freeze-thaw damage in asphalt mixtures.

]]>Entropy doi: 10.3390/e20010081

Authors: Christina Papenfuss Wolfgang Muschik

Internal and mesoscopic variables differ fundamentally from each other: both are state space variables, but mesoscopic variables are additionally equipped with a distribution function introducing a statistical item into consideration which is missing in connection with internal variables. Thus, the alignment tensor of the liquid crystal theory can be introduced as an internal variable or as one generated by a mesoscopic background using the microscopic director as a mesoscopic variable. Because the mesoscopic variable is part of the state space, the corresponding balance equations change into mesoscopic balances, and additionally an evolution equation of the mesoscopic distribution function appears. The flexibility of the mesoscopic concept is not only demonstrated for liquid crystals, but is also discussed for dipolar media and flexible fibers.

]]>Entropy doi: 10.3390/e20010080

Authors: Zhibing Li Yimin Liu Wei Zheng Chengguang Bao

For spin-1 condensates, the spatial degrees of freedom can be considered as frozen at zero temperature, while the spin degrees of freedom remain free. Under this condition, the entanglement entropy has been derived exactly in analytical form. The entanglement entropy is found to decrease monotonically with the increase of the magnetic polarization, as expected. However, for the ground state in the polar phase, an extremely steep fall of the entropy is found when the polarization emerges from zero. The fall then becomes a gentle descent after the polarization exceeds a turning point.

]]>Entropy doi: 10.3390/e20010079

Authors: Makan Zamanipour

The paper proposes a new secure scheme for simultaneous wireless power and information transfer (SWIPT) frameworks. We consider an orthogonal frequency division multiplexing (OFDM)-based game related to a multi-input multi-output multi-antenna eavesdropper (MIMOME) strategy. The transceiver may face imperfect channel state information (ICSI) at the transmitter side. Power and information are transferred over orthogonally allocated sub-carriers. We propose a two-step Stackelberg game to optimise the utility functions of both the power and information parts. The price for the first stage (related to information) is the total power of the other sub-carriers over which the energy is supported; in this stage, the sum secrecy rate is maximised. The second level of the proposed Stackelberg game is associated with the energy part; here, the price is the total power of the other sub-carriers over which the information is transferred, and the total transferred power is maximised. Subsequently, optimal and near-optimal mathematical solutions are derived for some special cases, such as the ICSI one. Finally, simulations validate the scheme, confirming the tightness and efficiency of our contribution.

]]>Entropy doi: 10.3390/e20010078

Authors: Xin Liang Yu-Gui Yang Feng Gao Xiao-Jun Yang Yi Xue

In this paper, an anomalous advection-dispersion model involving a new general Liouville–Caputo fractional-order derivative is addressed for the first time. The series solutions of the general fractional advection-dispersion equations are obtained with the aid of the Laplace transform. The results are given to demonstrate the efficiency of the proposed formulations to describe the anomalous advection dispersion processes.

]]>Entropy doi: 10.3390/e20010067

Authors: Michal Munk Lubomir Benko

The paper examines the use of entropy in the field of web usage mining. Entropy offers an alternative way of determining the ratio of auxiliary pages in session identification using the Reference Length method. The experiment was conducted on two different web portals. The first log file was obtained from a course on a virtual learning environment web portal. The second log file came from a web portal with anonymous access. A comparison of the entropy-based estimate of the ratio of auxiliary pages with a sitemap-based estimate showed that, given an abundant sitemap, entropy can be a full-valued substitute for the estimate of the ratio of auxiliary pages.

]]>Entropy doi: 10.3390/e20010073

Authors: Zhipeng Wang Limin Jia Yong Qin

Rotating machinery often works under severe and variable operation conditions, which brings challenges to fault diagnosis. To deal with this challenge, this paper discusses the concept of adaptive diagnosis, which means diagnosing faults under variable operation conditions self-adaptively, with little prior knowledge or human intervention. To this end, a novel algorithm is proposed: the information geometrical extreme learning machine with kernel (IG-KELM). From the perspective of information geometry, the structure and Riemannian metric of kernel ELM (KELM) are specified. Based on this geometrical structure, an IG-based conformal transformation is created to improve the generalization ability and self-adaptability of KELM. The proposed IG-KELM, in conjunction with variational mode decomposition (VMD) and singular value decomposition (SVD), is utilized for adaptive diagnosis: (1) VMD, a new self-adaptive signal processing algorithm, is used to decompose the raw signals into several intrinsic mode functions (IMFs). (2) SVD is used to extract the intrinsic characteristics from the matrix constructed from the IMFs. (3) IG-KELM is used to diagnose faults under variable conditions self-adaptively, with no requirement of prior knowledge or human intervention. Finally, the proposed method was applied to fault diagnosis of a bearing and a hydraulic pump. The results show that the proposed method outperforms the conventional method by up to 7.25% and 7.78% in accuracy, respectively.

]]>Entropy doi: 10.3390/e20010075

Authors: Jinxing Zhao Fangchang Xu

Finite-time thermodynamic models for an Otto cycle, an Atkinson cycle, an over-expansion Miller cycle (M1), a Miller cycle through late intake valve closure (LIVC) (M2) and an LIVC Miller cycle with constant compression ratio (M3) have been established. The models for the two LIVC Miller cycles are first developed, and the heat-transfer and friction losses are considered with the effects of real engine parameters. A comparative analysis of the energy losses and performances has been conducted. The optimum compression-ratio ranges for the efficiency and effective power are different. The comparative results for cycle performance are influenced jointly by the ratios of the energy losses and the cycle types. The Atkinson cycle has the maximum peak power and efficiency, but the minimum power density, while the M1 cycle can achieve the optimum comprehensive performance. The smaller net fuel amount and the high peak cylinder pressure (M3 cycle) have a significantly adverse effect on the heat-transfer and friction loss ratios of the M2 and M3 cycles, whose effective power and energy efficiency are always lower than those of the M1 and Atkinson cycles. When the weights of the heat-transfer and friction losses are greatly reduced, the M3 cycle has a significant advantage in energy efficiency. The results obtained can provide guidance for selecting the cycle type and optimizing the performance of a real engine.

]]>Entropy doi: 10.3390/e20010076

Authors: Saïd Maanan Bogdan Dumitrescu Ciprian Giurcăneanu

This work is focused on latent-variable graphical models for multivariate time series. We show how an algorithm which was originally used for finding zeros in the inverse of the covariance matrix can be generalized to identify the sparsity pattern of the inverse of the spectral density matrix. When applied to a given time series, the algorithm produces a set of candidate models. Various information theoretic (IT) criteria are employed for deciding the winner. A novel IT criterion, which is tailored to our model selection problem, is introduced. Some options for reducing the computational burden are proposed and tested via numerical examples. We conduct an empirical study in which the algorithm is compared with the state-of-the-art. The results are good, and the major advantage is that the subjective choices made by the user are less important than in the case of other methods.

]]>Entropy doi: 10.3390/e20010077

Authors: Massimiliano Zanin David Gómez-Andrés Irene Pulido-Valdeolivas Juan Martín-Gonzalo Javier López-López Samuel Pascual-Pascual Estrella Rausell

Cerebral palsy is a physical impairment stemming from a brain lesion at perinatal time, most of the time resulting in gait abnormalities: the first cause of severe disability in childhood. Gait study, and instrumental gait analysis in particular, has been receiving increasing attention in the last few years, for being the complex result of the interactions between different brain motor areas and thus a proxy in the understanding of the underlying neural dynamics. Yet, and in spite of its importance, little is still known about how the brain adapts to cerebral palsy and to its impaired gait and, consequently, about the best strategies for mitigating the disability. In this contribution, we present the hitherto first analysis of joint kinematics data using permutation entropy, comparing cerebral palsy children with a set of matched control subjects. We find a significant increase in the permutation entropy for the former group, thus indicating a more complex and erratic neural control of joints and a non-trivial relationship between the permutation entropy and the gait speed. We further show how this information theory measure can be used to train a data mining model able to forecast the child’s condition. We finally discuss the relevance of these results in clinical applications and specifically in the design of personalized medicine interventions.
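For readers unfamiliar with the measure used above, a minimal permutation entropy implementation might look like the following. This is a generic Bandt–Pompe sketch; the embedding order and any tie-handling used in the actual study are not specified here.

```python
from collections import Counter
from math import factorial, log2

def permutation_entropy(series, m=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of embedded vectors of length m, divided by log2(m!),
    so the result lies in [0, 1]."""
    # Each window of length m is mapped to the permutation that sorts it.
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: series[i + k]))
        for i in range(len(series) - m + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * log2(c / total) for c in patterns.values())
    return h / log2(factorial(m))
```

A strictly monotone series produces a single ordinal pattern (entropy 0), while i.i.d. noise makes all m! patterns roughly equiprobable (entropy near 1), which matches the "more erratic control" reading in the abstract.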

]]>Entropy doi: 10.3390/e20010074

Authors: Lingen Chen Qinghua Xiao Huijun Feng

Combining entransy theory with constructal theory, this mini-review paper summarizes the constructal optimization work of heat conduction, convective heat transfer, and mass transfer problems during the authors’ working time in the Naval University of Engineering. The entransy dissipation extremum principle (EDEP) is applied in constructal optimizations, and this paper is divided into three parts. The first part is constructal entransy dissipation rate minimizations of heat conduction and finned cooling problems. It includes constructal optimization for a “volume-to-point” heat-conduction assembly with a tapered element, constructal optimizations for “disc-to-point” heat-conduction assemblies with the premise of an optimized last-order construct and without this premise, and constructal optimizations for four kinds of fin assemblies: T-, Y-, umbrella-, and tree-shaped fins. The second part is constructal entransy dissipation rate minimizations of cooling channel and steam generator problems. It includes constructal optimizations for heat generating volumes with tree-shaped and parallel channels, constructal optimization for heat generating volume cooled by forced convection, and constructal optimization for a steam generator. The third part is constructal entransy dissipation rate minimizations of mass transfer problems. It includes constructal optimizations for “volume-to-point” rectangular assemblies with constant and tapered channels, and constructal optimizations for “disc-to-point” assemblies with the premise of an optimized last-order construct and without this premise. The results of the three parts show that the mean heat transfer temperature differences of the heat conduction assemblies are not always decreased when their internal complexity increases. The average heat transfer rate of the steam generator obtained by entransy dissipation rate maximization is increased by 58.7% compared with that obtained by heat transfer rate maximization. 
Compared with the rectangular mass transfer assembly with a constant high permeability pathway (HPP), the maximum pressure drops of the element and first-order assembly with tapered HPPs are decreased by 6% and 11%, respectively. The global transfer performances of the transfer bodies are improved after optimizations, and new design guidelines derived by EDEP, which are different from the conventional optimization objectives, are provided.

]]>Entropy doi: 10.3390/e20010071

Authors: Pan Zhao Benda Zhou Jixia Wang

In this paper we consider pricing problems for geometric average Asian options under a non-Gaussian model, in which the underlying stock price is driven by a process based on non-extensive statistical mechanics. The model can describe the peaked and fat-tailed characteristics of returns, so the description of the underlying asset price and the pricing of options are more accurate. Moreover, using the martingale method, we obtain closed-form solutions for geometric average Asian options. Furthermore, the numerical analysis shows that the model avoids underestimating risk relative to the Black-Scholes model.

]]>Entropy doi: 10.3390/e20010031

Authors: Ze Zhang Yu Hou Francis Kulacki

Comparative energy and exergy investigations are reported for a transcritical N2O refrigeration cycle with a throttling valve or with an expander when the gas cooler exit temperature varies from 30 to 55 °C and the evaporating temperature varies from −40 to 10 °C. The system performance is also compared with that of similar cycles using CO2. Results show that the N2O expander cycle exhibits a larger maximum cooling coefficient of performance (COP) and lower optimum discharge pressure than that of the CO2 expander cycle and N2O throttling valve cycle. It is found that in the N2O throttling valve cycle, the irreversibility of the throttling valve is maximum and the exergy losses of the gas cooler and compressor are ordered second and third, respectively. In the N2O expander cycle, the largest exergy loss occurs in the gas cooler, followed by the compressor and the expander. Compared with the CO2 expander cycle and N2O throttling valve cycle, the N2O expander cycle has the smallest component-specific exergy loss and the highest exergy efficiency at the same operating conditions and at the optimum discharge pressure. It is also proven that the maximum COP and the maximum exergy efficiency cannot be obtained at the same time for the investigated cycles.

]]>Entropy doi: 10.3390/e20010062

Authors: Vladimir Mladenovic Danijela Milosevic Miroslav Lutovac Yigang Cen Matjaz Debevc

This paper presents a method for shortening the computation time and reducing the number of math operations required in complex calculations for the analysis, simulation, and design of processes and systems. The method is suitable for education and engineering applications. The efficacy of the method is illustrated with a case study of a complex wireless communication system. The computer algebra system (CAS) was applied to formulate hypotheses and define the joint probability density function of a certain modulation technique. This innovative method was used to prepare microsimulation semi-symbolic analyses to fully specify the wireless system. The development of an iteration-based simulation method that provides closed-form solutions is presented. Previously, such expressions were solved using time-consuming numerical methods. Students can apply this method for performance analysis and to understand data transfer processes. Engineers and researchers may use the method to gain insight into the impact of the parameters necessary to properly transmit and detect information, unlike with traditional numerical methods. This research contributes to the field by improving the ability to obtain closed-form solutions of the probability density function and outage probability, and considerably improves time efficiency by shortening computation time and reducing the number of calculation operations.

]]>Entropy doi: 10.3390/e20010070

Authors: Bernhard Geiger

The generator matrices of polar codes and Reed–Muller codes are submatrices of the Kronecker product of a lower-triangular binary square matrix. For polar codes, the submatrix is generated by selecting rows according to their Bhattacharyya parameter, which is related to the error probability of sequential decoding. For Reed–Muller codes, the submatrix is generated by selecting rows according to their Hamming weight. In this work, we investigate the properties of the index sets selecting those rows, in the limit as the blocklength tends to infinity. We compute the Lebesgue measure and the Hausdorff dimension of these sets. We furthermore show that these sets are finely structured and self-similar in a well-defined sense, i.e., they have properties that are common to fractals.
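The row-selection idea can be sketched concretely for the Reed–Muller case, where the rule is explicit. This is a toy construction over 0/1 matrices; the names `F` and `rm_rows` are illustrative, and the polar-code case (selection by Bhattacharyya parameter) is not shown.

```python
F = [[1, 0], [1, 1]]  # the lower-triangular binary kernel

def kron(a, b):
    """Kronecker product of two 0/1 matrices (entries stay in {0, 1})."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def rm_rows(m, r):
    """Rows of the m-fold Kronecker power of F selected by the
    Reed-Muller rule: keep rows of Hamming weight >= 2^(m - r)."""
    g = [[1]]
    for _ in range(m):
        g = kron(g, F)
    return [row for row in g if sum(row) >= 2 ** (m - r)]
```

Row i of the m-fold Kronecker power has Hamming weight 2^popcount(i), so the weight threshold selects an index set determined by the binary expansion of i, which is the self-similar structure the paper studies in the blocklength limit.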

]]>Entropy doi: 10.3390/e20010069

Authors: Domenica Mirauda Marilena Pannone Annamaria De Vincenzo

The three-dimensional structure of river flow and the presence of secondary currents, mainly near walls, often cause the maximum cross-sectional velocity to occur below the free surface, which is known as the “dip” phenomenon. The present study proposes a theoretical model derived from the entropy theory to predict the velocity dip position along with the corresponding velocity value. Field data, collected at three ungauged sections located along the Alzette river in the Grand Duchy of Luxembourg and at three gauged sections located along three large rivers in Basilicata (southern Italy), were used to test its validity. The results show that the model is in good agreement with the experimental measurements and, when compared with other models documented in the literature, yields the least percentage error.

]]>Entropy doi: 10.3390/e20010047

Authors: Frederico Fazan Fernanda Brognara Rubens Fazan Junior Luiz Murta Junior Luiz Virgilio Silva

Quantifying complexity from heart rate variability (HRV) series is a challenging task, and multiscale entropy (MSE), along with its variants, has been demonstrated to be one of the most robust approaches to achieve this goal. Although physical training is known to be beneficial, there is little information about the long-term complexity changes induced by physical conditioning. The present study aimed to quantify the changes in physiological complexity elicited by physical training through multiscale entropy-based complexity measurements. Rats were subjected to a protocol of medium-intensity training (n = 13) or a sedentary protocol (n = 12). One-hour HRV series were obtained from all conscious rats five days after the experimental protocol. We estimated MSE, multiscale dispersion entropy (MDE) and multiscale SDiff_q from the HRV series. Multiscale SDiff_q is a recent approach that accounts for entropy differences between a given time series and its shuffled dynamics. From SDiff_q, three attributes (q-attributes) were derived, namely SDiff_qmax, q_max and q_zero. MSE, MDE and the multiscale q-attributes presented similar profiles, except for SDiff_qmax. q_max showed significant differences between trained and sedentary groups on time scales 6 to 20. Results suggest that physical training increases the system complexity and that multiscale q-attributes provide valuable information about the physiological complexity.
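All of the multiscale measures named above share a coarse-graining step before an entropy is computed at each scale. As context only, that step might be sketched as follows (the standard Costa-style procedure; the specific entropies applied per scale differ between MSE, MDE and the q-attributes):

```python
def coarse_grain(series, scale):
    """Costa-style coarse-graining: replace the series by the averages of
    consecutive non-overlapping windows of length `scale`.
    Scale 1 leaves the series unchanged (up to float conversion)."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale
            for i in range(n)]
```

A multiscale curve is then obtained by evaluating the chosen entropy on `coarse_grain(series, s)` for s = 1, 2, ..., 20, matching the time-scale range reported in the abstract.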

]]>Entropy doi: 10.3390/e20010065

Authors: Gagandeep Kaur Harish Garg

A cubic intuitionistic fuzzy (CIF) set is a hybrid set that can express an interval-valued intuitionistic fuzzy set and an intuitionistic fuzzy set simultaneously, and thus contains much more information for handling uncertainty in the data. Unfortunately, there has been no research on aggregation operators on CIF sets so far. Since an aggregation operator is an important mathematical tool in decision-making problems, the present paper proposes some new Bonferroni mean and weighted Bonferroni mean averaging operators between cubic intuitionistic fuzzy numbers for aggregating the different preferences of the decision-maker. We then develop a decision-making method based on the proposed operators under the cubic intuitionistic fuzzy environment and illustrate it with a numerical example. Finally, a comparative analysis between the proposed and existing approaches has been performed to illustrate the applicability and feasibility of the developed decision-making method.

]]>Entropy doi: 10.3390/e20010063

Authors: Xiao-Li Ding Juan Nieto

In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation of constant parameters to obtain the solutions of nonhomogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. Finally, we give three examples to demonstrate the applicability of our obtained results.

]]>Entropy doi: 10.3390/e20010066

Authors: Entropy Editorial Office

Peer review is an essential part in the publication process, ensuring that Entropy maintains high quality standards for its published papers.[...]

]]>Entropy doi: 10.3390/e20010064

Authors: Yaodong Li Lingyu Chen Haibin Shi Xuemin Hong Jianghong Shi

Personalized content retrieval service has become a major information service that consumes a large portion of mobile Internet traffic. Joint content recommendation and delivery is a promising design philosophy that could effectively improve the overall user experience with personalized content retrieval services. Existing research mostly focused on a push-type design paradigm called proactive caching, which, however, has multiple inherent drawbacks such as high device cost and low energy efficiency. This paper proposes a novel, interactive joint content recommendation and delivery system as an alternative to overcome the drawbacks of proactive caching systems. We present several optimal and heuristic algorithms for the proposed system and analyze the system performance in terms of user interest and transmission outage probability. Some theoretical performance bounds of the system are also derived. The effectiveness of the proposed system and algorithms is validated by simulation results.

]]>Entropy doi: 10.3390/e20010061

Authors: George Manis Md Aktaruzzaman Roberto Sassi

Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used on long series or with a large number of signals. The computationally intensive part is the similarity check between points in m dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogous large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
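The straightforward baseline implementation mentioned in the abstract follows directly from the definition and can be sketched as below. The tolerance r is taken here as an absolute value (in practice it is usually set to a fraction of the series' standard deviation); the optimized algorithms the paper proposes are not reproduced.

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B), where B counts template pairs of length m
    within tolerance r under the maximum (Chebyshev) norm, and A does
    the same for length m+1. Self-matches are excluded."""
    n = len(x)

    def matches(mm):
        count = 0
        for i in range(n - m):            # same N - m templates for A and B
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -log(a / b) if a > 0 and b > 0 else float("inf")
```

The double loop over template pairs is the O(N^2) similarity check whose cost the proposed algorithms reduce by skipping comparisons known in advance to fail.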

]]>Entropy doi: 10.3390/e20010060

Authors: Blaž Meden Žiga Emeršič Vitomir Štruc Peter Peer

Image and video data are today being shared between government entities and other relevant stakeholders on a regular basis and require careful handling of the personal information contained therein. A popular approach to ensuring privacy protection in such data is the use of deidentification techniques, which aim at concealing the identity of individuals in the imagery while still preserving certain aspects of the data after deidentification. In this work, we propose a novel approach to face deidentification, called k-Same-Net, which combines recent Generative Neural Networks (GNNs) with the well-known k-Anonymity mechanism and provides formal guarantees regarding privacy protection on a closed set of identities. Our GNN is able to generate synthetic surrogate face images for deidentification by seamlessly combining features of identities used to train the GNN model. Furthermore, it allows us to control the image-generation process with a small set of appearance-related parameters that can be used to alter specific aspects (e.g., facial expressions, age, gender) of the synthesized surrogate images. We demonstrate the feasibility of k-Same-Net in comprehensive experiments on the XM2VTS and CK+ datasets. We evaluate the efficacy of the proposed approach through reidentification experiments with recent recognition models and compare our results with competing deidentification techniques from the literature. We also present facial expression recognition experiments to demonstrate the utility-preservation capabilities of k-Same-Net. Our experimental results suggest that k-Same-Net is a viable option for facial deidentification that exhibits several desirable characteristics when compared to existing solutions in this area.

]]>Entropy doi: 10.3390/e20010036

Authors: Jacques Demongeot Mariem Jelassi Hana Hazgui Slimane Ben Miled Narjes Bellamine Ben Saoud Carla Taramasco

Networks used in biological applications at different scales (molecule, cell and population) are of different types (neuronal, genetic, and social), but they share the same dynamical concepts, in their continuous differential versions (e.g., the non-linear Wilson-Cowan system) as well as in their discrete Boolean versions (e.g., the non-linear Hopfield system); in both cases, the notion of the interaction graph G(J) associated with the Jacobian matrix J, as well as the concepts of frustrated nodes, positive or negative circuits of G(J), kinetic energy, entropy, attractors, structural stability, etc., are relevant and useful for studying the dynamics and the robustness of these systems. We give some general results available for both continuous and discrete biological networks, and then study specific applications of three new notions of entropy: (i) attractor entropy, (ii) isochronal entropy and (iii) entropy centrality, in three domains: a neural network involved in memory evocation, a genetic network responsible for iron control, and a social network accounting for the spread of obesity in a high-school environment.

]]>Entropy doi: 10.3390/e20010059

Authors: Paweł Dorosz Paweł Wojcieszak Ziemowit Malecha

The share of LNG (Liquefied Natural Gas) in the global energy market is steadily increasing. One possible application of LNG is as a fuel for transportation. Stricter air pollution regulations and emission controls have made natural gas a promising alternative to liquid petroleum fuels, especially in the case of heavy transport. However, in most LNG-fueled vehicles, the physical exergy of LNG is destroyed in the regasification process. This paper investigates possible LNG exergy recovery systems for transportation. The analyses focus on “cold energy” recovery systems, in which the low-temperature enthalpy of LNG may be used as cooling power in air conditioning or refrigeration. Moreover, four exergy recovery systems that use LNG as a low-temperature heat sink to produce electric power are analyzed. These include single-stage and two-stage direct expansion systems, an ORC (Organic Rankine Cycle) system, and a combined system (ORC + direct expansion). The optimization of the above-mentioned LNG power cycles and exergy analyses are also discussed, with the identification of exergy losses in all components. The analyzed systems achieved exergetic efficiencies in the range of 20% to 36%, which corresponds to a net work in the range of 214 to 380 kJ/kg of LNG.

]]>Entropy doi: 10.3390/e20010056

Authors: Christian Corda Mehdi FatehiNia MohammadReza Molaei Yamin Sayyari

In this paper, we consider the metric entropies of the maps of an iterated function system deduced from a black hole, which are known as the Bekenstein–Hawking entropy and its subleading corrections. More precisely, we consider the model of a Bohr-like black hole that has recently been analysed in several papers in the literature, obtaining the intriguing result that the metric entropies of a black hole are created by the metric entropies of functions generated by the black hole principal quantum numbers, i.e., by the black hole quantum levels. We present a new type of topological entropy for general iterated function systems based on a new kind of inverse of covers. The notion of metric entropy for an Iterated Function System (IFS) is then considered, and we prove that these definitions for the topological entropy of IFSs are equivalent. It is shown that this kind of topological entropy retains some properties held by the classic definition of topological entropy for a continuous map. We also consider average entropy as another type of topological entropy for an IFS, which is based on the topological entropies of its elements and is also invariant under topological conjugacy. The relation between Axiom A and the average entropy is investigated.

]]>Entropy doi: 10.3390/e20010057

Authors: Raquel Cervigón Francisco Castells José Gómez-Pulido Julián Pérez-Villacastín Javier Moreno

Atrial fibrillation (AF) is already the most commonly occurring arrhythmia. Catheter pulmonary vein ablation has emerged as a treatment able to make the arrhythmia disappear; nevertheless, recurrence of the arrhythmia is very frequent. In this study, we propose an analysis of the electrical signals recorded from bipolar catheters at three locations, the pulmonary veins and the right and left atria, before and during the ablation procedure. Principal Component Analysis (PCA) was applied to reduce the data dimension, and Granger causality and divergence techniques were applied to analyse connectivity across the atria in three main regions: the pulmonary veins, the left atrium (LA) and the right atrium (RA). The results showed that, before the procedure, patients with arrhythmia recurrence had greater connectivity between atrial areas. Moreover, during the ablation procedure, both atria were more connected in patients with arrhythmia recurrence than in patients who maintained sinus rhythm. These results can be helpful for designing procedures to end AF.

]]>Entropy doi: 10.3390/e20010055

Authors: Paulo Hubert Linilson Padovese Julio Stern

The problem of event detection in general noisy signals arises in many applications; usually, either a functional form of the event is available, or a previously annotated sample with instances of the event can be used to train a classification algorithm. There are situations, however, where neither functional forms nor annotated samples are available; it is then necessary to apply other strategies to separate and characterize events. In this work, we analyze 15-min samples of an acoustic signal and are interested in separating sections, or segments, of the signal which are likely to contain significant events. To that end, we apply a sequential algorithm whose only assumption is that an event alters the energy of the signal. The algorithm is entirely based on Bayesian methods.
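
The detector itself is fully Bayesian; as a rough, hypothetical illustration of its single working assumption (an event alters the energy of the signal), one can flag fixed-size windows whose energy departs from the background level. The window size and threshold factor below are arbitrary choices for the sketch, not the paper's:

```python
import numpy as np

def energetic_windows(signal, win, factor=3.0):
    """Indices of fixed-size windows whose energy exceeds `factor`
    times the median window energy (a crude event flag)."""
    n = len(signal) // win
    energy = np.array([np.sum(signal[k * win:(k + 1) * win] ** 2)
                       for k in range(n)])
    return np.flatnonzero(energy > factor * np.median(energy))

# Synthetic record: background noise with one loud burst (the "event").
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10_000)
x[3000:3500] += rng.normal(0.0, 10.0, 500)

flagged = energetic_windows(x, win=500)   # window 6 covers samples 3000-3499
```

A sequential Bayesian version would replace the fixed threshold with posterior inference over segment boundaries, but the energy statistic plays the same role.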

]]>Entropy doi: 10.3390/e20010058

Authors: Alicia Sendrowski Kazi Sadid Ehab Meselhe Wayne Wagner David Mohrig Paola Passalacqua

The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, and solute transport through the deltaic network. We propose using transfer entropy (TE) to validate model results. TE quantifies the information transferred between variables in terms of strength, timescale, and direction. Using water level data collected in the distributary channels and inter-channel islands of Wax Lake Delta, Louisiana, USA, along with modeled water level data generated for the same locations using Delft3D, we assess how well couplings between external drivers (river discharge, tides, wind) and modeled water levels reproduce the observed data couplings. We perform this operation through time using ten-day windows. Modeled and observed couplings compare well; their differences reflect the spatial parameterization of wind and roughness in the model, which prevents the model from capturing high frequency fluctuations of water level. The model captures couplings better in channels than on islands, suggesting that mechanisms of channel-island connectivity are not fully represented in the model. Overall, TE serves as an additional validation tool to quantify the couplings of the system of interest at multiple spatial and temporal scales.
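
For intuition, TE from X to Y with history length 1 is T(X→Y) = Σ p(y_{t+1}, y_t, x_t) log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. A minimal plug-in estimator for discrete symbol series might look like the sketch below; the study works with continuous water levels, longer lags, and ten-day windows, none of which is reproduced here:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y (bits), history length 1,
    for discrete symbol sequences of equal length."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))    # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20_000)
y = np.concatenate(([0], x[:-1]))        # y copies x with a one-step lag
te_xy = transfer_entropy(x, y)           # strong directed coupling: ~1 bit
te_zy = transfer_entropy(rng.integers(0, 2, 20_000), y)  # no coupling: ~0 bits
```

The estimator recovers the strength and direction of the coupling: the lagged copy transfers close to one bit, while an independent driver transfers essentially none.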

]]>Entropy doi: 10.3390/e20010054

Authors: Weiwei Zhang Jinde Cao Dingyuan Chen Fuad Alsaadi

This paper discusses the synchronization of fractional-order complex-valued neural networks (FOCVNN) in the presence of time delay. Synchronization criteria are obtained through the employment of a linear feedback control and the comparison theorem for fractional-order linear systems with delay. The feasibility and effectiveness of the proposed approach are validated through numerical simulations.

]]>Entropy doi: 10.3390/e20010052

Authors: Karsten Schwalbe Karl Hoffmann

In this article, a Novikov engine with fluctuating hot heat bath temperature is presented. Based on this model, the performance measure of maximum expected power, as well as the corresponding efficiency and entropy production rate, is investigated for five different stationary distributions: continuous uniform, normal, triangular, quadratic, and Pareto. It is found that the performance measures increase monotonically with increasing expectation value and increasing standard deviation of the distributions. Additionally, we show that the distribution has only little influence on the performance measures for small standard deviations. For larger values of the standard deviation, the performance measures in the case of the Pareto distribution differ significantly from those of the other distributions. These observations are explained by a comparison of the Taylor expansions in terms of the distributions’ standard deviations. For the considered symmetric distributions, an extension of the well-known Curzon–Ahlborn efficiency to a stochastic Novikov engine is given.
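
To make the setup concrete: for an endoreversible Novikov engine with heat conductance K, the maximum power at hot-bath temperature T_h is K(√T_h − √T_c)², attained at the Curzon–Ahlborn efficiency η_CA = 1 − √(T_c/T_h). A sketch of the expected maximum power under a fluctuating T_h follows; K and the temperatures are illustrative values, not the article's:

```python
import numpy as np

T_c = 300.0   # cold bath temperature [K]
K = 1.0       # heat conductance [W/K], illustrative

def max_power(T_h):
    """Maximum power of the endoreversible Novikov engine."""
    return K * (np.sqrt(T_h) - np.sqrt(T_c)) ** 2

def eta_CA(T_h):
    """Curzon-Ahlborn efficiency at maximum power."""
    return 1.0 - np.sqrt(T_c / T_h)

# Two hot-bath distributions with identical mean and standard deviation.
rng = np.random.default_rng(0)
mean, std = 600.0, 30.0
half_width = std * np.sqrt(3.0)   # uniform half-width matching the target std
P_uniform = max_power(
    rng.uniform(mean - half_width, mean + half_width, 100_000)).mean()
P_normal = max_power(rng.normal(mean, std, 100_000)).mean()
```

For this small relative spread the two expected powers agree to well under one percent, consistent with the observation that the choice of distribution matters little for small standard deviations.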

]]>Entropy doi: 10.3390/e20010053

Authors: Linyun Huang Youngchul Bae

Based on a fractional-order nonlinear love model with a periodic function as an external environment, we analyze the characteristics of its chaotic dynamics. We analyze the relationship between the chaotic dynamics of the fractional-order love model with an external environment and the values of the fractional orders (α, β) when the parameters are fixed. Meanwhile, we also study the relationship between the chaotic dynamics of the model and the parameters (a, b, c, d) when the fractional order of the system is fixed. When the parameters of the model are fixed, the system exhibits distinct chaotic regimes as the fractional orders (α, β) vary. When the fractional order (α = β) of the system is fixed, the system alternates between periodic and chaotic states as the parameters change.

]]>Entropy doi: 10.3390/e20010050

Authors: Pranab Mondal Somchai Wongwises

We investigate the effect of viscous dissipation on the thermal transport characteristics of heat and its consequences in terms of the entropy-generation rate in a circular Couette flow. We consider the flow of a Newtonian fluid through a narrow annular space between two asymmetrically-heated concentric micro-cylinders, where the inner cylinder rotates at a constant speed. Employing an analytical methodology, we obtain the temperature distribution and its consequential effects on the heat-transfer and entropy-generation behaviour in the annulus. We highlight the significant effect of viscous dissipation on the underlying transport of heat as modulated by the degree of thermal asymmetry. Our results also show that the variation of the Nusselt number exhibits an unbounded swing for some values of the Brinkman number and degrees of asymmetrical wall heating. We explain the appearance of this unbounded swing in the variation of the Nusselt number from the energy balance in the flow field as well as from the second law of thermodynamics. We believe that the insights obtained from the present analysis may improve the design of micro-rotating devices/systems.

]]>Entropy doi: 10.3390/e20010044

Authors: François Ternet Hasna Louahlia-Gualous Stéphane Le Masson

Miniature heat pipes are considered an innovative solution able to dissipate high heat loads with a low working-fluid fill charge, provide automatic temperature control, and operate with minimum energy consumption and low noise levels. A theoretical analysis of heat pipe thermal performance using deionized water or n-pentane as the working fluid has been carried out. An analysis of the maximum heat transport and the capillary limitation is conducted for three microgroove cross sections: rectangular, triangular, and trapezoidal. The effect of microgroove height and width, effective length, trapezoidal microgroove inclination angle, and microgroove shape on heat pipe performance is analysed. Theoretical and experimental investigations of the heat pipes’ heat transport limitations and thermal resistances are conducted.

]]>Entropy doi: 10.3390/e20010048

Authors: Namyong Kim

Minimization of the Euclidean distance between the output distribution and Dirac delta functions, used as a performance criterion, is known to match the distribution of the system output with the delta functions. Analyzing the algorithm developed from this criterion with recursive gradient estimation, this paper reveals that the minimization process of the cost function involves two gradients with different functions: one forces the spreading of output samples, and the other compels output samples to move close to the symbol points. To investigate these two functions, each gradient is controlled separately through individual normalization with its related input. From the analysis and experimental results, it is verified that one gradient is associated with accelerating the initial convergence speed by spreading output samples, while the other is related to lowering the minimum mean squared error (MSE) by pulling error samples close together.

]]>Entropy doi: 10.3390/e20010045

Authors: Ignacio Santamaria Pedro Crespo Christian Lameiro Peter Schreier

Non-circular or improper Gaussian signaling has proven beneficial in several interference-limited wireless networks. However, all implementable coding schemes are based on finite discrete constellations rather than Gaussian signals. In this paper, we propose a new family of improper constellations generated by widely linear processing of a square M-QAM (quadrature amplitude modulation) signal. This family of discrete constellations is parameterized by the circularity coefficient κ and a phase ϕ. For uncoded communication systems, this phase should be optimized as ϕ*(κ) to maximize the minimum Euclidean distance between points of the improper constellation, thereby minimizing the bit error rate (BER). For the more relevant case of coded communications, where the coded symbols are constrained to lie in this family of improper constellations using ϕ*(κ), it is shown theoretically and further corroborated by simulations that, except for a shaping loss of 1.53 dB encountered at high signal-to-noise ratio (SNR), there is no rate loss with respect to the improper Gaussian capacity. In this sense, the proposed family of constellations can be viewed as the improper counterpart of the standard proper M-QAM constellations widely used in coded communication systems.
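
One standard widely linear construction with the stated properties (a sketch; the paper's exact parameterization of the family may differ) maps a unit-power square M-QAM constellation s to x = e^{jϕ}(s·cos t + s*·sin t) with sin 2t = κ, which preserves unit average power and sets the circularity coefficient |E[x²]|/E[|x|²] exactly to κ:

```python
import numpy as np

def improper_constellation(M, kappa, phi=0.0):
    """Widely linear transform of a unit-power square M-QAM constellation
    achieving circularity coefficient kappa and phase phi."""
    m = int(np.sqrt(M))
    levels = 2.0 * np.arange(m) - (m - 1)       # e.g. [-3, -1, 1, 3] for 16-QAM
    s = (levels[:, None] + 1j * levels[None, :]).ravel()
    s /= np.sqrt(np.mean(np.abs(s) ** 2))       # normalize to unit average power
    t = 0.5 * np.arcsin(kappa)                  # so that sin(2t) = kappa
    return np.exp(1j * phi) * (np.cos(t) * s + np.sin(t) * np.conj(s))

def circularity_coefficient(x):
    """|E[x^2]| / E[|x|^2] over the constellation points."""
    return abs(np.mean(x ** 2)) / np.mean(np.abs(x) ** 2)

x = improper_constellation(16, kappa=0.5, phi=0.3)
```

Because E[s²] = 0 exactly for a square QAM grid, the resulting circularity coefficient is exact rather than approximate, and κ = 0 recovers the original proper constellation.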

]]>Entropy doi: 10.3390/e20010049

Authors: Tzu-Kang Lin Ana Laínez

In this paper, a structural health monitoring (SHM) system based on multi-scale cross-sample entropy (MSCE) is proposed for detecting damage locations in multi-bay three-dimensional structures. The location of damage is evaluated for each bay through MSCE analysis by examining the degree of dissimilarity between the response signals of vertically-adjacent floors. Subsequently, the results are quantified using a damage index (DI). The performance of the proposed SHM system was evaluated in this study through a finite element analysis of a multi-bay seven-story structure. The results revealed that the SHM system successfully detected the damaged floors and their respective directions in several cases. The proposed system also provides a preliminary assessment of which bay has been more severely affected. The effectiveness and high potential of the SHM system for locating damage in large and complex structures rapidly and at low cost are thus demonstrated.

]]>Entropy doi: 10.3390/e20010046

Authors: Pablo Buenestado Leonardo Acho

Image segmentation is the partitioning of an image into homogeneous regions, transforming it into a representation that is more meaningful and easier to analyze. Although several segmentation approaches have been proposed recently, in this paper we develop a new image segmentation method based on the statistical confidence interval tool along with the well-known Otsu algorithm. According to our numerical experiments, our method performs differently from the standard Otsu algorithm, especially when processing images perturbed by speckle noise. In fact, the entropy contribution of the speckle noise is almost entirely filtered out by our algorithm. Furthermore, our approach is validated on several image samples.
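
For reference, the classical Otsu step this method builds on selects the threshold maximizing the between-class variance of the intensity histogram; a minimal sketch follows (the paper's confidence-interval stage is not reproduced here, and the bimodal sample is illustrative):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Classical Otsu: threshold maximizing the between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * centers)    # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.argmax(np.nan_to_num(sigma_b))]

# Bimodal intensity sample: two populations separated by a gap.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(50, 5, 1000), rng.normal(200, 5, 1000)])
t_opt = otsu_threshold(sample)     # lands between the two modes
```

The confidence-interval variant in the paper modifies how this threshold is applied under speckle noise, but the underlying between-class variance criterion is the same.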

]]>Entropy doi: 10.3390/e20010033

Authors: Gabriel Martos Nicolás Hernández Alberto Muñoz Javier Moguerza

We propose a definition of entropy for stochastic processes. We provide a reproducing kernel Hilbert space model to estimate entropy from a random sample of realizations of a stochastic process, namely functional data, and introduce two approaches to estimate minimum entropy sets. These sets are relevant to detect anomalous or outlier functional data. A numerical experiment illustrates the performance of the proposed method; in addition, we conduct an analysis of mortality rate curves as an interesting application in a real-data context to explore functional anomaly detection.

]]>Entropy doi: 10.3390/e20010028

Authors: Rosalío Rodríguez Elizabeth Salinas-Rodríguez Jorge Fujioka

We calculate the transverse velocity fluctuation correlation function of a linear and homogeneous viscoelastic liquid by using a generalized Langevin equation (GLE) approach. We consider a long-ranged (power-law) viscoelastic memory and a noise with a long-range (power-law) auto-correlation. We first evaluate the transverse velocity fluctuation correlation function for conventional time derivatives, Ĉ_NF(k⃗, t), and then introduce time fractional derivatives in the equations of motion and calculate the corresponding fractional correlation function. We find that the magnitude of the fractional correlation Ĉ_F(k⃗, t) is always lower than the non-fractional one and decays more rapidly. The relationship between the fractional loss modulus G″_F(ω) and Ĉ_F(k⃗, t) is also calculated analytically. The difference between the values of G″(ω) for two specific viscoelastic fluids is quantified. Our model calculation shows that the fractional effects on this measurable quantity may be up to three times larger than its non-fractional value. The fact that the dynamic shear modulus is related to the light scattering spectrum suggests that measurement of this property might be a suitable test to assess the effects of temporal fractional derivatives on a measurable property. Finally, we summarize the main results of our approach and emphasize that the eventual validity of our model calculations can only come from experimentation.

]]>Entropy doi: 10.3390/e20010040

Authors: Max Trostel Moses Misplon Andrés Aragoneses Arjendu Pattanayak

The driven double-well Duffing oscillator is a well-studied system that manifests a wide variety of dynamics, from periodic behavior to chaos, and describes a diverse array of physical systems. It has been shown to be relevant in understanding chaos in the classical-to-quantum transition. Here we explore the complexity of its dynamics in the classical and semi-classical regimes, using the technique of ordinal pattern analysis. This is of particular relevance to potential experiments in the semi-classical regime. We unveil different dynamical regimes within the chaotic range, which cannot be detected with more traditional statistical tools. These regimes are characterized by different hierarchies and probabilities of the ordinal patterns. A correlation between the Lyapunov exponent and the permutation entropy is revealed, which leads us to interpret dips in the Lyapunov exponent as transitions in the dynamics of the system.
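
Ordinal pattern analysis reduces a series to the relative ordering of consecutive values; the permutation entropy over those patterns can be sketched as follows (order 3 is a common illustrative choice, not necessarily the one used in the paper):

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    """Normalized permutation entropy from ordinal patterns of length `order`."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))  # 0 = predictable, 1 = random

random.seed(0)
pe_ramp = permutation_entropy(range(1000))                       # one pattern only
pe_noise = permutation_entropy([random.random() for _ in range(10_000)])
```

It is the hierarchy of individual pattern probabilities, rather than this single summary number, that distinguishes the dynamical regimes within the chaotic range.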

]]>Entropy doi: 10.3390/e20010037

Authors: Qun Song Simon Fong Suash Deb Thomas Hanne

Nowadays, swarm intelligence algorithms are becoming increasingly popular for solving many optimization problems. The Wolf Search Algorithm (WSA) is a contemporary semi-swarm intelligence algorithm designed to solve complex optimization problems and has demonstrated its capability, especially for large-scale problems. However, it still inherits a weakness common to other swarm intelligence algorithms: its performance is heavily dependent on the chosen values of the control parameters. In 2016, we published the Self-Adaptive Wolf Search Algorithm (SAWSA), which offers a simple solution to the adaptation problem. As a very simple schema, the original SAWSA adaptation is based on random guesses, which is unstable and naive. In this paper, based on the SAWSA, we investigate the WSA search behaviour more deeply. A new parameter-guided updater, a Gaussian-guided parameter control mechanism based on information entropy theory, is proposed as an enhancement of the SAWSA. The heuristic updating function is improved. Simulation experiments for the new method, denoted as the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA), validate its increased performance in comparison to the standard version of WSA and other prevalent swarm algorithms.

]]>Entropy doi: 10.3390/e20010042

Authors: Hellinton Takada Julio Stern Oswaldo Costa Celma Ribeiro

There are several electricity generation technologies based on different sources such as wind, biomass, gas, coal, and so on. The consideration of the uncertainties associated with the future costs of such technologies is crucial for planning purposes. In the literature, the allocation of resources among the available technologies has been solved as a mean-variance optimization problem assuming knowledge of the expected values and the covariance matrix of the costs. In practice, however, these parameters are not exactly known. Consequently, the optimal allocations obtained from mean-variance optimization are not robust to possible estimation errors in these parameters. Additionally, it is usual to have electricity generation technology specialists participating in the planning process and, obviously, the consideration of useful prior information based on their previous experience is of utmost importance. Bayesian models consider not only the uncertainty in the parameters, but also the prior information from the specialists. In this paper, we introduce the classical-equivalent Bayesian mean-variance optimization to solve the electricity generation planning problem using both improper and proper prior distributions for the parameters. In order to illustrate our approach, we present an application comparing the classical-equivalent Bayesian with the naive mean-variance optimal portfolios.
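
As a baseline for the classical mean-variance step, a sketch with a generic risk-aversion formulation (the paper's exact problem statement and its Bayesian treatment are not reproduced here): minimizing wᵀμ + (λ/2)wᵀΣw over allocations summing to one has the closed form below.

```python
import numpy as np

def mean_variance_allocation(mu, Sigma, lam):
    """Allocation minimizing  w'mu + (lam/2) w'Sigma w  s.t.  sum(w) = 1.
    mu: expected technology costs; Sigma: cost covariance; lam: risk aversion."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    # Lagrange multiplier enforcing the budget constraint sum(w) = 1.
    nu = (lam + ones @ inv @ mu) / (ones @ inv @ ones)
    return inv @ (nu * ones - mu) / lam

w_equal = mean_variance_allocation(np.array([1.0, 1.0, 1.0]), np.eye(3), 2.0)
w_mixed = mean_variance_allocation(np.array([1.0, 2.0, 1.5]), np.eye(3), 2.0)
```

With identical expected costs the allocation is uniform; otherwise cheaper technologies receive more weight. A Bayesian treatment replaces μ and Σ with posterior quantities that combine the data with the specialists' priors, which is precisely what makes the resulting allocation more robust to estimation error.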

]]>Entropy doi: 10.3390/e20010041

Authors: Emily Adlam

Since the discovery of Bell’s theorem, the physics community has come to take seriously the possibility that the universe might contain physical processes which are spatially nonlocal, but there has been no such revolution with regard to the possibility of temporally nonlocal processes. In this article, we argue that the assumption of temporal locality is actively limiting progress in the field of quantum foundations. We investigate the origins of the assumption, arguing that it has arisen for historical and pragmatic reasons rather than good scientific ones, then explain why temporal locality is in tension with relativity and review some recent results which cast doubt on its validity.

]]>Entropy doi: 10.3390/e20010038

Authors: Tue Vu Ashok Mishra Goutam Konapala

Understanding the teleconnections between hydro-meteorological data and the El Niño–Southern Oscillation (ENSO) cycle is an important step towards developing flood early warning systems. In this study, the concept of mutual information (MI) was applied using marginal and joint information entropy to quantify the linear and non-linear relationships between annual streamflow and extreme precipitation indices over the Mekong river basin, and ENSO. We primarily used the Pearson correlation as a linear association metric for comparison with mutual information. The analysis was performed at four hydro-meteorological stations located on the mainstream of the Mekong river basin. It was observed that the nonlinear correlation information between the large-scale climate index and local hydro-meteorological data is comparatively higher than the traditional linear correlation information. The spatial analysis was carried out using all the grid points in the river basin, which suggests a spatial dependence structure between precipitation extremes and ENSO. Overall, this study suggests that the mutual information approach can detect more meaningful connections between large-scale climate indices and hydro-meteorological variables at different spatio-temporal scales. The nonlinear mutual information metric can be an efficient tool for better understanding the dynamics of hydro-climatic variables, leading to improved climate-informed adaptation strategies.
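
A minimal histogram (plug-in) estimate of MI, contrasted with the Pearson correlation on a purely nonlinear dependence, illustrates why MI can detect connections that the linear metric misses (the bin count and synthetic data are illustrative, not the study's):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information, in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x ** 2 + 0.1 * rng.normal(size=5000)   # nonlinear, linearly uncorrelated
mi_dep = mutual_information(x, y)
mi_ind = mutual_information(x, rng.normal(size=5000))
pearson = np.corrcoef(x, y)[0, 1]
```

The Pearson correlation stays near zero for this dependence, while MI clearly separates it from the independent baseline, which is the behaviour exploited in the teleconnection analysis.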

]]>Entropy doi: 10.3390/e20010039

Authors: Marc-Olivier Renou Nicolas Gisin Florian Fröwis

Quantum measurements have intrinsic properties that seem incompatible with our everyday-life macroscopic measurements. Macroscopic Quantum Measurement (MQM) is a concept that aims at bridging the gap between well-understood microscopic quantum measurements and macroscopic classical measurements. In this paper, we focus on the task of estimating the polarization direction of a system of N spin-1/2 particles and investigate the model some of us proposed in Barnea et al., 2017. This model is based on a von Neumann pointer measurement, where each spin component of the system is coupled to one of the three spatial directions of a pointer. It shows traits of a classical measurement for an intermediate coupling strength. We investigate relaxations of the assumptions on the initial knowledge about the state and on the control over the MQM. We show that the model is robust with regard to these relaxations: it performs well for thermal states and under a lack of knowledge about the size of the system. Furthermore, a lack of control over the MQM can be compensated for by repeated “ultra-weak” measurements.

]]>Entropy doi: 10.3390/e20010034

Authors: Rodrigo Cofré Cesar Maldonado

The spiking activity of neuronal networks follows laws that are not time-reversal symmetric: the notions of pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations all have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework, in the context of maximum entropy models, to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics.
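
For a stationary Markov chain with transition matrix P and stationary distribution π, the entropy production rate takes the standard form Σ_{i,j} π_i P_{ij} log(π_i P_{ij} / π_j P_{ji}), vanishing exactly when detailed balance (time reversibility) holds. A sketch on toy chains (the paper derives this quantity for the maximum entropy chain inferred from spike data; the matrices below are illustrative):

```python
import numpy as np

def entropy_production_rate(P):
    """Entropy production rate (nats per step) of a stationary Markov chain.
    Assumes P[j, i] > 0 wherever P[i, j] > 0 (otherwise the rate diverges)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()               # stationary distribution of P
    ep = 0.0
    for i in range(len(pi)):
        for j in range(len(pi)):
            fwd, bwd = pi[i] * P[i, j], pi[j] * P[j, i]
            if fwd > 0.0 and bwd > 0.0:
                ep += fwd * np.log(fwd / bwd)
    return ep

# Reversible chain (detailed balance) vs. a biased cycle (irreversible).
ep_reversible = entropy_production_rate(np.array([[0.9, 0.1],
                                                  [0.1, 0.9]]))
ep_cyclic = entropy_production_rate(np.array([[0.0, 0.8, 0.2],
                                              [0.2, 0.0, 0.8],
                                              [0.8, 0.2, 0.0]]))
```

The biased cycle produces entropy at a strictly positive rate, the signature of time irreversibility the framework quantifies.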

]]>