Entropy doi: 10.3390/e20080613

Authors: Quentin Jacquet Eun-jin Kim Rainer Hollerbach

We report the time evolution of Probability Density Functions (PDFs) in a toy model of self-organised shear flows, where the formation of shear flows is induced by a finite memory time of a stochastic forcing, manifested by the emergence of a bimodal PDF with the two peaks representing non-zero mean values of a shear flow. Using theoretical analyses of limiting cases, as well as numerical solutions of the full Fokker–Planck equation, we present a thorough parameter study of PDFs for different values of the correlation time and amplitude of stochastic forcing. From time-dependent PDFs, we calculate the information length L, which is the total number of statistically different states that a system passes through in time, and utilise it to understand the information geometry associated with the formation of bimodal or unimodal PDFs. We identify the difference between the relaxation and build-up of the shear gradient in view of information change and discuss the total information length L∞ = L(t → ∞), which maps out the underlying attractor structures, highlighting a unique property of L∞, which depends on the trajectory/history of a PDF's evolution.
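For orientation, the information length used in this line of work is commonly defined from the time-dependent PDF p(x, t); a standard form from the information-geometry literature (sketched here for reference, not quoted from this abstract) is

```latex
\mathcal{E}(t) = \int \frac{1}{p(x,t)} \left( \frac{\partial p(x,t)}{\partial t} \right)^{2} dx,
\qquad
\mathcal{L}(t) = \int_{0}^{t} \sqrt{\mathcal{E}(t_{1})}\, dt_{1},
```

so that L∞ = L(t → ∞) accumulates the number of statistically distinguishable states the PDF traverses along its entire evolution.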

Entropy doi: 10.3390/e20080612

Authors: Mei Tao Kristina Poskuviene Nizar Faisal Alkayem Maosen Cao Minvydas Ragulskis

A novel visualization scheme for permutation entropy is presented in this paper. The proposed scheme is based on non-uniform attractor embedding of the investigated time series. A single digital image of permutation entropy is produced by averaging all possible plane projections of the permutation entropy measure in the multi-dimensional delay coordinate space. Computational experiments with artificially-generated and real-world time series are used to demonstrate the advantages of the proposed visualization scheme.
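The visualization scheme itself is not reproduced here, but the underlying measure is easy to sketch. The following is a minimal Bandt–Pompe permutation entropy in plain Python; the function name and the normalization choice are ours, not the paper's:

```python
import math

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D sequence."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = [x[i + j * delay] for j in range(order)]
        # ordinal pattern: the argsort of the values in the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))  # normalized to [0, 1]

# A monotone series realizes a single ordinal pattern, so its entropy is 0
print(permutation_entropy(list(range(100))) == 0.0)  # True
```

The paper's scheme evaluates this measure across many delay combinations of a non-uniform embedding and averages the projections into one image.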

Entropy doi: 10.3390/e20080611

Authors: Fuhe Yang Xingquan Shen Zhijian Wang

Under complicated conditions, the extraction of multiple faults in gearboxes is difficult to achieve; improper selection of methods usually leads to missed diagnosis or misdiagnosis. Ensemble Empirical Mode Decomposition (EEMD) often causes energy leakage due to improper selection of white noise during signal decomposition. Considering that only a single fault cycle can be extracted when MOMED (Multipoint Optimal Minimum Entropy Deconvolution) is used, it is necessary to perform sub-band processing of the compound fault signal. This paper presents an adaptive gearbox multi-fault-feature extraction method based on Improved MOMED (IMOMED). Firstly, EEMD decomposes the signal adaptively, and the intrinsic mode functions strongly correlated with the original signal are selected for FFT (Fast Fourier Transform) analysis; considering the mode-mixing phenomenon of EEMD, the intrinsic mode functions with the same timescale are reconstructed, yielding several same-scale intrinsic mode functions that enhance the entropy of the fault features. Since the original signal contains a lot of white noise, EEMD also improves its signal-to-noise ratio. Finally, fault features are extracted through MOMED by setting different noise-reduction intervals. The proposed method is compared with EEMD and VMD (Variational Mode Decomposition) to verify its feasibility.
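The IMF-selection step (keeping only intrinsic mode functions strongly correlated with the original signal) can be sketched generically; the threshold value below is our own illustrative choice, not taken from the paper:

```python
def pearson_corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def select_imfs(signal, imfs, threshold=0.3):
    """Keep the IMFs whose correlation with the original signal exceeds a threshold."""
    return [imf for imf in imfs if abs(pearson_corr(signal, imf)) >= threshold]
```

A dominant component of the signal passes the filter while a low-amplitude, weakly correlated component is discarded; the surviving IMFs are then passed on to the FFT and MOMED stages described above.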

Entropy doi: 10.3390/e20080610

Authors: Francisco Delgado

The gate array version of quantum computation uses logical gates adopting convenient forms for computational algorithms based on the algorithms of classical computation. Two-level quantum systems are the basic elements connecting the binary nature of classical computation with the settlement of quantum processing. Despite this, their design depends on specific quantum systems and the physical interactions involved, thus complicating the dynamics analysis. Predictable and controllable manipulation should be addressed in order to control the quantum states in terms of the physical control parameters. Resources are restricted to limitations imposed by the physical settlement. This work presents a formalism to decompose the quantum information dynamics in SU(2^{2d}) for 2d-partite two-level systems into 2^{2d−1} SU(2) quantum subsystems. It generates an easier and more direct physical implementation of quantum processing developments for qubits. Easy and traditional operations proposed by quantum computation are recovered for larger and more complex systems. Alternating the parameters of local and non-local interactions, the procedure states a universal exchange semantics on the basis of generalized Bell states. Although the main procedure could still be settled on other interaction architectures by the proper selection of the basis as natural grammar, the procedure can be understood as a momentary splitting of the 2d information channels into 2^{2d−1} pairs of two-level quantum information subsystems. Additionally, it is a settlement of the quantum information manipulation that is free of the restrictions imposed by the underlying physical system. Thus, the motivation of the decomposition is to easily set control procedures in order to generate large entangled states and to design specialized dedicated quantum gates. These are potential applications that properly bypass the general induced superposition generated by physical dynamics.

Entropy doi: 10.3390/e20080609

Authors: Akio Fujiwara Koichi Yamagata

Suppose that a d-dimensional Hilbert space H ≃ C^d admits a full set of mutually unbiased bases |1^(a)⟩, …, |d^(a)⟩, where a = 1, …, d + 1. A randomized quantum state tomography is a scheme for estimating an unknown quantum state on H through iterative applications of the measurements M^(a) = {|1^(a)⟩⟨1^(a)|, …, |d^(a)⟩⟨d^(a)|} for a = 1, …, d + 1, where the numbers of applications of these measurements are random variables. We show that the space of the resulting probability distributions enjoys a mutually orthogonal dualistic foliation structure, which provides us with a simple geometrical insight into the maximum likelihood method for the quantum state tomography.

Entropy doi: 10.3390/e20080608

Authors: Muhammad Khan Zaid Al-sahwi Yu-Ming Chu

The main purpose of this paper is to find new estimations for the Shannon and Zipf–Mandelbrot entropies. We apply some refinements of the Jensen inequality to obtain different bounds for these entropies. Initially, we use a precise convex function in the refinement of the Jensen inequality and then vary the weight and the domain of the function to obtain general bounds for the Shannon entropy (SE). As particular cases of these general bounds, we derive some bounds for the Shannon entropy (SE) which are, in fact, the applications of some other well-known refinements of the Jensen inequality. Finally, we derive different estimations for the Zipf–Mandelbrot entropy (ZME) by using the new bounds of the Shannon entropy for the Zipf–Mandelbrot law (ZML). We also discuss particular cases and the bounds related to two different parametrizations of the Zipf–Mandelbrot entropy. At the end of the paper, we give some applications in linguistics.
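As a reference point, the unrefined Jensen inequality and the classical entropy bound it yields (the paper's refinements sharpen exactly this kind of estimate) read

```latex
f\!\left( \sum_{i=1}^{n} p_i x_i \right) \le \sum_{i=1}^{n} p_i f(x_i)
\quad (f \text{ convex}),
\qquad
S(\mathbf{p}) = \sum_{i=1}^{n} p_i \log \frac{1}{p_i} \le \log n,
```

where the entropy bound follows by applying the reversed (concave) inequality to f(x) = log x with x_i = 1/p_i.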

Entropy doi: 10.3390/e20080607

Authors: Bahaaudin Mohammadnoor Raffah Kamal Berrada

We develop a useful model considering an atom-field system interaction in the framework of pseudoharmonic oscillators. We examine qualitatively the different physical quantities for a two-level atom (TLA) system interacting with a quantized coherent field in the context of photon-added coherent states of pseudoharmonic oscillators. Using these coherent states, we solve the model that exhibits the interaction between the TLA and field associated with these kinds of potentials. We analyze the temporal evolution of the entanglement, statistical properties, geometric phase and squeezing entropies. Finally, we show the relationship between the physical quantities and their dynamics in terms of the physical parameters.

Entropy doi: 10.3390/e20080606

Authors: G. Paraoanu

The pigeonhole principle upholds the idea that by ascribing to three different particles either one of two properties, we necessarily end up in a situation when at least two of the particles have the same property. In quantum physics, this principle is violated in experiments involving postselection of the particles in appropriately-chosen states. Here, we give two explicit constructions using standard gates and measurements that illustrate this fact. Intriguingly, the procedures described are manifestly non-local, which demonstrates that the correlations needed to observe the violation of this principle can be created without direct interactions between particles.

Entropy doi: 10.3390/e20080605

Authors: Hector Zenil Santiago Hernández-Orozco Narsis Kiani Fernando Soler-Toscano Antonio Rueda-Toicen Jesper Tegnér

We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity, but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects beyond strings, such as arrays and tensors. To test the measure, we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low-algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages (Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell) and an online algorithmic complexity calculator.
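The decomposition-plus-lookup strategy can be sketched concretely. In published BDM work, the estimate of a decomposed object is the sum of the CTM values of its distinct blocks plus log2 of each block's multiplicity; the lookup table below is a toy stand-in for the real CTM tables, which come from exhaustive enumeration of small Turing machines:

```python
import math

def bdm(string, ctm_lookup, block_size=12):
    """Block Decomposition Method sketch: partition the string into blocks,
    then sum CTM(block) + log2(multiplicity) over the distinct blocks.
    (Handling leftover partial blocks is one of the boundary conditions
    studied in the paper; the demo below uses an exact multiple.)"""
    blocks = [string[i:i + block_size] for i in range(0, len(string), block_size)]
    counts = {}
    for b in blocks:
        counts[b] = counts.get(b, 0) + 1
    return sum(ctm_lookup[b] + math.log2(n) for b, n in counts.items())

# Toy CTM values, for illustration only
toy_ctm = {"000000000000": 3.0, "010101010101": 5.0, "011010110001": 12.0}
# A repetitive string costs little more than one copy of its block:
print(bdm("000000000000" * 8, toy_ctm, 12))  # 3.0 + log2(8) = 6.0
```

This is why BDM degrades gracefully toward Shannon entropy: when the blocks carry no shared algorithmic structure, only the multiplicity terms remain informative.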

Entropy doi: 10.3390/e20080604

Authors: Wei He Yigang He Bing Li Chaolong Zhang

In this paper, a novel method combining a cross-wavelet singular entropy (XWSE)-based feature extractor and a support vector machine (SVM) is proposed for analog circuit fault diagnosis. First, the cross-wavelet transform (XWT), which possesses a good capability to restrain environmental noise, is applied to transform the fault signal into time-frequency spectra (TFS). Then, a simple segmentation method is utilized to decompose the TFS into several blocks. We employ singular value decomposition (SVD) to analyze the blocks, and the Tsallis entropy of each block is obtained to construct the original features. Subsequently, the features are imported into parametric t-distributed stochastic neighbor embedding (t-SNE) for dimension reduction to yield discriminative and concise fault characteristics. Finally, the fault characteristics are entered into the SVM classifier to locate circuit defects, with the free parameters of the SVM determined by quantum-behaved particle swarm optimization (QPSO). Simulation results show that the proposed approach achieves superior diagnostic performance compared with other existing methods.

Entropy doi: 10.3390/e20080603

Authors: Chao Tian

We illustrate how computer-aided methods can be used to investigate the fundamental limits of caching systems, which are significantly different from the conventional analytical approach usually seen in the information theory literature. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a certain partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed through strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us firstly to deduce generic characteristics of the converse proof, and secondly to compute outer bounds for larger problem cases, despite the seemingly impossible computation scale.

Entropy doi: 10.3390/e20080602

Authors: Xiaolong Zhu Jinde Zheng Haiyang Pan Jiahan Bao Yifang Zhang

Multiscale entropy (MSE), as a complexity measurement method for time series, has been widely used to extract the fault information hidden in machinery vibration signals. However, the insufficient coarse graining in MSE will result in missing fault pattern information, and the sample entropy used in MSE will fluctuate heavily at larger scale factors. Combining fractal theory and fuzzy entropy, the time shift multiscale fuzzy entropy (TSMFE) is put forward and applied to the complexity analysis of time series to enhance the performance of MSE. TSMFE is then used to extract the nonlinear fault features from vibration signals of rolling bearings. By combining TSMFE with the Laplacian support vector machine (LapSVM), which needs only very few labelled samples for classification training, a new intelligent fault diagnosis method for rolling bearings is proposed. The proposed method is applied to the analysis of experimental rolling bearing data and compared with existing methods; the results show that it can effectively identify different states of rolling bearings and achieves the highest recognition rate among the compared methods.
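The contrast between the two graining schemes is easy to show in code. This is our own illustration (the fuzzy entropy computed on the grained series is omitted): classical MSE averages non-overlapping windows, which discards samples, while time-shift graining keeps every sample by forming the interleaved subsequences:

```python
def coarse_grain(x, scale):
    """Classical MSE coarse-graining: means of non-overlapping windows."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def time_shift_series(x, scale):
    """Time-shift graining: the `scale` interleaved subsequences
    x[k], x[k+scale], x[k+2*scale], ... for k = 0..scale-1."""
    return [x[k::scale] for k in range(scale)]

x = list(range(12))
print(coarse_grain(x, 3))       # [1.0, 4.0, 7.0, 10.0]
print(time_shift_series(x, 3))  # [[0, 3, 6, 9], [1, 4, 7, 10], [2, 5, 8, 11]]
```

In TSMFE, a fuzzy entropy value is computed on each time-shifted subsequence and the values are aggregated per scale, instead of computing one entropy on the single shortened coarse-grained series.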

Entropy doi: 10.3390/e20080601

Authors: Paul Darscheid Anneli Guthke Uwe Ehret

When constructing discrete (binned) distributions from samples of a data set, applications exist where it is desirable to assure that all bins of the sample distribution have nonzero probability. This is the case, for example, if the sample distribution is part of a predictive model for which we require a response over the entire codomain, or if we use Kullback–Leibler divergence to measure the (dis-)agreement of the sample distribution and the original distribution of the variable, which, in the described case, is inconveniently infinite. Several sample-based distribution estimators exist which assure nonzero bin probability, such as adding one counter to each zero-probability bin of the sample histogram, adding a small probability to the sample pdf, smoothing methods such as kernel-density smoothing, or Bayesian approaches based on the Dirichlet and multinomial distributions. Here, we suggest and test an approach based on the Clopper–Pearson method, which makes use of the binomial distribution. Based on the sample distribution, confidence intervals for the bin-occupation probability are calculated. The mean of each confidence interval is a strictly positive estimator of the true bin-occupation probability and is convergent with increasing sample size. For small samples, it converges towards a uniform distribution, i.e., the method effectively applies a maximum entropy approach. We apply this nonzero method and four alternative sample-based distribution estimators to a range of typical distributions (uniform, Dirac, normal, multimodal, and irregular) and measure the effect with Kullback–Leibler divergence. While the performance of each method strongly depends on the distribution type it is applied to, on average, and especially for small sample sizes, the nonzero method, the simple "add one counter" method, and the Bayesian Dirichlet-multinomial model show very similar behavior and perform best. We conclude that, when estimating distributions without an a priori idea of their shape, applying one of these methods is favorable.
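The simplest of the estimators mentioned, the "add one counter" method, and its effect on the Kullback–Leibler divergence can be sketched directly (the Clopper–Pearson estimator itself requires a beta quantile function and is not reproduced here):

```python
import math

def add_one_estimator(counts):
    """'Add one counter' estimator: every bin gets +1, then normalize."""
    total = sum(counts) + len(counts)
    return [(c + 1) / total for c in counts]

def kl_divergence(p, q):
    """KL(p || q); finite whenever q has no zero bins where p is positive."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

true_p = [0.5, 0.3, 0.2, 0.0]
sample_counts = [6, 3, 1, 0]          # small sample: last bin unobserved
q = add_one_estimator(sample_counts)  # all bins strictly positive
print(q)                              # [0.5, 0.2857..., 0.1428..., 0.0714...]
print(kl_divergence(true_p, q))       # finite, unlike with the raw histogram
```

The raw histogram [0.6, 0.3, 0.1, 0.0] would make KL(q_estimate || histogram) infinite for any estimate that occupies the last bin; the added counters remove that singularity, at the cost of biasing small samples toward uniformity, which is exactly the maximum entropy behavior the abstract describes for the Clopper–Pearson variant as well.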

Entropy doi: 10.3390/e20080600

Authors: Lei Lei Kun She

Recently, the accuracy of voice authentication systems has increased significantly due to the successful application of the identity vector (i-vector) model. This paper proposes a new method for i-vector extraction. In the method, a perceptual wavelet packet transform (PWPT) is designed to convert speech utterances into wavelet entropy feature vectors, and a Convolutional Neural Network (CNN) is designed to estimate the frame posteriors of the wavelet entropy feature vectors. In the end, the i-vector is extracted based on those frame posteriors. The TIMIT and VoxCeleb speech corpora are used for experiments, and the experimental results show that the proposed method extracts appropriate i-vectors, which reduces the equal error rate (EER) and improves the accuracy of voice authentication systems in clean and noisy environments.

Entropy doi: 10.3390/e20080599

Authors: Sarah Marzen

Causal states are minimal sufficient statistics of prediction of a stochastic process, their coding cost is called statistical complexity, and the implied causal structure yields a sense of the process' "intrinsic computation". We discuss how statistical complexity changes with slight changes to the underlying model, in this case a biologically-motivated dynamical model, that of a Monod-Wyman-Changeux molecule. Perturbations to kinetic rates cause statistical complexity to jump from finite to infinite. The same is not true for excess entropy, the mutual information between past and future, or for the molecule's transfer function. We discuss the implications of this for the relationship between intrinsic and functional computation of biological sensory systems.

Entropy doi: 10.3390/e20080598

Authors: Huijuan Cui Bellie Sivakumar Vijay Singh

n/a

Entropy doi: 10.3390/e20080597

Authors: Pouria Ahmadi Behnaz Rezaie

n/a

Entropy doi: 10.3390/e20080596

Authors: D. R. Michiel Renger

In a previous work we devised a framework to derive generalised gradient systems for an evolution equation from the large deviations of an underlying microscopic system, in the spirit of the Onsager–Machlup relations. Of particular interest is the case where the microscopic system consists of random particles, and the macroscopic quantity is the empirical measure or concentration. In this work we take the particle flux as the macroscopic quantity, which is related to the concentration via a continuity equation. By a similar argument the large deviations can induce a generalised gradient or GENERIC system in the space of fluxes. In a general setting we study how flux gradient or GENERIC systems are related to gradient systems of concentrations. This shows that many gradient or GENERIC systems arise from an underlying gradient or GENERIC system where fluxes rather than densities are being driven by (free) energies. The arguments are explained by the example of reacting particle systems, which is later expanded to include spatial diffusion as well.

Entropy doi: 10.3390/e20080595

Authors: Niccolò Giannetti Seiichi Yamaguchi Andrea Rocchetti Kiyoshi Saito

A new general thermodynamic mapping of desiccant systems' performance is conducted to estimate the potentiality and determine the proper application field of the technology. This targets certain room conditions and given outdoor temperature and humidity prior to the selection of the specific desiccant material and technical details of the system configuration. This allows the choice of the operative state of the system to be independent from the limitations of the specific design and working fluid. An expression of the entropy balance suitable for describing the operability of a desiccant system at steady state is obtained by applying a control volume approach, defining sensible and latent effectiveness parameters, and assuming ideal gas behaviour of the air-vapour mixture. This formulation, together with mass and energy balances, is used to conduct a general screening of the system performance. The theoretical advantage and limitation of desiccant dehumidification air conditioning, maximum efficiency for given conditions constraints, least irreversible configuration for a given operative target, and characteristics of the system for a target efficiency can be obtained from this thermodynamic mapping. Once the thermo-physical properties and the thermodynamic equilibrium relationship of the liquid desiccant mixture or solid coating material are known, this method can be applied to a specific technical case to select the most appropriate working medium and guide the specific system design to achieve the target performance.

Entropy doi: 10.3390/e20080594

Authors: Liang Yan Xiaojun Duan Bowen Liu Jin Xu

Bayesian optimization (BO) based on the Gaussian process (GP) surrogate model has attracted extensive attention in the field of optimization and design of experiments (DoE). It usually faces two problems: the unstable GP prediction due to the ill-conditioned Gram matrix of the kernel, and the difficulty of determining the trade-off parameter between exploitation and exploration. To solve these problems, we investigate the K-optimality, aiming at minimizing the condition number. Firstly, the Sequentially Bayesian K-optimal design (SBKO) is proposed to ensure the stability of the GP prediction, where the K-optimality is given as the acquisition function. We show that the SBKO reduces the integrated posterior variance and maximizes the hyper-parameters' information gain simultaneously. Secondly, a K-optimal enhanced Bayesian Optimization (KO-BO) approach is given for the optimization problems, where the K-optimality is used to define the trade-off balance parameters, which can be output automatically. Specifically, we focus our study on the K-optimal enhanced Expected Improvement algorithm (KO-EI). Numerical examples show that the SBKO generally outperforms the Monte Carlo, Latin hypercube sampling, and sequential DoE approaches by minimizing the posterior variance and achieving the highest precision of prediction. Furthermore, the study of the optimization problem shows that the KO-EI method beats the classical EI method due to its higher convergence rate and smaller variance.
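For reference, the classical Expected Improvement acquisition that KO-EI builds on can be sketched as follows. This is the textbook EI for maximization, not the paper's K-optimal variant; the trade-off parameter `xi` is what the paper proposes to set automatically via K-optimality:

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best, xi=0.01):
    """Classical EI from the GP posterior (mu, sigma) at a candidate point;
    xi is the exploitation/exploration trade-off parameter."""
    if sigma == 0:
        return 0.0
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm_cdf(z) + sigma * norm_pdf(z)

# At equal posterior mean, a point with higher uncertainty scores higher:
print(expected_improvement(1.0, 0.5, best=1.0) <
      expected_improvement(1.0, 1.0, best=1.0))  # True
```

The sensitivity of EI to `xi` is exactly the difficulty the abstract names; KO-EI replaces the manual choice with a condition-number-driven one.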

Entropy doi: 10.3390/e20080593

Authors: Frank Lad Giuseppe Sanfilippo Gianna Agrò

The refinement axiom for entropy has been provocative in providing foundations of information theory, recognised as thoughtworthy in the writings of both Shannon and Jaynes. A resolution to their concerns has been provided recently by the discovery that the entropy measure of a probability distribution has a dual measure, a complementary companion designated as "extropy". We report here the main results that identify this fact, specifying the dual equations and exhibiting some of their structure. The duality extends beyond a simple assessment of entropy, to the formulation of relative entropy and the Kullback symmetric distance between two forecasting distributions. This is defined by the sum of a pair of directed divergences. Examining the defining equation, we notice that this symmetric measure can be generated by two other explicable pairs of functions as well, neither of which is a Bregman divergence. The Kullback information complex is constituted by the symmetric measure of entropy/extropy along with one of each of these three function pairs. It is intimately related to the total logarithmic score of two distinct forecasting distributions for a quantity under consideration, this being a complete proper score. The information complex is isomorphic to the expectations that the two forecasting distributions assess for their achieved scores, each for its own score and for the score achieved by the other. Analysis of the scoring problem exposes a Pareto optimal exchange of the forecasters' scores that both are willing to engage in. Both would support its evaluation for assessing the relative quality of the information they provide regarding the observation of an unknown quantity of interest. We present our results without proofs, as these appear in source articles that are referenced. The focus here is on their content, unhindered. The mathematical syntax of probability we employ relies upon the operational subjective constructions of Bruno de Finetti.
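The dual pair itself is compact enough to state in code. A sketch using the published definition of extropy, J(p) = −Σ (1 − p_i) log(1 − p_i), with variable names of our choosing:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def extropy(p):
    """Extropy, the dual measure: J(p) = -sum (1 - p_i) log(1 - p_i)."""
    return -sum((1 - pi) * math.log(1 - pi) for pi in p if pi < 1)

# For a two-outcome distribution, the complements swap the two terms,
# so entropy and extropy coincide:
p = [0.3, 0.7]
print(entropy(p), extropy(p))  # two (near-)identical values
```

For more than two outcomes the two measures differ, which is where the dual equations reported in the paper carry their content.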

Entropy doi: 10.3390/e20080592

Authors: Duco Veen Diederick Stoel Naomi Schalken Kees Mulder Rens van de Schoot

Experts' beliefs embody a present state of knowledge. It is desirable to take this knowledge into account when making decisions. However, ranking experts based on the merit of their beliefs is a difficult task. In this paper, we show how experts can be ranked based on their knowledge and their level of (un)certainty. By letting experts specify their knowledge in the form of a probability distribution, we can assess how accurately they can predict new data, and how appropriate their level of (un)certainty is. The expert's specified probability distribution can be seen as a prior in a Bayesian statistical setting. We evaluate these priors by extending an existing prior-data (dis)agreement measure, the Data Agreement Criterion, and compare this approach to using Bayes factors to assess prior specification. We compare experts with each other and the data to evaluate their appropriateness. Using this method, new research questions can be asked and answered, for instance: Which expert predicts the new data best? Is there agreement between my experts and the data? Which experts' representation is more valid or useful? Can we reach convergence between expert judgement and data? We provide an empirical example, ranking (regional) directors of a large financial institution based on their predictions of turnover.

Entropy doi: 10.3390/e20080591

Authors: Beatriz Chicote Unai Irusta Elisabete Aramendi Raúl Alcaraz José Joaquín Rieta Iraia Isasi Daniel Alonso María del Mar Baqueriza Karlos Ibarguren

Optimal defibrillation timing guided by ventricular fibrillation (VF) waveform analysis would contribute to improved survival of out-of-hospital cardiac arrest (OHCA) patients by minimizing myocardial damage caused by futile defibrillation shocks and minimizing interruptions to cardiopulmonary resuscitation. Recently, fuzzy entropy (FuzzyEn) tailored to jointly measure VF amplitude and regularity has been shown to be an efficient defibrillation success predictor. In this study, 734 shocks from 296 OHCA patients (50 survivors) were analyzed, and the embedding dimension (m) and matching tolerance (r) for FuzzyEn and sample entropy (SampEn) were adjusted to predict defibrillation success and patient survival. Entropies were significantly larger in successful shocks and in survivors, and when compared to the available methods, FuzzyEn presented the best prediction results, marginally outperforming SampEn. The sensitivity and specificity of FuzzyEn were 83.3% and 76.7% when predicting defibrillation success, and 83.7% and 73.5% for patient survival. Sensitivities and specificities were two points above those of the best available methods, and the prediction accuracy was kept even for VF intervals as short as 2 s. These results suggest that FuzzyEn and SampEn may be promising tools for optimizing the defibrillation time and predicting patient survival in OHCA patients presenting VF.
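SampEn, the baseline against which FuzzyEn is compared, can be sketched in a few lines (FuzzyEn replaces the hard tolerance threshold with an exponential membership function; the parameter choices below are illustrative, not those tuned in the study):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the ratio of (m+1)-length to m-length
    template matches (Chebyshev distance < r), self-matches excluded.
    Here r is an absolute tolerance; in practice it is usually scaled
    by the standard deviation of the series."""
    def count_matches(dim):
        templates = [x[i:i + dim] for i in range(len(x) - dim + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    count += 1
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic series is maximally predictable, so SampEn is near 0
periodic = [0.0, 1.0] * 30
print(sample_entropy(periodic))  # close to 0
```

Higher values indicate a more irregular waveform; the study's finding is that a fuzzy variant of exactly this ratio statistic, computed on short VF segments, separates successful from futile shocks.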

Entropy doi: 10.3390/e20080590

Authors: Khaled Daqrouq Mohammed Ajour

In this paper, we investigated the modeling of the pathological features of influenza in human speech. The presented work is a novel study based on a real database and a new combination of previously used methods: discrete wavelet transform (DWT) and linear prediction coding (LPC). Three verification system experiments, Normal/Influenza, Smokers/Influenza, and Normal/Smokers, were studied. To test the proposed pathological system, several classification scores were calculated for the recorded database, from which we can see that the proposed method achieved very high scores, particularly for the Normal/Influenza verification system. The performance of the proposed system was also compared with other published recognition systems. These experiments show that the proposed method is superior.

Entropy doi: 10.3390/e20080589

Authors: Dagmar Markechová Beloslav Riečan

This paper is concerned with the mathematical modelling of Tsallis entropy in product MV-algebra dynamical systems. We define the Tsallis entropy of order α, where α ∈ (0, 1) ∪ (1, ∞), of a partition in a product MV-algebra and its conditional version, and we examine their properties. Among other results, it is shown that the Tsallis entropy of order α, where α > 1, has the property of sub-additivity. This property allows us to define, for α > 1, the Tsallis entropy of a product MV-algebra dynamical system. It is proven that the proposed entropy measure is invariant under isomorphism of product MV-algebra dynamical systems.
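The sub-additivity claimed for α > 1 can be checked numerically for independent factors via the well-known pseudo-additivity of Tsallis entropy. This is a sketch for ordinary probability distributions, not for the product MV-algebra setting of the paper:

```python
def tsallis(p, alpha):
    """Tsallis entropy of order alpha (alpha != 1) of a discrete distribution."""
    return (1 - sum(pi ** alpha for pi in p)) / (alpha - 1)

def product(p, q):
    """Joint distribution of two independent random variables."""
    return [pi * qj for pi in p for qj in q]

p, q, alpha = [0.6, 0.4], [0.5, 0.3, 0.2], 2.0
joint = tsallis(product(p, q), alpha)
# Pseudo-additivity: T(XY) = T(X) + T(Y) + (1 - alpha) T(X) T(Y);
# for alpha > 1 the cross term is <= 0, which yields sub-additivity.
pseudo = tsallis(p, alpha) + tsallis(q, alpha) \
    + (1 - alpha) * tsallis(p, alpha) * tsallis(q, alpha)
print(abs(joint - pseudo) < 1e-12, joint <= tsallis(p, alpha) + tsallis(q, alpha))  # True True
```

For α ∈ (0, 1) the cross term changes sign, which is consistent with the paper restricting the dynamical-system definition to α > 1.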

Entropy doi: 10.3390/e20080588

Authors: Yi Shen Lin Chen

We investigate the distillability problem in quantum information in ℂ^d ⊗ ℂ^d. One case of the problem has been reduced to proving a matrix inequality when d = 4. We investigate the inequality for three families of non-normal matrices. We prove the inequality for the first two families with d = 4 and for the third family with d ≥ 5. We also present a sufficient condition for the fulfillment of the inequality with d = 4.

Entropy doi: 10.3390/e20080587

Authors: Dagmar Markechová Beloslav Riečan

This article deals with new concepts in a product MV-algebra, namely, the concepts of Rényi entropy and Rényi divergence. We define the Rényi entropy of order q of a partition in a product MV-algebra and its conditional version, and we study their properties. It is shown that the proposed concepts are consistent, in the case of the limit of q going to 1, with the Shannon entropy of partitions in a product MV-algebra defined and studied by Petrovičová (Soft Comput. 2000, 4, 41–44). Moreover, we introduce and study the notion of Rényi divergence in a product MV-algebra. It is proven that the Kullback–Leibler divergence of states on a given product MV-algebra introduced by Markechová and Riečan in (Entropy 2017, 19, 267) can be obtained as the limit of their Rényi divergence. In addition, the relationship between the Rényi entropy and the Rényi divergence, as well as the relationship between the Rényi divergence and the Kullback–Leibler divergence in a product MV-algebra, are examined.
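The limiting behaviour described above can be illustrated numerically for ordinary probability distributions (a sketch for orientation, not the product MV-algebra setting):

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q (q != 1)."""
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_divergence(p, r, q):
    """Rényi divergence of order q between distributions p and r."""
    return math.log(sum(pi ** q * ri ** (1 - q) for pi, ri in zip(p, r))) / (q - 1)

def kl(p, r):
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r) if pi > 0)

p, r = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
# As q -> 1, Rényi entropy -> Shannon entropy and Rényi divergence -> KL divergence
print(abs(renyi_entropy(p, 1.0001) - shannon(p)) < 1e-3)      # True
print(abs(renyi_divergence(p, r, 1.0001) - kl(p, r)) < 1e-3)  # True
```

The paper establishes the analogous limits for partitions and states in a product MV-algebra.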

Entropy doi: 10.3390/e20080586

Authors: Xin Wang Yi Zhang Kai Lu Xiaoping Wang Kai Liu

The isomorphism problem involves judging whether two graphs are topologically the same and producing a structure-preserving isomorphism mapping. It is widely used in various areas. Diverse algorithms have been proposed to solve this problem in polynomial time, with the help of quantum walks. Some of these algorithms, however, fail to find the isomorphism mapping. Moreover, most algorithms have very limited performance on regular graphs, which are generally difficult to deal with due to their symmetry. We propose IsoMarking to discover an isomorphism mapping effectively, based on the quantum walk, which is sensitive to topological structures. Firstly, IsoMarking marks vertices so that it can reduce the harmful influence of symmetry. Secondly, IsoMarking can ascertain whether the current candidate bijection is consistent with existing bijections and eventually obtains a qualified mapping. Thirdly, our experiments on 1585 pairs of graphs demonstrate that our algorithm performs significantly better on both ordinary graphs and regular graphs.

]]>Entropy doi: 10.3390/e20080585

Authors: Boris Kryzhanovsky Magomed Malsagov Iakov Karandashev

We analyze changes in the thermodynamic properties of a spin system when it passes from the classical two-dimensional Ising model to the spin glass model, where spin-spin interactions are random in their values and signs. Formally, the transition reduces to a gradual change in the amplitude of the multiplicative noise (distributed uniformly with a mean equal to one) superimposed over the initial Ising matrix of interacting spins. Accounting for the noise, we obtain analytical expressions that are valid for lattices of finite sizes. We compare our results with the results of computer simulations performed for square N = L &times; L lattices with linear dimensions ranging from L = 50 to L = 1000. We determine experimentally the dependences of the critical values (the critical temperature, the internal energy, entropy and the specific heat), as well as of the ground-state energy and magnetization, on the amplitude of the noise. We show that when the variance of the noise reaches one, the ground state jumps from the fully correlated state to an uncorrelated state and its magnetization jumps from 1 to 0. At the same time, a phase transition that is present at lower noise levels disappears.

]]>Entropy doi: 10.3390/e20080584

Authors: Milivoje M. Kostic

The nature of thermal phenomena is still elusive and sometimes misconstrued. Starting from Lavoisier, who presumed that caloric as a weightless substance is conserved, to Sadi Carnot, who erroneously assumed that work is extracted while caloric is conserved, to modern-day researchers who argue that thermal energy is an indistinguishable part of internal energy, to the generalization of entropy and challengers of the Second Law of thermodynamics, the relevant thermal concepts are critically discussed here. Original reflections about the nature of thermo-mechanical energy transfer, classical and generalized entropy, exergy, and the new entransy concept are reasoned and put in historical and contemporary contexts, with the objective of promoting further constructive debates and hopefully resolving some critical issues within the subtle thermal landscape.

]]>Entropy doi: 10.3390/e20080583

Authors: Song Cheng Jing Chen Lei Wang

We compare and contrast the statistical physics and quantum physics inspired approaches for unsupervised generative modeling of classical data. The two approaches represent probabilities of observed data using energy-based models and quantum states, respectively. Classical and quantum information patterns of the target datasets therefore provide principled guidelines for structural design and learning in these two approaches. Taking the Restricted Boltzmann Machine (RBM) as an example, we analyze the information theoretical bounds of the two approaches. We also estimate the classical mutual information of the standard MNIST dataset and the quantum R&eacute;nyi entropy of corresponding Matrix Product States (MPS) representations. Both information measures are much smaller than their theoretical upper bounds and exhibit similar patterns, implying a common inductive bias of low information complexity. By comparing the performance of RBMs with various architectures on the standard MNIST dataset, we find that RBMs with local sparse connections exhibit high learning efficiency, which supports the application of tensor network states in machine learning problems.

]]>Entropy doi: 10.3390/e20080582

Authors: Hui Yang Yikun Wei Zuchao Zhu Huashu Dou Yuehong Qian

Statistics of heat transfer in two-dimensional (2D) turbulent Rayleigh-B&eacute;nard (RB) convection for Pr = 6, 20, 100 and 10^6 are investigated using the lattice Boltzmann method (LBM). Our results reveal that the large-scale circulation is gradually broken up into small-scale plume structures as Pr increases; the large-scale circulation eventually disappears, and a great number of smaller thermal plumes rise and fall vertically between the bottom and top walls. The vertical motion of these plumes gradually becomes dominant with increasing Pr. In addition, our analysis shows that thermal dissipation is concentrated mainly in regions of high temperature gradient; the thermal dissipation rate &epsilon;&theta; increasingly dominates the thermal transport, while &epsilon;u becomes negligible as Pr increases. The kinematic viscous dissipation rate and the thermal dissipation rate gradually decrease with increasing Pr, and the energy spectrum also decreases significantly with increasing Pr. A range of linear scaling arises in the second-order velocity structure function, the temperature structure function, and the mixed (temperature-velocity) structure function. The extent of the linear scaling range and the second-order velocity structure function decrease with increasing Pr, which is qualitatively consistent with theoretical predictions.

]]>Entropy doi: 10.3390/e20080581

Authors: Taha Rajeh Ping Tu Hua Lin Houlei Zhang

A single-leaf type paddle heat exchanger with molten salt as the working fluid is a proper option in high-temperature heating processes of materials. In this paper, based on computational fluid dynamics (CFD) simulations, we present the thermo-fluid characteristics of high-temperature molten salt flowing in single-leaf type hollow paddles from the viewpoint of both the first and second laws of thermodynamics. The results show that the heat transfer rate of the hollow paddles is significantly greater than that of solid paddles. The penalty of the heat transfer enhancement is additional pressure drop and larger total irreversibility (i.e., total entropy generation rate). Increasing the volume of the fluid space helps to enhance the heat transfer, but there exists an upper limit. Hollow paddles are more favorable for heat transfer enhancement in designs with a larger paddle height, molten salt flow rate and material-side heat transfer coefficient. The diameter of the flow holes influences the pressure drop strongly, but their position is not important for heat transfer in the studied range. Other measures of modifying the fluid flow and heat transfer, such as internal baffles, more flow holes or multiple channels for small fluid volumes, are further discussed. With only a few baffles, the effect is limited. Additional flow holes noticeably reduce the pressure drop. For hollow paddles with a small fluid volume, it is possible to increase the heat transfer rate with more fluid channels. A trade-off among fluid flow, heat transfer and mechanical strength is necessary. The thermo-fluid characteristics revealed in this paper will provide guidance for practical designs.

]]>Entropy doi: 10.3390/e20080580

Authors: Martin Goethe Ignacio Fita J. Miguel Rubi

Popcoen is a method for configurational entropy estimation of proteins based on machine learning. Entropy is predicted with an artificial neural network that was trained on simulation trajectories of a large set of representative proteins. Popcoen is extremely fast compared to approaches based on sampling a multitude of microstates. Consequently, Popcoen can be incorporated into a large class of protein software that currently neglects configurational entropy for performance reasons. Here, we apply Popcoen to various conformations of the Cas4 protein SSO0001 of Sulfolobus solfataricus, a protein that assembles into a decamer of known toroidal shape. We provide numerical evidence that the native state (NAT) of an SSO0001 monomer has a structure similar to the protomers of the oligomer, where the NAT of the monomer is stabilized mainly entropically. Due to its large amount of configurational entropy, NAT has lower free energy than alternative conformations of very low enthalpy and solvation free energy. Hence, SSO0001 serves as an example case where neglecting configurational entropy leads to incorrect conclusions. Our results imply that no refolding of the subunits is required during oligomerization, which suggests that configurational entropy is employed by nature to largely enhance the rate of assembly.

]]>Entropy doi: 10.3390/e20080579

Authors: Samira Ahmadi Nariman Sepehri Christine Wu Tony Szturm

Sample entropy (SampEn) has been used to quantify the regularity or predictability of human gait signals. There are studies on the appropriate use of this measure for inter-stride spatio-temporal gait variables. However, the sensitivity of this measure to preprocessing of the signal and to variant values of template size (m), tolerance size (r), and sampling rate has not been studied when applied to &ldquo;whole&rdquo; gait signals. Whole gait signals are the entire time series data obtained from force or inertial sensors. This study systematically investigates the sensitivity of SampEn of the center of pressure displacement in the mediolateral direction (ML COP-D) to variant parameter values and two pre-processing methods. These two methods are filtering the high-frequency components and resampling the signals to have the same average number of data points per stride. The discriminatory ability of SampEn is studied by comparing treadmill walk only (WO) to dual-task (DT) condition. The results suggest that SampEn maintains the directional difference between two walking conditions across variant parameter values, showing a significant increase from WO to DT condition, especially when signals are low-pass filtered. Moreover, when gait speed is different between test conditions, signals should be low-pass filtered and resampled to have the same average number of data points per stride.

]]>Entropy doi: 10.3390/e20080578

Authors: Hai Zhong Yijun Wang Xudong Wang Qin Liao Xiaodong Wu Ying Guo

The scheme of the self-referenced continuous-variable quantum key distribution (SR CV-QKD) has been experimentally demonstrated. However, because of the finite dynamics of Alice&rsquo;s amplitude modulator, there will be an extra excess noise that is proportional to the amplitude of the reference pulse, while the maximal transmission distance of this scheme is positively correlated with the amplitude of the reference pulse. Therefore, there is a trade-off between the maximal transmission distance and the amplitude of the reference pulse. In this paper, we propose a scheme of SR CV-QKD with virtual photon subtraction, which neither requires a high-intensity reference pulse to improve the maximal transmission distance nor adds complex physical operations to the original self-referenced scheme. Compared to the original scheme, our simulation results show that a considerable extension of the maximal transmission distance can be obtained when using a weak reference pulse, especially for one-photon subtraction. We also find that our scheme is sensitive to the detector&rsquo;s electronic noise at the receiver: a longer maximal transmission distance can be achieved for lower electronic noise. Moreover, our scheme tolerates excess noise better than the original self-referenced scheme, which implies the advantage of using virtual photon subtraction to increase the maximal tolerable excess noise for distant users. These results suggest that our scheme can help move SR CV-QKD from the laboratory to practical metropolitan-area applications.

]]>Entropy doi: 10.3390/e20080577

Authors: Xiaojun Lu Jiaojuan Wang Xiang Li Mei Yang Xiangde Zhang

With the rapid development of information storage technology and the spread of the Internet, large-capacity image databases with diverse content are being generated, and it has become imperative to establish automatic and efficient image retrieval systems. This paper proposes a novel adaptive weighting method based on entropy theory and relevance feedback. Firstly, we obtain single-feature trust by relevance feedback (supervised) or entropy (unsupervised). Then, we construct a transfer matrix based on trust. Finally, based on the transfer matrix, we obtain the weight of each single feature through several iterations. The method has three outstanding advantages: (1) the retrieval system combines the performance of multiple features and has better retrieval accuracy and generalization ability than single-feature retrieval systems; (2) in each query, the weight of a single feature is updated dynamically with the query image, allowing the retrieval system to make full use of the performance of the individual features; (3) the method can be applied in both the supervised and the unsupervised case. The experimental results show that our method significantly outperforms previous approaches. The top-20 retrieval accuracy is 97.09%, 92.85%, and 94.42% on the Wang, UC Merced Land Use, and RSSCN7 datasets, respectively. The Mean Average Precision is 88.45% on the Holidays dataset.

]]>Entropy doi: 10.3390/e20080576

Authors: Do Guen Yoo Dong Eil Chang Yang Ho Song Jung Ho Lee

This study proposes a pressure-driven entropy method (PDEM) that determines a priority order of pressure gauge locations, enabling the impact of abnormal conditions (e.g., pipe failures) in water distribution networks (WDNs) to be quantitatively identified. The method utilizes the entropy method from information theory together with pressure-driven analysis (PDA), the latest hydraulic analysis method. The conventional hydraulic approach has problems in determining the locations of pressure gauges, attributable to unrealistic results under abnormal conditions (e.g., negative pressure). The proposed method was applied to two benchmark pipe networks and one real pipe network. The priority order for optimal locations was produced, and the result was compared to the existing approach. The results of the conventional method show that the pressure reduction difference between nodes becomes so excessive that it results in a distorted distribution. With the method developed, which considers the connectivity of the system and the influence among nodes based on the PDA and entropy method results, pressure gauges can be located more realistically and reasonably.

]]>Entropy doi: 10.3390/e20080575

Authors: Trevor Herntier Koffi Eddy Ihou Anthony Smith Anand Rangarajan Adrian Peter

We consider the problem of model selection using the Minimum Description Length (MDL) criterion for distributions with parameters on the hypersphere. Model selection algorithms aim to find a compromise between goodness of fit and model complexity. Variables often considered for complexity penalties involve number of parameters, sample size and shape of the parameter space, with the penalty term often referred to as stochastic complexity. Current model selection criteria either ignore the shape of the parameter space or incorrectly penalize the complexity of the model, largely because typical Laplace approximation techniques yield inaccurate results for curved spaces. We demonstrate how the use of a constrained Laplace approximation on the hypersphere yields a novel complexity measure that more accurately reflects the geometry of these spherical parameter spaces. We refer to this modified model selection criterion as spherical MDL. As proof of concept, spherical MDL is used for bin selection in histogram density estimation, performing favorably against other model selection criteria.

]]>Entropy doi: 10.3390/e20080574

Authors: Eun-jin Kim

Stochastic processes are ubiquitous in nature and laboratories, and play a major role across traditional disciplinary boundaries. These stochastic processes are described by different variables and are thus very system-specific. In order to elucidate the underlying principles governing different phenomena, it is extremely valuable to utilise a mathematical tool that is not specific to a particular system. We provide such a tool based on information geometry by quantifying the similarity and disparity between Probability Density Functions (PDFs) with a metric such that the distance between two PDFs increases with the disparity between them. Specifically, we invoke the information length L(t) to quantify the information change associated with a time-dependent PDF. L(t) is uniquely defined as a function of time for a given initial condition. We demonstrate the utility of L(t) in understanding information change and attractor structure in classical and quantum systems.
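For the reader's convenience, the information length invoked above is commonly defined (in this line of work) through the instantaneous rate of statistical change of the PDF:

```latex
% E(t) measures the (squared) rate at which p(x,t) changes in units of
% its own characteristic timescale; L(t) accumulates that rate in time.
\mathcal{E}(t) = \int dx \, \frac{1}{p(x,t)}
  \left[ \frac{\partial p(x,t)}{\partial t} \right]^{2},
\qquad
\mathcal{L}(t) = \int_{0}^{t} \sqrt{\mathcal{E}(t')} \, dt' .
```

Here \(\tau(t) = 1/\sqrt{\mathcal{E}(t)}\) plays the role of the characteristic timescale over which the PDF changes by one statistically distinguishable unit, so \(\mathcal{L}(t)\) counts the number of statistically different states traversed up to time t.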

]]>Entropy doi: 10.3390/e20080573

Authors: Rodrigo Cofré Cesar Maldonado Fernando Rosas

We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and large deviations theory to the community of computational neuroscience, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

]]>Entropy doi: 10.3390/e20080572

Authors: Pier Giovanni Bissiri Stephen G. Walker

The current definition of a conditional probability enables one to update probabilities only on the basis of stochastic information. This paper provides a definition for conditional probability with non-stochastic information. The definition is derived by a set of axioms, where the information is connected to the outcome of interest via a loss function. An illustration is presented.

]]>Entropy doi: 10.3390/e20080571

Authors: Melisa B. Maidana Capitán Emilio Kropff Inés Samengo

In the study of the neural code, information-theoretical methods have the advantage of making no assumptions about the probabilistic mapping between stimuli and responses. In the sensory domain, several methods have been developed to quantify the amount of information encoded in neural activity, without necessarily identifying the specific stimulus or response features that instantiate the code. As a proof of concept, here we extend those methods to the encoding of kinematic information in a navigating rodent. We estimate the information encoded in two well-characterized codes, mediated by the firing rate of neurons, and by the phase-of-firing with respect to the theta-filtered local field potential. In addition, we also consider a novel code, mediated by the delta-filtered local field potential. We find that all three codes transmit significant amounts of kinematic information, and informative neurons tend to employ a combination of codes. Cells tend to encode conjunctions of kinematic features, so that most of the informative neurons fall outside the traditional cell types employed to classify spatially-selective units. We conclude that a broad perspective on the candidate stimulus and response features expands the repertoire of strategies with which kinematic information is encoded.

]]>Entropy doi: 10.3390/e20080565

Authors: Takeshi Ooshida Susumu Goto Michio Otsuki

Subdiffusion is commonly observed in liquids with high density or in restricted geometries, as the particles are constantly pushed back by their neighbors. Since this &ldquo;cage effect&rdquo; emerges from many-body dynamics involving spatiotemporally correlated motions, the slow diffusion should be understood not simply as a one-body problem but as a part of collective dynamics, described in terms of space&ndash;time correlations. Such collective dynamics are illustrated here by calculations of the two-particle displacement correlation in a system of repulsive Brownian particles confined in a (quasi-)one-dimensional channel, whose subdiffusive behavior is known as the single-file diffusion (SFD). The analytical calculation is formulated in terms of the Lagrangian correlation of density fluctuations. In addition, numerical solutions to the Langevin equation with large but finite interaction potential are studied to clarify the effect of overtaking. In the limiting case of the ideal SFD without overtaking, correlated motion with a diffusively growing length scale is observed. By allowing the particles to overtake each other, the short-range correlation is destroyed, but the long-range weak correlation remains almost intact. These results describe nested space&ndash;time structure of cages, whereby smaller cages are enclosed in larger cages with longer lifetimes.

]]>Entropy doi: 10.3390/e20080570

Authors: Dragutin T. Mihailović Miloud Bessafi Sara Marković Ilija Arsenić Slavica Malinović-Milićević Patrick Jeanty Mathieu Delsaut Jean-Pierre Chabriat Nusret Drešković Anja Mihailović

Analysis of daily solar irradiation variability and predictability in space and time is important for energy resources planning, development, and management. The natural variability of solar irradiation is complicated by atmospheric conditions (in particular cloudiness) and orography, which introduce additional complexity into the phenomenological records. To address this question for daily solar irradiation data recorded during the years 2013, 2014 and 2015 at 11 stations measuring solar irradiance on the French tropical Indian Ocean island of La Reunion, we use a set of novel quantitative tools: Kolmogorov complexity (KC) with its derived associated measures, Hamming distance (HAM), and their combination, to assess complexity and the corresponding predictability. We find that all half-day (sunrise to sunset) solar irradiation series exhibit high complexity. However, all of them can be classified into three groups strongly influenced by trade winds that circulate in a &ldquo;flow around&rdquo; regime: the windward side (trade winds slow down), the leeward side (diurnal thermally-induced circulations dominate) and the coast parallel to the trade winds (winds are accelerated due to the Venturi effect). We introduce the Kolmogorov time (KT), which quantifies the time span beyond which randomness significantly influences predictability.

]]>Entropy doi: 10.3390/e20080569

Authors: Peng Wang Ge Li Yong Peng Rusheng Ju

Parameter estimation is one of the key technologies for system identification, and Bayesian parameter estimation algorithms are very important for identifying stochastic systems. In this paper, a random finite set based algorithm is proposed to overcome the disadvantages of existing Bayesian parameter estimation algorithms. It can estimate the unknown parameters of a stochastic system consisting of a varying number of constituent elements by using measurements disturbed by false detections, missed detections and noise. The models used for parameter estimation are constructed using random finite sets. Based on the proposed system model and measurement model, the key principles and formula derivation of the proposed algorithm are detailed. Then, the implementation of the algorithm is presented using a sequential Monte Carlo based Probability Hypothesis Density (PHD) filter and simulated-tempering based importance sampling. Finally, experiments on the estimation of the systematic errors of multiple sensors are provided to demonstrate the main advantages of the proposed algorithm. A sensitivity analysis is carried out to further study the mechanism of the algorithm. The experimental results verify the superiority of the proposed algorithm.

]]>Entropy doi: 10.3390/e20080568

Authors: Raúl Alcaraz

This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.

]]>Entropy doi: 10.3390/e20080567

Authors: Mojtaba Ghadimi Michael J. W. Hall Howard M. Wiseman

&ldquo;Locality&rdquo; is a fraught word, even within the restricted context of Bell&rsquo;s theorem. As one of us has argued elsewhere, that is partly because Bell himself used the word with different meanings at different stages in his career. The original, weaker, meaning for locality was in his 1964 theorem: that the choice of setting by one party could never affect the outcome of a measurement performed by a distant second party. The epitome of a quantum theory violating this weak notion of locality (and hence exhibiting a strong form of nonlocality) is Bohmian mechanics. Recently, a new approach to quantum mechanics, inspired by Bohmian mechanics, has been proposed: Many Interacting Worlds. While it is conceptually clear how the interaction between worlds can enable this strong nonlocality, technical problems in the theory have thus far prevented a proof by simulation. Here we report significant progress in tackling one of the most basic difficulties that needs to be overcome: correctly modelling wavefunctions with nodes.

]]>Entropy doi: 10.3390/e20080566

Authors: Robert Flack Vincenzo Monachello Basil Hiley Peter Barker

A method for measuring the weak value of spin for atoms is proposed using a variant of the original Stern&ndash;Gerlach apparatus. A full simulation of an experiment for observing the real part of the weak value using the impulsive approximation has been carried out. Our predictions show a displacement of the beam of helium atoms in the metastable 2^3S_1 state, &Delta;w, that is within the resolution of conventional microchannel plate detectors, indicating that this type of experiment is feasible. Our analysis also determines the experimental parameters that will give an accurate determination of the weak value of spin. Preliminary experimental results are shown for helium, neon and argon in the 2^3S_1 and ^3P_2 metastable states, respectively.

]]>Entropy doi: 10.3390/e20080564

Authors: Jesus Munoz-Pacheco Ernesto Zambrano-Serrano Christos Volos Sajad Jafari Jacques Kengne Karthikeyan Rajagopal

In this work, a new fractional-order chaotic system with a single parameter and four nonlinearities is introduced. One striking feature is that by varying the system parameter, the fractional-order system generates several complex dynamics: self-excited attractors, hidden attractors, and the coexistence of hidden attractors. In the family of self-excited chaotic attractors, the system has four spiral-saddle-type equilibrium points, or two nonhyperbolic equilibria. Besides, for a certain value of the parameter, a fractional-order no-equilibrium system is obtained. This no-equilibrium system presents a hidden chaotic attractor with a &lsquo;hurricane&rsquo;-like shape in the phase space. Multistability is also observed, since a hidden chaotic attractor coexists with a periodic one. The chaos generation in the new fractional-order system is demonstrated by the Lyapunov exponents method and equilibrium stability. Moreover, the complexity of the self-excited and hidden chaotic attractors is analyzed by computing their spectral entropy and Brownian-like motions. Finally, a pseudo-random number generator is designed using the hidden dynamics.

]]>Entropy doi: 10.3390/e20080563

Authors: Yuxing Li Yaan Li Xiao Chen Jing Yu Hong Yang Long Wang

Owing to the complexity of ocean background noise, underwater acoustic signal denoising is one of the hotspot problems in the field of underwater acoustic signal processing. In this paper, we propose a new technique for underwater acoustic signal denoising based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), mutual information (MI), permutation entropy (PE), and wavelet threshold denoising. CEEMDAN is an improved algorithm of empirical mode decomposition (EMD) and ensemble EMD (EEMD). First, CEEMDAN is employed to decompose noisy signals into many intrinsic mode functions (IMFs). The IMFs can be divided into three parts: noise IMFs, noise-dominant IMFs, and real IMFs. Then, the noise IMFs are identified on the basis of the MIs of adjacent IMFs, and the other two parts are distinguished based on their values of PE. Finally, the noise IMFs are removed, wavelet threshold denoising is applied to the noise-dominant IMFs, and the final denoised signal is obtained by combining the real IMFs with the denoised noise-dominant IMFs. Simulation experiments were conducted using simulated data, chaotic signals, and real underwater acoustic signals; the proposed denoising technique performs better than other existing techniques, which is beneficial to the feature extraction of underwater acoustic signals.

]]>Entropy doi: 10.3390/e20080562

Authors: Ho-Young Kwak

Heat transfer accompanying entropy generation for evolving mini- and microbubbles in solution is discussed based on explicit solutions of the hydrodynamic equations governing the bubble motion. Even though the pressure difference between the gas inside the bubble and the liquid outside is the major driving force for bubble evolution, heat transfer by conduction at the bubble&ndash;liquid interface affects the delicate evolution of the bubble, especially for a sonoluminescing gas bubble in sulfuric acid solution. On the other hand, our explicit solutions of the continuity equation, the Euler equation, and the Newtonian gravitational equation reveal that supernovae evolve under the gravitational force, radiating heat into space during the expanding or collapsing phase. In this article, we discuss how entropy generation due to heat transfer subtly affects the bubble motion, and how, for supernovae, heat transfer is driven by the gravitational energy and the evolution speed. The heat transfer experienced by the bubbles and supernovae during their evolution produces a positive entropy generation rate.

]]>Entropy doi: 10.3390/e20080561

Authors: Nicholas V. Sarlis

By analyzing the seismicity in a new time domain, termed natural time, we recently found that the change of the entropy under time reversal (Physica A 2018, 506, 625&ndash;634) and the relevant complexity measures (Entropy 2018, 20, 477) exhibit pronounced variations before the occurrence of the M8.2 earthquake in Mexico on 7 September 2017. Here, the statistical significance of precursory phenomena associated with other physical properties is revisited, in particular the anomalous variations observed in the Earth&rsquo;s electric and magnetic fields before earthquakes in different regions of the world, notably in Greece since the 1980s and in Japan during 2001&ndash;2010 (the magnetic field variations are alternatively termed ultra-low-frequency (ULF) seismo-magnetic phenomena). Along these lines, we employ modern statistical tools such as event coincidence analysis and the receiver operating characteristics technique. We find that these precursory variations are far beyond chance and, in addition, their lead times fully agree with the experimental findings in Greece since the 1980s.

]]>Entropy doi: 10.3390/e20080560

Authors: Kevin R. Moon Kumar Sricharan Kristjan Greenewald Alfred O. Hero

Recent work has focused on the problem of nonparametric estimation of information divergence functionals between two continuous random variables. Many existing approaches require either restrictive assumptions about the density support set or difficult calculations at the support set boundary which must be known a priori. The mean squared error (MSE) convergence rate of a leave-one-out kernel density plug-in divergence functional estimator for general bounded density support sets is derived where knowledge of the support boundary, and therefore, the boundary correction is not required. The theory of optimally weighted ensemble estimation is generalized to derive a divergence estimator that achieves the parametric rate when the densities are sufficiently smooth. Guidelines for the tuning parameter selection and the asymptotic distribution of this estimator are provided. Based on the theory, an empirical estimator of R&eacute;nyi-&alpha; divergence is proposed that greatly outperforms the standard kernel density plug-in estimator in terms of mean squared error, especially in high dimensions. The estimator is shown to be robust to the choice of tuning parameters. We show extensive simulation results that verify the theoretical results of our paper. Finally, we apply the proposed estimator to estimate the bounds on the Bayes error rate of a cell classification problem.

]]>Entropy doi: 10.3390/e20080559

Authors: Zhen Chen Yinkang Zhou Xiaobin Jin

The phenomenon of urban sprawl has received much attention. Accurately confirming the spatial expansion degree of urban sprawl (SEDUS) is a prerequisite to controlling urban sprawl. However, there is no reliable metric to accurately measure SEDUS. In this paper, based on binary entropy, we propose a new index, named the spatial expansion degree index (SEDI), to overcome this difficulty. The study shows that the new index can accurately determine SEDUS and, compared with other commonly used measures, has an obvious advantage in measuring SEDUS. The new index belongs to the second-order metrics of point pattern analysis and greatly extends the concept of entropy. It can also be applied to other spatial differentiation research from a broader perspective. Although the new index is influenced by the scaling problem, the differences between scales are small, so provided that the same partition scheme is used throughout the research process, the new index is a robust method for measuring SEDUS.
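The abstract does not reproduce the SEDI formula itself; as background, the binary (Shannon) entropy on which it is based is standard, and a minimal sketch (the function name is ours) is:

```python
import math

def binary_entropy(p):
    """Shannon entropy, in bits, of a Bernoulli(p) variable:
    H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

print(binary_entropy(0.5))   # 1.0: a fifty-fifty split is maximally uncertain
print(binary_entropy(0.05))  # close to 0: a near-deterministic split
```

In a sprawl-measurement setting, p would be the fraction of cells in a partition unit that are built-up, so uniform regions score near 0 and maximally mixed regions near 1; how SEDI aggregates this over partitions is specified in the paper, not here.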

]]>Entropy doi: 10.3390/e20080558

Authors: Johann Summhammer Georg Sulyok Gustav Bernroider

We present a comparison of a classical and a quantum mechanical calculation of the motion of K+ ions in the highly conserved KcsA selectivity filter motif of voltage-gated ion channels. We first show that the de Broglie wavelength of thermal ions is not much smaller than the periodic structure of Coulomb potentials in the nano-pore model of the selectivity filter. This implies that an ion may no longer be viewed as being at one exact position at a given time, but is better described by a quantum mechanical wave function. Based on first-principle methods, we demonstrate solutions of a non-linear Schr&ouml;dinger model that provide insight into the role of short-lived (~1 ps) coherent ion transition states and attribute an important role to subsequent decoherence and the associated quantum-to-classical transition for permeating ions. It is found that short coherences are not just beneficial but also necessary to explain the fast, directed permeation of ions through the potential barriers of the filter. Certain aspects of quantum dynamics and non-local effects appear to be indispensable to resolve the discrepancy between potential barrier height, as reported from classical thermodynamics, and experimentally observed transition rates of ions through channel proteins.

]]>Entropy doi: 10.3390/e20080557

Authors: Oscar A. Negrete Francisco J. Peña Juan M. Florez Patricio Vargas

In this work, we report the magnetocaloric effect (MCE) in two systems of non-interacting particles: the first corresponds to the Landau problem case and the second to the case of an electron in a quantum dot subjected to a parabolic confinement potential. In the first scenario, we find that the effect is totally different from what happens when the degeneracy of a single electron confined in a magnetic field is not taken into account. In particular, when the degeneracy of the system is negligible, the magnetocaloric effect cools the system, while in the other case, when the degeneracy is strong, the system heats up. For the second case, we study the competition between the characteristic frequency of the potential trap and the cyclotron frequency to find the optimal region that maximizes the &Delta;T of the magnetocaloric effect, and, due to the strong degeneracy of this problem, the results are consistent with those obtained for the Landau problem. Finally, we consider the case of a transition from a normal MCE to an inverse one and back to normal as a function of temperature. This is due to the competition between the diamagnetic and paramagnetic response when the electron spin is included in the formulation.

]]>Entropy doi: 10.3390/e20080556

Authors: Shaobo He Chunbiao Li Kehui Sun Sajad Jafari

Designing a chaotic system with infinitely many attractors is a topic of active interest. In this paper, multiscale multivariate permutation entropy (MMPE) and multiscale multivariate Lempel&ndash;Ziv complexity (MMLZC) are employed to analyze the complexity of self-reproducing chaotic systems with one-directional and two-directional infinitely many chaotic attractors. The analysis shows that the complexity of this class of chaotic systems is determined by the initial conditions. Meanwhile, the values of MMPE are independent of the scale factor, which is different from the MMLZC algorithm. The analysis proposed here is helpful as a reference for the application of self-reproducing systems.
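As background, the single-scale, univariate building block of MMPE is the Bandt&ndash;Pompe permutation entropy; a minimal sketch (parameter names are illustrative, and this omits the multiscale and multivariate extensions the paper uses) is:

```python
import math
import random

def permutation_entropy(series, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series:
    0 for a fully regular (monotone) signal, close to 1 for white noise."""
    n = len(series) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        # ordinal pattern = argsort of the window values (ties broken by index)
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = sum((c / n) * math.log2(n / c) for c in counts.values())
    return h / math.log2(math.factorial(order))

print(permutation_entropy(list(range(200))))   # 0.0: only one ordinal pattern occurs
random.seed(1)
noise = [random.random() for _ in range(2000)]
print(permutation_entropy(noise))              # near 1 for white noise
```

The multiscale variant applies the same measure to coarse-grained copies of the signal; the abstract's observation is that, unlike for MMLZC, these values barely change with the scale factor.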

]]>Entropy doi: 10.3390/e20080555

Authors: Piotr Frąckiewicz

Games with unawareness model strategic situations in which players&rsquo; perceptions about the game are limited. They take into account the fact that the players may be unaware of some of the strategies available to them or to their opponents, and that they may have a restricted view of the number of players participating in the game. The aim of this paper is to introduce this notion into the theory of quantum games. We focus on games in strategic form and on Eisert&ndash;Wilkens&ndash;Lewenstein-type quantum games. It is shown that limiting a player&rsquo;s perception in the game enriches the structure of the quantum game substantially and allows the players to obtain results that are unattainable when the game is played in a quantum way by previously used methods.

]]>Entropy doi: 10.3390/e20080554

Authors: Diego Alarcón P. Fernández de Córdoba J. M. Isidro Carlos Orea

A Toda&ndash;chain symmetry is shown to underlie the van der Waals gas and its close cousin, the ideal gas. Links to contact geometry are explored.

]]>Entropy doi: 10.3390/e20080553

Authors: Pierfrancesco Palazzo

The present research aims to discuss the thermodynamic and informational aspects of the entropy concept and to propose a unitary perspective of its definitions as an inherent property of any system in any state. The dualism and the relation between the physical nature of information and the informational content of physical states of matter and phenomena play a fundamental role in the description of multi-scale systems characterized by hierarchical configurations. A method is proposed to generalize the thermodynamic and informational entropy property and to characterize the hierarchical structure of its canonical definition at the macroscopic and microscopic levels of a system described in the domain of classical and quantum physics. The conceptual schema is based on dualisms and symmetries inherent to the geometric and kinematic configurations and interactions occurring in many-particle and few-particle thermodynamic systems. The hierarchical configuration of particles and sub-particles, representing the constitutive elements of physical systems, breaks down into levels characterized by the subdivision of particle masses, implying a multiplication of position and velocity degrees of freedom. This hierarchy accommodates the allocation of phenomena and processes from higher to lower levels in accordance with the equipartition theorem of energy. However, the opposite, reverse process, from lower to higher levels, is impossible by virtue of the Second Law, expressed as the impossibility of a Perpetual Motion Machine of the Second Kind (PMM2), which remains valid at all hierarchical levels, and of the non-existence of Maxwell&rsquo;s demon. Based on the generalized definition of the entropy property, the hierarchical structure of the entropy contribution and production balance, determined by the degrees of freedom and constraints of the system configuration, is established.
Moreover, as a consequence of the Second Law, the non-equipartition theorem of entropy is enunciated, which would be complementary to the equipartition theorem of energy derived from the First Law.

]]>Entropy doi: 10.3390/e20080552

Authors: Simon Saunders

The Gibbs Paradox is essentially a set of open questions as to how sameness of gases or fluids (or masses, more generally) is to be treated in thermodynamics and statistical mechanics. These questions have a variety of answers, some restricted to quantum theory (there is no classical solution), some to classical theory (the quantum case is different). The solution offered here applies to both in equal measure, and is based on the concept of particle indistinguishability (in the classical case, Gibbs&rsquo; notion of &lsquo;generic phase&rsquo;). Correctly understood, it is the elimination of sequence position as a labelling device, where sequences enter at the level of the tensor (or Cartesian) product of one-particle state spaces. In both cases it amounts to passing to the quotient space under permutations. &lsquo;Distinguishability&rsquo;, in the sense in which it is usually used in classical statistical mechanics, is a mathematically convenient, but physically muddled, fiction.

]]>Entropy doi: 10.3390/e20080551

Authors: Hector Zenil Narsis A. Kiani Jesper Tegnér

Information-theoretic measures have been useful in quantifying network complexity. Here we briefly survey and contrast (algorithmic) information-theoretic methods which have been used to characterize graphs and networks. We illustrate the strengths and limitations of Shannon&rsquo;s entropy, lossless compressibility and algorithmic complexity when used to identify aspects and properties of complex networks. We review the fragility of computable measures on the one hand and the invariant properties of algorithmic measures on the other, demonstrating how current approaches to algorithmic complexity are misguided and suffer from limitations similar to those of traditional statistical approaches such as Shannon entropy. Finally, we review some current definitions of algorithmic complexity which are used in analyzing labelled and unlabelled graphs. This analysis opens up several new opportunities to advance beyond traditional measures.
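As a concrete instance of the Shannon-entropy approach the survey critiques (a standard textbook measure, not the authors' code), the entropy of a graph's degree distribution can be sketched as follows; the representation as a plain adjacency dict is our choice:

```python
import math

def degree_entropy(adj):
    """Shannon entropy (bits) of the degree distribution of an undirected
    graph, given as an adjacency list {node: set_of_neighbours}."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    n = len(degrees)
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    # H = -sum p log2 p, written as sum p log2(1/p) to keep +0.0 for p = 1
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A complete graph is degree-regular, so its degree entropy is 0.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(degree_entropy(k4))            # 0.0
# A star graph mixes one hub (degree 3) with three leaves (degree 1).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_entropy(star))          # H(1/4), about 0.811 bits
```

The fragility the survey points to is visible even here: this value depends only on the degree sequence, so very differently structured graphs with the same degrees are assigned identical "complexity".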

]]>Entropy doi: 10.3390/e20080550

Authors: Rainer Hollerbach Donovan Dimanche Eun-jin Kim

We elucidate the effect of different deterministic nonlinear forces on the geometric structure of stochastic processes by investigating the transient relaxation of initial PDFs of a stochastic variable x under forces proportional to -x^n (n = 3, 5, 7) and different strengths D of &delta;-correlated stochastic noise. We identify three main stages consisting of nondiffusive evolution, quasi-linear Gaussian evolution and settling into stationary PDFs. The strength of stochastic noise is shown to play a crucial role in determining these timescales as well as the peak amplitude and width of PDFs. From the time-evolution of PDFs, we compute the rate of information change for a given initial PDF and uniquely determine the information length L(t) as a function of time, which represents the number of different statistical states that a system evolves through in time. We identify a robust geodesic (where the information changes at a constant rate) in the initial stage, and map out the geometric structure of an attractor as L(t&rarr;&infin;)&prop;&mu;^m, where &mu; is the position of an initial Gaussian PDF. The scaling exponent m increases with n, and also varies with D (although to a lesser extent). Our results highlight ubiquitous power-laws and multi-scalings of information geometry due to nonlinear interaction.
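The information length used in this line of work has the standard form of a time integral of sqrt( integral of (dp/dt)^2 / p dx ). A minimal finite-difference sketch in Python (grid sizes, the forward difference, and the translating-Gaussian check are illustrative choices, not the authors' solver) is:

```python
import math

def information_length(pdfs, dx, dt):
    """Approximate L = sum_t sqrt( sum_x ((p[t+1]-p[t])/dt)^2 / p[t] * dx ) * dt
    from PDF snapshots pdfs[t][x] on uniform grids in x and t."""
    L = 0.0
    for t in range(len(pdfs) - 1):
        e = 0.0
        for p0, p1 in zip(pdfs[t], pdfs[t + 1]):
            if p0 > 0.0:
                e += ((p1 - p0) / dt) ** 2 / p0 * dx
        L += math.sqrt(e) * dt
    return L

# Sanity check: a unit-variance Gaussian translating a distance 1 has L = 1.
dx, dt = 0.05, 0.005
xs = [-10.0 + dx * i for i in range(401)]
def gaussian(mu):
    return [math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi) for x in xs]
snapshots = [gaussian(dt * k) for k in range(201)]   # mean moves from 0 to 1
print(information_length(snapshots, dx, dt))          # close to 1
```

For a translating Gaussian of width &sigma; the rate of information change is |d&mu;/dt|/&sigma;, so the numerical value above can be checked against the exact answer; the papers in this issue apply the same construction to PDFs evolved under a Fokker&ndash;Planck equation.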

]]>Entropy doi: 10.3390/e20080549

Authors: Imene Mitiche Gordon Morison Alan Nesbitt Brian G. Stewart Philip Boreham

This work exploits four entropy measures, known as Sample, Permutation, Weighted Permutation, and Dispersion Entropy, to extract relevant information from Electromagnetic Interference (EMI) discharge signals that is useful in fault diagnosis of High-Voltage (HV) equipment. Multi-class classification algorithms are used to classify or distinguish between various discharge sources such as Partial Discharges (PD), Exciter, Arcing, micro Sparking and Random Noise. The signals were measured and recorded on different sites, followed by data analysis by EMI experts in order to identify and label the discharge source type contained within each signal. The classification was performed both within each site and across all sites. The system performs well in both cases, with extremely high within-site classification accuracy. This work demonstrates the ability to extract relevant entropy-based features from EMI discharge sources in time-resolved signals with minimal computation, making the system ideal for a potential application to online condition monitoring based on EMI.
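Of the four measures, Sample Entropy is perhaps the simplest to state; a minimal, unoptimized O(N&sup2;) sketch (parameter names are the conventional ones, with the tolerance r in the same units as the signal) is:

```python
import math
import random

def sample_entropy(ts, m=2, r=0.2):
    """Sample entropy ln(B/A): B counts pairs of m-point templates matching
    within Chebyshev tolerance r, A the same for (m+1)-point templates."""
    n = len(ts)
    def match_pairs(length):
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(ts[i + k] - ts[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    b = match_pairs(m)
    a = match_pairs(m + 1)
    return math.log(b / a) if a > 0 and b > 0 else float("inf")

print(sample_entropy([1.0] * 60))   # 0.0: a constant signal is perfectly regular
random.seed(2)
print(sample_entropy([random.random() for _ in range(60)]))  # larger for noise
```

Low sample entropy flags repetitive discharge patterns, high values flag noise-like signals, which is why such features separate discharge source classes cheaply enough for online monitoring.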

]]>Entropy doi: 10.3390/e20080548

Authors: Hailing Zhu Khmaies Ouahada

In this paper, we study the implications of using a form of network coding known as Random Linear Coding (RLC) for unicast communications from an economic perspective by investigating a simple scenario, in which several network nodes (the users) download files from the Internet via another network node (the sender), with the users paying the sender a certain price for this service. The mean packet delay for a transmission scheme with RLC is analyzed and applied to an optimal pricing model to characterize the optimal admission rate, price and revenue. The simulation results show that RLC achieves better performance in terms of both mean packet delay and revenue compared to the basic retransmission scheme.

]]>Entropy doi: 10.3390/e20080547

Authors: Guiguo Feng Wangmei Guo Binyue Liu

Consider a network consisting of two independent single-antenna sources, a single-antenna destination and a helping multiple-antenna relay. This network is called a dual-hop multiple access relay network (MARN). In this network, the sources transmit to the relay simultaneously in the first time slot. The relay retransmits the received sum-signal to the destination using a linear beamforming scheme in the second time slot. In this paper, we characterize the achievable rate region of the MARN under linear beamforming. The achievable rate region characterization problem is first transformed into an equivalent &ldquo;corner point&rdquo; optimization problem with respect to the linear beamforming matrix at the relay. Then, we present an efficient algorithm to solve it using only semi-definite programming (SDP). We further derive closed-form expressions for the maximum individual rates and the sum-rate. Finally, numerical results demonstrate the performance of the proposed schemes.

]]>Entropy doi: 10.3390/e20080546

Authors: Waleed Shahjehan Syed Waqar Shah Jaime Lloret Ignacio Bosch

To address the limitations of existing Limited Feedback Interference Alignment algorithms, this paper proposes a direct codeword selection scheme that maximizes the lower bound of the user rate and reduces the sum-rate loss by integrating a Bit Allocation algorithm. The target signal is decoded using the maximum signal-to-interference-plus-noise ratio (MAX-SINR) algorithm. Moreover, low-complexity and global searching mechanisms are deployed to select, from the generated sets of codewords, the optimized codewords that approach the ideal precoder. Simulation results show that the proposed algorithm effectively improves the rate lower bound of the system user as compared with existing state-of-the-art algorithms.

]]>Entropy doi: 10.3390/e20080545

Authors: Vassilios Gavriil Margarita Chatzichristidi Zoe Kollia Alkiviadis-Constantinos Cefalas Nikolaos Spyropoulos-Antonakakis Vadim V. Semashko Evangelia Sarantopoulou

In thin polymeric layers, external molecular analytes may well be confined within tiny surface nano/microcavities, or they may be attached to ligand adhesion binding sites via electrical dipole forces. Even though molecular trapping is followed by a variation of the entropic potential, experimental evidence of entropic energy variation from molecular confinement is scarce, because tiny thermodynamic energy density differences can be tracked only by sub-nm surface strain. Here, it is shown that water confinement within photon-induced nanocavities in poly(2-hydroxyethyl methacrylate) (PHEMA) layers can be tracked via an entropic potential variation that competes with a thermodynamic potential from electric dipole attachment of molecular adsorbates in polymeric ligands. The nano/microcavities and the ligands were fabricated on a PHEMA matrix by vacuum ultraviolet laser photons at 157 nm. The entropic energy variation during confinement of water analytes on the photon-processed PHEMA layer was monitored via sub-nm surface strain by applying white light reflectance spectroscopy, nanoindentation, contact angle measurements, Atomic Force Microscopy (AFM) imaging, and surface and fractal analysis. The methodology is able to identify entropic energy density variations of less than 1 pJm&minus;3 and to monitor dipole and entropic fields on biosurfaces.

]]>Entropy doi: 10.3390/e20070544

Authors: Andrés Santos Franz Saija Paolo V. Giaquinta

The residual multiparticle entropy (RMPE) of a fluid is defined as the difference, &Delta;s, between the excess entropy per particle (relative to an ideal gas with the same temperature and density), s^ex, and the pair-correlation contribution, s_2. Thus, the RMPE represents the net contribution to s^ex due to spatial correlations involving three, four, or more particles. A heuristic &ldquo;ordering&rdquo; criterion identifies the vanishing of the RMPE as an underlying signature of an impending structural or thermodynamic transition of the system from a less ordered to a more spatially organized condition (freezing is a typical example). Regardless of this, knowledge of the RMPE is important to assess the impact of non-pair multiparticle correlations on the entropy of the fluid. Recently, an accurate and simple approach to the thermodynamic and structural properties of a hard-sphere fluid in fractional dimension 1&lt;d&lt;3 has been proposed (Santos, A.; L&oacute;pez de Haro, M. Phys. Rev. E 2016, 93, 062126). The aim of this work is to use this approach to evaluate the RMPE as a function of both d and the packing fraction ϕ. It is observed that, for any given dimensionality d, the RMPE takes negative values for small densities, reaches a negative minimum &Delta;s_min at a packing fraction ϕ_min, and then rapidly increases, becoming positive beyond a certain packing fraction ϕ_0. Interestingly, while both ϕ_min and ϕ_0 monotonically decrease as dimensionality increases, the value of &Delta;s_min exhibits a nonmonotonic behavior, reaching an absolute minimum at a fractional dimensionality d≃2.38. A plot of the scaled RMPE &Delta;s/|&Delta;s_min| shows a quasiuniversal behavior in the region &minus;0.14≲ϕ&minus;ϕ_0≲0.02.

]]>Entropy doi: 10.3390/e20070543

Authors: Yimin Huang Qiuxiang Li

Considering consumers&rsquo; attitudes to risk for probabilistic products and probabilistic selling, this paper develops a dynamic Stackelberg game model of a supply chain with an asymmetric dual-channel structure. Based on entropy theory and dynamic theory, we analyze and simulate the influences of decision variables and parameters on the stability and entropy of asymmetric dual-channel supply chain systems using bifurcation diagrams, entropy diagrams, the parameter plot basin, attractors, etc. The results show that decision variables and parameters have great impacts on the stability of asymmetric dual-channel supply chains; the supply chain system enters chaos through flip bifurcation or Neimark&ndash;Sacker bifurcation as the system entropy increases, so that the system becomes more complex and falls into a chaotic state. The stability of the system becomes more robust as the probability that product a becomes a probabilistic product increases, and it weakens with the increase of the risk preference of customers for probabilistic products and the relative bargaining power of the retailer. A manufacturer using the direct selling channel may obtain greater profit than one using traditional selling channels. Using the method of parameter adjustment and feedback control, the entropy of the supply chain system declines, and the supply chain system settles into a stable state. Therefore, in the actual market of probabilistic selling, manufacturers and retailers should pay attention to the parameters and the adjustment speed of prices and ensure the stability of the game process and the orderliness of the dual-channel supply chain.

]]>Entropy doi: 10.3390/e20070542

Authors: Tian Zhao Yu-Chao Hua Zeng-Yuan Guo

The principle of least action, which is usually applied to natural phenomena, can also be used in optimization problems with manual intervention. Following a brief introduction to the brachistochrone problem in classical mechanics, the principle of least action was applied to the optimization of reversible thermodynamic processes and cycles in this study. Analyses indicated that the entropy variation per unit of heat exchanged is the mode of action for reversible heat absorption or heat release processes. Minimizing this action led to the optimization of heat absorption or heat release processes, and the corresponding optimal path was the first or second half of a Carnot cycle. Finally, the action of an entire reversible thermodynamic cycle was determined as the sum of the actions of the heat absorption and release processes. Minimizing this action led to a Carnot cycle. This implies that the Carnot cycle can also be derived using the principle of least action derived from the entropy concept.

]]>Entropy doi: 10.3390/e20070541

Authors: Venkata Krishna Brahmam Kota Narendra D. Chavda

Embedded ensembles or random matrix ensembles generated by k-body interactions acting in many-particle spaces are now well established to be paradigmatic models for many-body chaos and thermalization in isolated finite quantum (fermion or boson) systems. In this article, briefly discussed are (i) various embedded ensembles with Lie algebraic symmetries for fermion and boson systems and their extensions (for Majorana fermions, with point group symmetries etc.); (ii) results generated by these ensembles for various aspects of chaos, thermalization and statistical relaxation, including the role of q-hermite polynomials in k-body ensembles; and (iii) analyses of numerical and experimental data for level fluctuations for trapped boson systems and results for statistical relaxation and decoherence in these systems with close relations to results from embedded ensembles.

]]>Entropy doi: 10.3390/e20070540

Authors: Subhashis Hazarika Ayan Biswas Soumya Dutta Han-Wei Shen

Uncertainty of scalar values in an ensemble dataset is often represented by the collection of their corresponding isocontours. Various techniques such as contour-boxplots, contour variability plots, glyphs and probabilistic marching-cubes have been proposed to analyze and visualize ensemble isocontours. All these techniques assume that a scalar value of interest is already known to the user. Not much work has been done on guiding users to select the scalar values for such uncertainty analysis. Moreover, analyzing and visualizing a large collection of ensemble isocontours for a selected scalar value has its own challenges, and interpreting the visualizations of such large collections of isocontours is also a difficult task. In this work, we propose a new information-theoretic approach towards addressing these issues. Using specific information measures that estimate the predictability and surprise of specific scalar values, we evaluate the overall uncertainty associated with all the scalar values in an ensemble system. This helps the scientist to understand the effects of uncertainty on different data features. To understand in finer detail the contribution of individual members towards the uncertainty of the ensemble isocontours of a selected scalar value, we propose a conditional-entropy-based algorithm to quantify the individual contributions. This can help simplify analysis and visualization for systems with more members by identifying the members contributing the most towards overall uncertainty. We demonstrate the efficacy of our method by applying it to real-world datasets from material sciences, weather forecasting and ocean simulation experiments.

]]>Entropy doi: 10.3390/e20070539

Authors: Gregor Chliamovitch Yann Thorimbert

In two recent papers we introduced a generalization of Boltzmann&rsquo;s assumption of molecular chaos based on a criterion of maximum entropy, which allowed setting up a bilocal version of Boltzmann&rsquo;s kinetic equation. The present paper aims to investigate how the essentially non-local character of turbulent flows can be addressed through this bilocal kinetic description, instead of the more standard approach through the local Euler/Navier&ndash;Stokes equation. Balance equations appropriate to this kinetic scheme are derived and closed so as to provide bilocal hydrodynamical equations at the non-viscous order. These equations essentially consist of two copies of the usual local equations, but coupled through a bilocal pressure tensor. Interestingly, our formalism automatically produces a closed transport equation for this coupling term.

]]>Entropy doi: 10.3390/e20070538

Authors: Lev Vaidman Izumi Tsutsui

The history of photons in a nested Mach&ndash;Zehnder interferometer with an inserted Dove prism is analyzed. It is argued that the Dove prism does not change the past of the photon. Alonso and Jordan correctly point out that an experiment by Danan et al. demonstrating the past of the photon in a nested interferometer will show different results when the Dove prism is inserted. The reason, however, is not that the past is changed, but that the experimental demonstration becomes incorrect. An explanation is given of a signal from a place in which the photon was (almost) not present. The Bohmian trajectory of the photon is specified.

]]>Entropy doi: 10.3390/e20070537

Authors: Dalei Jing Lei He

The present work numerically studies the thermal characteristics of a staggered double-layer microchannel heat sink (DLMCHS) with an offset, in the width direction, between the upper and lower layers of microchannels, and investigates the effects of inlet velocity and geometric parameters, including the offset of the two layers of microchannels, the vertical rib thickness and the microchannel aspect ratio, on the thermal resistance of the staggered DLMCHS. It is found that the thermal resistance of the staggered DLMCHS increases with increasing offset when the vertical rib thickness is small, but first decreases and then increases as the offset increases when the vertical rib thickness is large enough. Furthermore, the thermal resistance of the staggered DLMCHS decreases with increasing offset when the aspect ratio is small, but increases with increasing offset when the aspect ratio is large enough. Thus, for a DLMCHS with a small microchannel aspect ratio and a large vertical rib thickness, offsetting the upper and lower layers of microchannels in the width direction is a potential method to reduce thermal resistance and improve the thermal performance of the DLMCHS.

]]>Entropy doi: 10.3390/e20070536

Authors: Michel Boyer Rotem Liss Tal Mor

A semiquantum key distribution (SQKD) protocol makes it possible for a quantum party and a classical party to generate a secret shared key. However, many existing SQKD protocols are not experimentally feasible in a secure way using current technology. An experimentally feasible SQKD protocol, “classical Alice with a controllable mirror” (the “Mirror protocol”), has recently been presented and proved completely robust, but it is more complicated than other SQKD protocols. Here we prove a simpler variant of the Mirror protocol (the “simplified Mirror protocol”) to be completely non-robust by presenting two possible attacks against it. Our results show that the complexity of the Mirror protocol is at least partly necessary for achieving robustness.

]]>Entropy doi: 10.3390/e20070535

Authors: Linqing Huang Shuting Cai Mingqing Xiao Xiaoming Xiong

Recently, to overcome the security flaws of most non-plaintext-related chaos-based image cryptosystems, which cannot efficiently resist powerful chosen/known-plaintext attacks or differential attacks owing to low plaintext sensitivity, many plaintext-related chaos-based image cryptosystems have been developed. Most cryptosystems that adopt the traditional permutation&ndash;diffusion structure still have drawbacks and security flaws: (1) most plaintext-related image encryption schemes that use only a plaintext-related confusion operation or only a plaintext-related diffusion operation relate to the plaintext inadequately and cannot achieve high plaintext sensitivity; (2) in some algorithms, the generation of the security key that needs to be sent to the receiver is determined by the original image, so these algorithms may not be applicable to real-time image encryption; (3) most plaintext-related image encryption schemes are less efficient because more than one round of permutation&ndash;diffusion operations is required to achieve high security. To obtain high security and efficiency, a simple chaos-based color image encryption system using both plaintext-related permutation and diffusion is presented in this paper. In our cryptosystem, the values of the parameters of the cat map used in the permutation stage are related to the plain image, and the parameters of the cat map are also influenced by the diffusion operation. Thus, both the permutation stage and the diffusion stage are related to the plain image, which yields high key sensitivity and plaintext sensitivity to resist chosen/known-plaintext attacks or differential attacks efficiently. Furthermore, only one round of plaintext-related permutation and diffusion is performed on the original image to obtain the cipher image, so the proposed scheme has high efficiency. Complete simulations are given, and the simulation results demonstrate the excellent security and efficiency of the proposed scheme.

]]>Entropy doi: 10.3390/e20070534

Authors: Hector Zenil Narsis Kiani Jesper Tegnér

We introduce a definition of algorithmic symmetry in the context of geometric and spatial complexity, able to capture mathematical aspects of different objects, using polyominoes and polyhedral graphs as case studies. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov–Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity—both theoretical and numerical—with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize spatial, geometric, symmetric and topological properties of mathematical objects and graphs.

]]>Entropy doi: 10.3390/e20070533

Authors: George Mikhailovsky

This article focuses on several factors of complification that operated during the evolution of our Universe. During the early stages of this evolution, up to the Recombination Era, these were the laws of quantum mechanics; during the Dark Ages, gravitation; during the chemical evolution, diversification; and during the biological and human evolution, a process of distinctifying. The main event in the evolution of the Universe was the emergence of new levels of hierarchy, which together constitute the process of hierarchogenesis. This process contains 14 such events so far, and its dynamics is presented graphically by a very regular and smooth curve. The function that the curve presents is odd, i.e., symmetric about its central part, due to the similarity between the pattern of deceleration during the cosmic/chemical evolution (the first half of the general evolution) and that of acceleration during the biological/human evolution (its second half). The main driver of the hierarchogenesis described by this odd function is the counteraction and counterbalance of attraction and repulsion, which take various forms at the different hierarchical levels. The direction and pace of the irreversible and inevitable increase of the Universe&rsquo;s complexity, in accordance with the general law of complification, result from the consistent influence of all these factors.

]]>Entropy doi: 10.3390/e20070532

Authors: Leslie Glasser H. Donald Brooke Jenkins

n/a

]]>Entropy doi: 10.3390/e20070531

Authors: Karmele Lopez-de-Ipina Jordi Solé-Casals Marcos Faúndez-Zanuy Pilar M. Calvo Enric Sesa Josep Roure Unai Martinez-de-Lizarduy Blanca Beitia Elsa Fernández Jon Iradi Joseba Garcia-Melero Alberto Bergareche

Among neural disorders related to movement, essential tremor has the highest prevalence; in fact, it is twenty times more common than Parkinson&rsquo;s disease. The drawing of the Archimedes&rsquo; spiral is the gold-standard test to distinguish between the two pathologies. The aim of this paper is to select non-linear biomarkers based on the analysis of digital drawings. This work is part of a larger cross study for the early diagnosis of essential tremor that also includes genetic information. The proposed automatic analysis system is a hybrid solution: Machine Learning paradigms combined with automatic selection of features based on statistical tests using medical criteria. Moreover, the selected biomarkers comprise not only commonly used linear features (static and dynamic), but also non-linear ones: Shannon entropy and Fractal Dimension. The results are promising, and the developed tool can easily be adapted to users; from social and economic points of view, it could be very helpful in real, complex environments.

]]>Entropy doi: 10.3390/e20070530

Authors: Amina-Aicha Khennaoui Adel Ouannas Samir Bendoukha Xiong Wang Viet-Thanh Pham

In this paper, we propose a fractional map based on the integer-order unified map. The chaotic behavior of the proposed map is analyzed by means of bifurcation plots, and experimental bounds are placed on the parameters and the fractional order. Different control laws are proposed to force the states to zero asymptotically and to achieve the complete synchronization of a pair of fractional unified maps with identical or nonidentical parameters. Numerical results are used throughout the paper to illustrate the findings.

]]>Entropy doi: 10.3390/e20070529

Authors: Simona Decu Stefan Haesen Leopold Verstraelen Gabriel-Eduard Vîlcu

In this article, we consider statistical submanifolds of Kenmotsu statistical manifolds of constant ϕ-sectional curvature. For such submanifolds, we investigate curvature properties. We establish some inequalities involving the normalized &delta;-Casorati curvatures (extrinsic invariants) and the scalar curvature (intrinsic invariant). Moreover, we prove that the equality cases of the inequalities hold if and only if the imbedding curvature tensors h and h&lowast; of the submanifold (associated with the dual connections) satisfy h=&minus;h&lowast;, i.e., the submanifold is totally geodesic with respect to the Levi&ndash;Civita connection.

]]>Entropy doi: 10.3390/e20070528

Authors: Wiktor Jakowluk

In this paper, a novel method is proposed to design a free-final-time input signal, which is then used in the robust system identification process. The solution of the constrained optimal input design problem is based on the minimization of an extra state variable representing the free final time scaling factor, formulated in the Bolza functional form, subject to the D-efficiency constraint as well as the input energy constraint. The objective function used for system model identification, constructed from the so-called entropy-like estimator, provides robustness against outlying data. The perturbation time interval has a significant impact on the cost of a real-life system identification experiment. The contribution of this work is to examine the economic trade-off between the constraints imposed on the input signal design and the experiment duration when undertaking an identification experiment under real operating conditions. The methodology is applicable to a general class of systems and is supported by numerical examples. Illustrative examples of the least-squares and entropy-like estimators for system parameter validation, in which measurements include additive white noise, are compared using ellipsoidal confidence regions.

]]>Entropy doi: 10.3390/e20070527

Authors: Romain Brasselet Angelo Arleo

Categorization is a fundamental information processing phenomenon in the brain. It is critical for animals to compress an abundance of stimulations into groups to react quickly and efficiently. In addition to labels, categories possess an internal structure: goodness measures how well any element belongs to a category. Interestingly, this categorization leads to an altered perception referred to as categorical perception: for a given physical distance, items within a category are perceived closer than items in two different categories. A subtler effect is the perceptual magnet: discriminability is reduced close to the prototypes of a category and increased near its boundaries. Here, starting from predefined abstract categories, we naturally derive the internal structure of categories and the phenomenon of categorical perception, using an information theoretical framework that involves both probabilities and pairwise similarities between items. Essentially, we suggest that pairwise similarities between items should be tuned to represent some predefined categories as faithfully as possible. However, constraints on these pairwise similarities only produce an approximate matching, which explains concurrently the notion of goodness and the warping of perception. Overall, we demonstrate that similarity-based information theory may offer a global and unified principled understanding of categorization and categorical perception.

]]>Entropy doi: 10.3390/e20070526

Authors: Kevin Brown Paul Allopenna William Hunt Rachael Steiner Elliot Saltzman Ken McRae James Magnuson

Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two forms [words] that differ by one phoneme) compete significantly for recognition as a spoken word is heard. This definition of phonological similarity can be extended to an entire corpus of forms to produce a phonological neighbor network (PNN). We study PNNs for five languages: English, Spanish, French, Dutch, and German. Consistent with previous work, we find that the PNNs share a consistent set of topological features. Using an approach that generates random lexicons with increasing levels of phonological realism, we show that even random forms with minimal relationship to any real language, combined with only the empirical distribution of language-specific phonological form lengths, are sufficient to produce the topological properties observed in the real language PNNs. The resulting pseudo-PNNs are insensitive to the level of linguistic realism in the random lexicons but quite sensitive to the shape of the form length distribution. We therefore conclude that “universal” features seen across multiple languages are really string universals, not language universals, and arise primarily due to limitations in the kinds of networks generated by the one-step neighbor definition. Taken together, our results indicate that caution is warranted when linking the dynamics of human spoken word recognition to the topological properties of PNNs, and that the investigation of alternative similarity metrics for phonological forms should be a priority.

]]>Entropy doi: 10.3390/e20070525

Authors: Eesa Al Solami Musheer Ahmad Christos Volos Mohammad Najam Doja Mirza Mohd Sufyan Beg

In this paper, we present a novel method to construct cryptographically strong bijective substitution-boxes based on the complicated dynamics of a new hyperchaotic system. The new hyperchaotic system was found to have good characteristics when compared with other systems utilized for S-box construction. The performance assessment of the proposed S-box method was carried out based on criteria such as high nonlinearity, a good avalanche effect, the bit-independence criteria, and low differential uniformity. The proposed method was also analyzed for the batch-generation of 8 &times; 8 S-boxes. The analyses found that, through the proposed purely chaos-based method, an 8 &times; 8 S-box with a maximum average nonlinearity of 108.5, or S-boxes with differential uniformity as low as 8, can be obtained. Moreover, small-sized S-boxes with high nonlinearity and low differential uniformity are also obtainable. A performance comparison of the proposed method with recent S-box proposals proved its dominance and effectiveness for strong bijective S-box construction.
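To illustrate the general idea of chaos-based S-box construction (not the authors&rsquo; hyperchaotic system), a bijective S-box can be obtained from any chaotic orbit by ranking its values; the sketch below uses a simple logistic map, whose parameters here are placeholders:

```python
import numpy as np

def chaotic_sbox(x0=0.7, r=3.99, n=256, burn_in=1000):
    # Iterate the logistic map past its transient, then rank the next
    # n orbit values; argsort of distinct values is a permutation of
    # 0..n-1, so the resulting 8x8 S-box is bijective by construction.
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        orbit[i] = x
    return np.argsort(orbit)

sbox = chaotic_sbox()
```

Criteria such as nonlinearity, the avalanche effect, and differential uniformity would then be measured on `sbox`; swapping in a hyperchaotic source changes only how the orbit is generated.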

]]>Entropy doi: 10.3390/e20070524

Authors: Yizhak Marcus

The standard entropies S&deg;298 of deep eutectic solvents (DESs), which are liquid binary mixtures of a hydrogen bond acceptor component and a hydrogen bond donor one, are calculated from their molecular volumes, derived from their densities or crystal structures. These values are compared with those of the components&mdash;pro-rated according to the DES composition&mdash;to obtain the standard entropies of DES formation &Delta;fS. These quantities are positive, due to the increased number and kinds of hydrogen bonds present in the DESs relative to those in the components. The &Delta;fS values are also compared with the freezing point depressions of the DESs, &Delta;fusT/K, but no general conclusions on their mutual relationship could be drawn.
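The comparison described above reduces to simple arithmetic: the pro-rated component entropy is the mole-fraction-weighted sum of the component values. A sketch with hypothetical numbers (the entropy values below are illustrative, not taken from the paper):

```python
def entropy_of_formation(S_des, components):
    # DeltafS = S(DES) - sum over components of x_i * S_i,
    # where x_i is the mole fraction and S_i the standard entropy
    return S_des - sum(x * S for x, S in components)

# hypothetical 1:2 binary mixture, entropies in J K^-1 mol^-1
dfS = entropy_of_formation(290.0, [(1 / 3, 250.0), (2 / 3, 105.0)])
print(dfS > 0)  # positive, as the paper reports for DESs
```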

]]>Entropy doi: 10.3390/e20070523

Authors: Jinshan Ma

A novel generalized grey target decision method for mixed attributes based on the Kullback&ndash;Leibler (K-L) distance is proposed. The proposed approach involves the following steps: first, all indices are converted into index binary connection number vectors; second, the two-tuple (determinacy, uncertainty) numbers originating from the index binary connection number vectors are obtained; third, the positive and negative target centers of the two-tuple (determinacy, uncertainty) numbers are calculated; fourth, the K-L distances of all alternatives to their positive and negative target centers are integrated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method; finally, the decision is made from the integrated value on a bigger-the-better basis. A case study exemplifies the proposed approach.
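The final integration step can be sketched as follows; the connection-number preprocessing is omitted, and the alternatives are assumed to already be strictly positive discrete distributions (the numbers are illustrative, not from the case study):

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler distance between strictly positive
    # discrete distributions p and q
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def rank_alternatives(alts, pos_center, neg_center):
    # TOPSIS-style integration: relative closeness to the positive
    # target center, judged on a bigger-the-better basis
    scores = [kl(a, neg_center) / (kl(a, pos_center) + kl(a, neg_center))
              for a in alts]
    return int(np.argmax(scores)), scores

pos, neg = [0.7, 0.2, 0.1], [0.1, 0.2, 0.7]
best, scores = rank_alternatives([pos, [0.4, 0.3, 0.3], neg], pos, neg)
```

An alternative coinciding with the positive target center scores 1, one coinciding with the negative center scores 0, and `best` indexes the highest score.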

]]>Entropy doi: 10.3390/e20070522

Authors: Yuanyuan Li Yanjing Sun Xinhua Huang Guanqiu Qi Mingyao Zheng Zhiqin Zhu

Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, etc. Traditional multi-scale transform (MST) based image fusion solutions have difficulties in the selection of the decomposition level and suffer contrast loss in the fused image. At the same time, traditional sparse-representation (SR) based image fusion methods suffer from the weak representation ability of a fixed dictionary. In order to overcome these deficiencies of MST- and SR-based methods, this paper proposes an image fusion framework which integrates the nonsubsampled contourlet transform (NSCT) into sparse representation (SR). In this fusion framework, NSCT is applied to decompose the source images into corresponding low- and high-pass coefficients, which are fused by using SR and the Sum-Modified-Laplacian (SML), respectively. The inverse NSCT then transforms the fused coefficients to obtain the final fused image. In this framework, principal component analysis (PCA) is implemented in dictionary training to reduce the dimension of the learned dictionary and the computation costs. A novel high-pass fusion rule based on the SML is applied to suppress pseudo-Gibbs phenomena around singularities of the fused image. Compared to three mainstream image fusion solutions, the proposed solution achieves better performance on structural similarity and detail preservation in fused images.

]]>Entropy doi: 10.3390/e20070521

Authors: Wenan Cai Zhaojian Yang Zhijian Wang Yiliang Wang

Due to the weak entropy of the vibration signal in a strong noise environment, it is very difficult to extract compound fault features. EMD (Empirical Mode Decomposition), EEMD (Ensemble Empirical Mode Decomposition) and LMD (Local Mean Decomposition) are widely used in compound fault feature extraction. Although they can decompose different characteristic components into separate IMFs (Intrinsic Mode Functions), serious mode mixing still occurs because of the noise. VMD (Variational Mode Decomposition) rests on a rigorous mathematical theory and can alleviate this mode mixing. Each characteristic component of VMD contains a unique center frequency, but VMD is a parametric decomposition method: an improper value of the mode number K leads to over-decomposition or under-decomposition, so the number of decomposition levels of VMD needs to be determined adaptively. The commonly used adaptive methods are particle swarm optimization and the ant colony algorithm, but they consume a lot of computing time. This paper proposes a compound fault feature extraction method based on Multipoint Kurtosis (MKurt)-VMD. Firstly, MED (Minimum Entropy Deconvolution) denoises the vibration signal in the strong noise environment. Secondly, multipoint kurtosis extracts the periodic multiple faults, and a multi-periodic vector is further constructed to determine the number of impulse periods, which determines the K value of VMD. Thirdly, the noise-reduced signal is processed by VMD and the fault features are further determined by FFT. Finally, the proposed compound fault feature extraction method alleviates mode mixing in comparison with EEMD. The validity of this method is further confirmed by processing a measured signal and extracting compound fault features such as gear spalling and a roller fault, whose fault periods are 22.4 and 111.2, with corresponding frequencies of 360 Hz and 72 Hz, respectively.
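The last step of the pipeline, reading fault frequencies off an FFT, can be illustrated on a synthetic signal; the sampling rate and tone amplitudes below are assumptions, with only the two reported fault frequencies taken from the abstract:

```python
import numpy as np

fs = 20000                               # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
# synthetic stand-in for a denoised VMD mode containing the two
# reported fault components (360 Hz gear spalling, 72 Hz roller)
sig = (np.sin(2 * np.pi * 360 * t)
       + 0.5 * np.sin(2 * np.pi * 72 * t)
       + 0.1 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
dominant = freqs[np.argmax(spectrum)]    # strongest spectral line
```

With one second of data the frequency resolution is 1 Hz, so the dominant line falls on the 360 Hz component; a real pipeline would apply this to each VMD mode after MED denoising.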

]]>Entropy doi: 10.3390/e20070520

Authors: Jan Smrek Kurt Kremer

Active matter consists of particles that dissipate energy, from their own sources, in the form of mechanical work on their surroundings. Recent interest in active-passive polymer mixtures has been driven by their relevance in phase separation of (e.g., transcriptionally) active and inactive (transcriptionally silent) DNA strands in nuclei of living cells. In this paper, we study the interfacial properties of the phase separated steady states of the active-passive polymer mixtures and compare them with equilibrium phase separation. We model the active constituents by assigning them stronger-than-thermal fluctuations. We demonstrate that the entropy production is an accurate indicator of the phase transition. We then construct phase diagrams and analyze kinetic properties of the particles as a function of the distance from the interface. Studying the interface fluctuations, we find that they follow the capillary waves spectrum. This allows us to establish a mechanistic definition of the interfacial stiffness and its dependence on the relative level of activity with respect to the passive constituents. We show how the interfacial width depends on the activity ratio and comment on the finite size effects. Our results highlight similarities and differences of the non-equilibrium steady states with an equilibrium phase separated polymer mixture with a lower critical solution temperature. We present several directions in which the non-equilibrium system can be studied further and point out interesting observations that indicate general principles behind the non-equilibrium phase separation.

]]>Entropy doi: 10.3390/e20070519

Authors: Li He Haifei Zhu Tao Zhang Honghong Yang Yisheng Guan

In kernel methods, Nystr&ouml;m approximation is a popular way of calculating out-of-sample extensions and can be further applied to large-scale data clustering and classification tasks. Given a new data point, Nystr&ouml;m employs its empirical affinity vector, k, for calculation. This vector is assumed to be a proper measurement of the similarity between the new point and the training set. In this paper, we suggest replacing the affinity vector by its projections on the leading eigenvectors learned from the training set, i.e., using k*=&sum;i=1c(kTui)ui instead, where ui is the i-th eigenvector of the training set and c is the number of eigenvectors used, which is typically equal to the number of classes specified by users. Our work is motivated by the constraints that, in kernel space, the kernel-mapped new point should (a) also lie on the unit sphere defined by the Gaussian kernel and (b) generate training set affinity values close to k. These two constraints define a Quadratic Optimization Over a Sphere (QOOS) problem. In this paper, we prove that the projection on the leading eigenvectors, rather than the original affinity vector, is the solution to the QOOS problem. The experimental results show that the proposed replacement of k by k* slightly improves the performance of the Nystr&ouml;m approximation. Compared with other affinity matrix modification methods, our k* obtains comparable or higher clustering performance in terms of accuracy and Normalized Mutual Information (NMI).
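The proposed replacement is a plain orthogonal projection onto the leading eigenvectors. A minimal numpy sketch (the Gaussian kernel and toy data are assumptions for illustration):

```python
import numpy as np

def project_affinity(k, U, c):
    # k* = sum_{i=1}^{c} (k . u_i) u_i : project the empirical
    # affinity vector onto the c leading eigenvectors in U
    Uc = U[:, :c]
    return Uc @ (Uc.T @ k)

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 2))                       # toy training set
K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, -1))  # Gaussian kernel
w, U = np.linalg.eigh(K)
U = U[:, ::-1]                  # columns ordered by decreasing eigenvalue
k_new = np.exp(-np.sum((X - rng.standard_normal(2)) ** 2, 1))
k_star = project_affinity(k_new, U, c=2)              # use in place of k_new
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order, hence the column reversal; using all n eigenvectors recovers k exactly.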

]]>Entropy doi: 10.3390/e20070518

Authors: Heonsoo Lee Zirui Huang Xiaolin Liu UnCheol Lee Anthony G. Hudetz

Theoretical considerations predict that the alteration of local and shared information in the brain is a key element in the mechanism of anesthetic-induced unconsciousness. Ordinal pattern analyses, such as permutation entropy (PE) and symbolic mutual information (SMI), have been successful in quantifying local and shared information in neurophysiological data; however, they have rarely been applied to altered states of consciousness, especially to data obtained with functional magnetic resonance imaging (fMRI). PE and SMI analysis, together with the superb spatial resolution of fMRI recording, enables us to explore the local information of specific brain areas, the shared information between areas, and the relationship between the two. Given the spatially divergent action of anesthetics on regional brain activity, we hypothesized that anesthesia would differentially influence entropy (PE) and shared information (SMI) across various brain areas, which may represent fundamental, mechanistic indicators of loss of consciousness. fMRI data were collected from 15 healthy participants during four states: wakefulness (W), light (conscious) sedation (L), deep (unconscious) sedation (D), and recovery (R). Sedation was produced by the common, clinically used anesthetic propofol. Firstly, we found that global PE decreased from W to D, and increased from D to R. The PE was differentially affected across brain areas; specifically, the PE in the subcortical network was reduced more than in the cortical networks. Secondly, SMI was also differentially affected in different areas, as revealed by the reconfiguration of its spatial pattern (topographic structure). The topographic structures of SMI in the conscious states W, L, and R were distinctly different from that of the unconscious state D. Thirdly, PE and SMI were positively correlated in W, L, and R, whereas this correlation was disrupted in D. Lastly, PE changes occurred preferentially in highly connected hub regions. These findings advance our understanding of brain dynamics and information exchange, emphasizing the importance of topographic structure and of the relationship between local and shared information in anesthetic-induced unconsciousness.
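For reference, permutation entropy itself is straightforward to compute from ordinal patterns; the following is a minimal generic implementation, not the fMRI pipeline of the paper:

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, tau=1):
    # Count ordinal patterns of embedding dimension m and delay tau,
    # then return the Shannon entropy of the pattern distribution,
    # normalized by log(m!) so the result lies in [0, 1].
    patterns = Counter(
        tuple(sorted(range(m), key=lambda j: x[i + j * tau]))
        for i in range(len(x) - (m - 1) * tau)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))
```

A monotone series yields 0 (a single pattern), while white noise approaches 1.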

]]>Entropy doi: 10.3390/e20070517

Authors: Tianchen Li Bin Liu Yong Liu Wenmin Guo Ao Fu Liangsheng Li Nie Yan Qihong Fang

A novel metal matrix composite based on the NbMoCrTiAl high entropy alloy (HEA) was designed by the in-situ formation method. The microstructure, phase evolution, and room-temperature compression mechanical properties of the composite are investigated in detail. The results confirmed that the composite was primarily composed of a body-centered cubic solid solution with a small amount of titanium carbides and alumina. With the presence of approximately 7.0 vol. % Al2O3 and 32.2 vol. % TiC reinforcing particles, the compressive fracture strength of the composite (1542 MPa) was increased by approximately 50% compared with that of the as-cast NbMoCrTiAl HEA. Considering its superior oxidation resistance, the P/M NbMoCrTiAl high entropy alloy composite could be regarded as a promising high-temperature structural material.

]]>Entropy doi: 10.3390/e20070516

Authors: Lukas Mairhofer Sandra Eibenberger Armin Shayeghi Markus Arndt

Matter-wave near-field interference can imprint a nano-scale fringe pattern onto a molecular beam, which allows observing its shifts in the presence of even very small external forces. Here we demonstrate quantum interference of the provitamin 7-dehydrocholesterol and discuss the conceptual challenges of magnetic deflectometry in a near-field interferometer as a tool to explore photochemical processes within molecules whose center of mass is quantum delocalized.

]]>Entropy doi: 10.3390/e20070515

Authors: Edoardo Milotti Sergio Bartalucci Sergio Bertolucci Massimiliano Bazzi Mario Bragadireanu Michael Cargnelli Alberto Clozza Catalina Curceanu Luca De Paolis Jean-Pierre Egger Carlo Guaraldo Mihail Iliescu Matthias Laubenstein Johann Marton Marco Miliucci Andreas Pichler Dorel Pietreanu Kristian Piscicchia Alessandro Scordo Hexi Shi Diana Laura Sirghi Florin Sirghi Laura Sperandio Oton Vázquez Doce Eberhard Widmann Johann Zmeskal

The VIolation of Pauli (VIP) experiment (and its upgraded version, VIP-2) uses the Ramberg and Snow (RS) method (Phys. Lett. B 1990, 238, 438) to search for violations of the Pauli exclusion principle in the Gran Sasso underground laboratory. The RS method consists of feeding a copper conductor with a high direct current, so that the large number of newly-injected conduction electrons can interact with the copper atoms and possibly cascade electromagnetically to an already occupied atomic ground state if their wavefunction has the wrong symmetry with respect to the atomic electrons, emitting characteristic X-rays as they do so. In their original data analysis, RS considered a very simple path for each electron, which is sure to return a bound, albeit a very weak one, because it ignores the meandering random walks of the electrons as they move from the entrance to the exit of the copper sample. These complex walks bring the electrons close to many more atoms than in the RS calculation. Here, we consider the full description of these walks and show that this leads to a nontrivial and nonlinear X-ray emission rate. Finally, we obtain an improved bound, which sets much tighter constraints on the violation of the Pauli exclusion principle for electrons.

]]>Entropy doi: 10.3390/e20070514

Authors: Arieh Ben-Naim

It is well known that the statistical mechanical theory of liquids has been lagging far behind the theory of either gases or solids; see, for example: Ben-Naim (2006), Fisher (1964), Guggenheim (1952), Hansen and McDonald (1976), Hill (1956), Temperley, Rowlinson and Rushbrooke (1968), and O’Connell (1971). Information theory was recently used to derive and interpret the entropy of an ideal gas of simple particles (i.e., non-interacting and structureless particles). Starting with Shannon’s measure of information (SMI), one can derive the entropy function of an ideal gas, the same function as derived by Sackur (1911) and Tetrode (1912). The new derivation of the same entropy function, based on the SMI, has several advantages, as listed in Ben-Naim (2008, 2017). Here we mention two: first, it provides a simple interpretation of the various terms in this entropy function. Second, and more important for our purpose, this derivation may be extended to any system of interacting particles, including liquids and solutions. The main idea is that once one adds intermolecular interactions between the particles, one also adds correlations between the particles. These correlations may be cast in terms of mutual information (MI). Hence, we can start with the information-theoretical interpretation of the entropy of an ideal gas, and then add a correction due to correlations in the form of MI between the locations of the particles. This process preserves the interpretation of the entropy of liquids and solutions as a measure of information (or as an average uncertainty about the locations of the particles). It is well known that the entropy of a liquid, any liquid for that matter, is lower than the entropy of a gas. Traditionally, this fact is interpreted in terms of order-disorder: the lower entropy of the liquid is attributed to a higher degree of order compared with that of the gas. However, unlike the transition from a solid to either a liquid or a gaseous phase, where the order-disorder interpretation works well, the same interpretation does not work for the liquid-gas transition. It is hard, if not impossible, to argue that the liquid phase is more “ordered” than the gaseous phase. In this article, we interpret the lower entropy of liquids in terms of the SMI. One outstanding liquid known to be a structured liquid is water, according to Ben-Naim (2009, 2011). In addition, heavy water, as well as aqueous solutions of simple solutes such as argon or methane, will be discussed in this article.
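For reference, the Sackur-Tetrode entropy of a monatomic ideal gas, which the SMI derivation reproduces, reads:

```latex
\frac{S}{N k_B} = \ln\!\left(\frac{V}{N \lambda^3}\right) + \frac{5}{2},
\qquad
\lambda = \frac{h}{\sqrt{2 \pi m k_B T}}
```

In the SMI interpretation discussed in the article, the terms arise from the locational and momentum uncertainties of the particles together with quantum corrections (indistinguishability and the uncertainty principle).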

]]>