Entropy doi: 10.3390/e19100510

Authors: Jian Jiao Xindong Sui Shushi Gu Shaohua Wu Qinyu Zhang

The Ka-band and higher Q/V-band channels can provide appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demand for high-data-rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert–Elliott channel, to capture a channel capacity that switches randomly between a good and a bad state. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDP) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that the proposed scheme effectively improves throughput.
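The two-state Gilbert–Elliott model referred to above is a Markov chain that alternates between a good and a bad channel state. As a minimal illustration (not the paper's POMDP scheme; the transition probabilities below are arbitrary), the chain can be simulated and its long-run good-state fraction checked against the stationary value p_bg / (p_gb + p_bg):

```python
import random

def simulate_gilbert_elliott(n_steps, p_gb, p_bg, seed=0):
    """Simulate a two-state Gilbert-Elliott channel.

    p_gb: probability of switching good -> bad at each step.
    p_bg: probability of switching bad -> good at each step.
    Returns the sequence of visited states ('G' or 'B').
    """
    rng = random.Random(seed)
    state, states = 'G', []
    for _ in range(n_steps):
        states.append(state)
        if state == 'G':
            state = 'B' if rng.random() < p_gb else 'G'
        else:
            state = 'G' if rng.random() < p_bg else 'B'
    return states

# Long-run fraction of time in the good state approaches p_bg / (p_gb + p_bg),
# here 0.3 / 0.4 = 0.75.
states = simulate_gilbert_elliott(100_000, p_gb=0.1, p_bg=0.3)
frac_good = states.count('G') / len(states)
```

A transmitter with delayed CSI would only observe noisy functions of this hidden state, which is what motivates the POMDP formulation in the abstract.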

Entropy doi: 10.3390/e19100508

Authors: Jiping Yang Yijun Feng Wanhua Qiu

Yang and Qiu proposed, and recently improved, an expected utility–entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived a representation of risky choices as an expected utility term plus a constant multiple of the Shannon entropy, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in a portfolio. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The results imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.

Entropy doi: 10.3390/e19100476

Authors: Armando Fontalvo Jose Solano Cristian Pedraza Antonio Bula Arturo Gonzalez Quiroga Ricardo Vasquez Padilla

Low-grade heat sources such as solar thermal, geothermal, exhaust gases and industrial waste heat are suitable alternatives for power generation that can be exploited by means of small-scale Organic Rankine Cycles (ORC). This paper combines thermodynamic optimization and economic analysis to assess the performance of single- and dual-pressure ORCs operating with different organic fluids and targeting small-scale applications. Maximum power output is lower than 45 kW, while the temperature of the heat source varies in the range 100–200 °C. The studied working fluids, namely R1234yf, R1234ze(E) and R1234ze(Z), are selected based on environmental, safety and thermal performance criteria. The Levelized Cost of Electricity (LCOE) and Specific Investment Cost (SIC) for two operating conditions are presented: maximum power output and maximum thermal efficiency. Results showed that R1234ze(Z) achieves the highest net power output (up to 44 kW) when net power output is optimized. The regenerative ORC achieves the highest performance when thermal efficiency is optimized (up to 18%). The simple ORC is the most cost-effective among the studied cycle configurations, requiring an energy selling price of 0.3 USD/kWh to obtain a payback period of 8 years. According to the SIC results, the working fluid R1234ze(Z) exhibits great potential for the simple ORC when compared to conventional R245fa.

Entropy doi: 10.3390/e19090507

Authors: Giancarlo Franzese Ivan Latella J. Rubi

n/a

Entropy doi: 10.3390/e19090498

Authors: Samuel Adesanya Michael Fakoya

In the present work, entropy generation in the flow and heat transfer of couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, the asymmetrical slip conditions are imposed on the channel walls. The upper wall of the channel is subjected to a constant heat flux while the lower wall is insulated. The equations governing the fluid flow are formulated, non-dimensionalized and solved by using the Adomian decomposition method. The Adomian series solutions for the velocity and temperature fields are then used to compute the entropy generation rate and inherent heat irreversibility in the flow domain. The effects of various fluid parameters are presented graphically and discussed extensively.

Entropy doi: 10.3390/e19090505

Authors: Yan Feng Xue-Qin Jiang Jia Hou Hui-Ming Wang Yi Yang

The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. We define the transmission rate as the ratio between the number of raw key bits obtained in the AD phase and the number of bits transmitted during AD. A unidirectional SKA, whose transmission rate is 0.5, can be realized by using the original two-way wiretap channel as the AD phase. In this paper, we establish an efficient bidirectional SKA whose transmission rate is nearly 1 by modifying the two-way wiretap channel and using the modified channel as the AD phase. The bidirectional SKA can be extended to multiple rounds of SKA with the same performance and transmission rate. For multiple rounds of bidirectional SKA, we provide the bit error rate (BER) performance of the main channel and the eavesdropper's channel, as well as the secret-key capacity. It is shown that the BER of the main channel is lower than that of the eavesdropper's channel, and we prove that the transmission rate approaches 1 as the number of rounds grows large. Moreover, the secret-key capacity C_s ranges from 0.04 to 0.1 as the error probability of the binary symmetric channel (BSC) ranges from 0.01 to 0.15, and it approaches 0.3 as the signal-to-noise ratio increases in the additive white Gaussian noise (AWGN) channel.

Entropy doi: 10.3390/e19090504

Authors: Shuangshuang Lin Zhigang Liu Keting Hu

In this paper, a new approach for the detection and location of open-switch faults in the closed-loop inverter-fed vector-controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on the Choi–Williams distribution (CWD), as a statistical tool, can effectively indicate the presence and location of transients in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is appropriate for detecting transient changes in complex non-linear and non-stationary signals. Based on analyses of the currents in normal and fault conditions, SK based on CWD and WPESE are combined with the DC component method: SK based on CWD and WPESE are used for fault detection, and the DC component method is used for fault localization. This approach can diagnose the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform were carried out, and the experimental results verify the feasibility and effectiveness of the diagnosis method.
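Wavelet-packet energy Shannon entropy, as used above, measures how evenly a signal's energy spreads across the wavelet-packet subbands. A hand-rolled sketch with a Haar filter bank follows (illustrative only; the paper's wavelet, decomposition depth and signal conditioning are not reproduced here):

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: orthonormal approximation and detail halves."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet_leaves(x, level):
    """Full Haar wavelet-packet decomposition down to `level` (2**level leaves)."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes

def wpese(x, level=3):
    """Shannon entropy (nats) of the normalized subband energies."""
    energies = np.array([np.sum(n ** 2) for n in wavelet_packet_leaves(x, level)])
    p = energies / energies.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A pure tone concentrates energy in a few subbands (low entropy), while
# white noise spreads it across all subbands (entropy near log(2**level)).
t = np.arange(1024)
rng = np.random.default_rng(0)
h_tone = wpese(np.sin(2 * np.pi * 0.03 * t))
h_noise = wpese(rng.standard_normal(1024))
```

A transient fault signature changes how energy distributes over the subbands, which is why a drop or jump in this entropy can serve as a detection statistic.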

Entropy doi: 10.3390/e19090501

Authors: Liangjun Yu Liangxiao Jiang Dianhong Wang Lungan Zhang

Of the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to approximate the ground-truth attribute dependencies; meanwhile, the probability estimation in ODEs is effective, leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging all of a constrained class of classifiers. However, all one-dependence estimators in AODE have the same weights and are treated equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between each root attribute value and the class. Our approach uses two different attribute value weighting measures, the Kullback–Leibler (KL) measure and the information gain (IG) measure, and thus two different versions are created, denoted by AVWAODE-KL and AVWAODE-IG, respectively. We experimentally tested them using a collection of 36 University of California at Irvine (UCI) datasets and found that they both achieved better performance than some other state-of-the-art Bayesian classifiers used for comparison.
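The discriminative weights above are computed from the correlation between a root attribute value and the class. As a minimal sketch of a KL-type attribute-value weight on toy data (the exact weighting formulas are defined in the paper; this is only a plausible reconstruction, and the data are hypothetical):

```python
import numpy as np

def kl_value_weight(attr_col, class_col, value):
    """KL divergence between P(class | attribute = value) and P(class), in bits,
    used here as an illustrative discriminative weight for one root value."""
    classes = sorted(set(class_col))
    n = len(class_col)
    prior = np.array([class_col.count(c) / n for c in classes])
    rows = [c for a, c in zip(attr_col, class_col) if a == value]
    post = np.array([rows.count(c) / len(rows) for c in classes])
    mask = post > 0  # 0 * log 0 = 0 by convention
    return float(np.sum(post[mask] * np.log2(post[mask] / prior[mask])))

# Toy data: value 'x' is perfectly associated with class 1, value 'y' only weakly,
# so 'x' should receive the larger weight.
attr  = ['x', 'x', 'x', 'y', 'y', 'y', 'y', 'x']
label = [1, 1, 1, 0, 0, 0, 1, 1]
w_x = kl_value_weight(attr, label, 'x')
w_y = kl_value_weight(attr, label, 'y')
```

An ODE rooted at a highly class-informative attribute value would then contribute more to the weighted average than one rooted at an uninformative value.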

Entropy doi: 10.3390/e19090502

Authors: Gengxi Zhang Xiaoling Su Vijay P. Singh Olusola O. Ayantobo

Terrestrial vegetation dynamics are closely influenced by both hydrological processes and climate change. This study investigated the relationships between vegetation patterns and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled variables in the middle reaches of the Hei River basin. Based on the spatial distribution of mutual information, the whole study area was divided into five sub-regions. In each sub-region, nested statistical models were applied to model the NDVI on the grid and regional scales, respectively. Results showed that the annual average NDVI increased at a rate of 0.005/a over the past 11 years. In the desert regions, the NDVI increased significantly with increasing precipitation and temperature, and a high-accuracy NDVI retrieval model was obtained by coupling precipitation and temperature, especially in sub-region I. In the oasis regions, groundwater was also an important factor driving vegetation growth, and a rise in the groundwater level contributed to vegetation growth. However, the relationship was weaker in the artificial oasis regions (sub-region III and sub-region V) due to the influence of human activities such as irrigation. The overall correlation coefficient between the observed and modeled NDVI was 0.97. The outcomes of this study are suitable for ecosystem monitoring, especially in the context of climate change. Further studies are necessary and should consider more factors, such as runoff and irrigation.

Entropy doi: 10.3390/e19090497

Authors: Pei Yang Yue Wu Liqiang Jin Hongwen Yang

This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is drawn from an M-ary two-dimensional constellation with equal probability, for both the uplink, i.e., the multiple access channel (MAC), and the downlink, i.e., the broadcast channel (BC). Based on successive interference cancellation (SIC) and the entropy-power Gaussian approximation, it is shown that both the multi-user MAC and BC can be approximated by a bank of parallel channels, with the channel gains modified by an extra attenuation factor equal to the negative exponential of the capacity of the interfering users. With this result, the capacity of the MAC and BC with an arbitrary number of users and arbitrary constellations can be easily calculated, in sharp contrast with traditional Monte Carlo simulation, whose computational cost increases exponentially with the number of users. Further, the multi-user sum capacity under different power allocation strategies, including equal power allocation, equal capacity power allocation and maximum capacity power allocation, is also investigated. For equal capacity power allocation, a recursive relation for the power allocation solution is derived. For maximum capacity power allocation, the necessary condition for optimal power allocation is obtained, and an optimal algorithm for the power allocation problem is proposed based on this condition.

Entropy doi: 10.3390/e19090499

Authors: Eugenio Vogel Patricio Vargas Gonzalo Saravia Julio Valdes Antonio Ramirez-Pastor Paulo Centres

In the present paper, we discuss the interpretation of some results of thermodynamics in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements, in what is called the thermodynamic limit, but not all of the approximations valid under those conditions can be extended to all properties of objects with fewer than a thousand elements. The starting point is the Ising model in two dimensions (2D), where an analytic solution exists, which allows validating the numerical techniques used in the present article. From there on, we introduce several variations bearing in mind small systems such as the nanoscopic or even subnanoscopic particles that are nowadays produced for several applications. Magnetization is the main property investigated, with two possible singular devices in mind. The size of the systems (number of magnetic sites) is decreased so as to appreciate the departure from the results valid in the thermodynamic limit; periodic boundary conditions are eliminated to approach the reality of small particles; 1D, 2D and 3D systems are examined to appreciate the differences established by dimensionality in this small world; upon diluting the lattices, the effect of coordination number (bonding) is also explored; and, since the 2D Ising model is equivalent to the clock model with q = 2 degrees of freedom, we combine previous results with the supplementary degrees of freedom coming from the variation of q up to q = 20. Most of the previous results are numerical; however, for the case of a very small system, we obtain the exact partition function to compare with the conclusions coming from our numerical results.
Conclusions can be summarized in the following way: the laws of thermodynamics remain the same, but the interpretation of the results, averages and numerical treatments need special care for systems with less than about a thousand constituents, and this might need to be adapted for different properties or devices.
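For a very small system, the exact partition function mentioned above can indeed be obtained by brute-force enumeration. A sketch for a 2×2 Ising lattice with open (free) boundaries follows (illustrative only; the paper's systems, sizes and boundary conditions vary):

```python
import itertools
import math

def ising_partition(beta, J=1.0, L=2):
    """Exact partition function of an L x L Ising lattice with open
    boundaries, by enumerating all 2**(L*L) spin configurations."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=L * L):
        s = [spins[i * L:(i + 1) * L] for i in range(L)]
        E = 0.0
        for i in range(L):
            for j in range(L):
                if i + 1 < L:          # vertical nearest-neighbor bond
                    E -= J * s[i][j] * s[i + 1][j]
                if j + 1 < L:          # horizontal nearest-neighbor bond
                    E -= J * s[i][j] * s[i][j + 1]
        Z += math.exp(-beta * E)
    return Z

# The 2x2 open lattice has 4 bonds and energy levels -4J (x2), 0 (x12), +4J (x2),
# so Z = 2 e^{4 beta J} + 12 + 2 e^{-4 beta J}.
Z = ising_partition(beta=0.5)
```

From such an exact Z, averages like energy and magnetization follow without any thermodynamic-limit approximation, which is exactly the comparison the abstract describes.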

Entropy doi: 10.3390/e19090500

Authors: Jonathan Schrock Alex McCaskey Kathleen Hamilton Travis Humble Neena Imam

A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.

Entropy doi: 10.3390/e19090352

Authors: Paulo Rotela Junior Luiz Rocha Giancarlo Aquila Pedro Balestrassi Rogério Peruchi Liviam Lacerda

Recently, different methods have been proposed for portfolio optimization and decision making on investment issues. This article presents a new method for portfolio formation based on Data Envelopment Analysis (DEA) and the entropy function. The new portfolio optimization method applies DEA in association with a model resulting from the insertion of the entropy function directly into the optimization procedure. First, the DEA model was applied to perform a pre-selection of the assets. Then, the assets rated as efficient were submitted to the proposed model, which results from inserting the entropy function into the simplified Sharpe portfolio optimization model. As a result, the participation of the assets in the portfolio was improved. In the DEA model, several variables were evaluated and a low value of beta was achieved, guaranteeing greater robustness to the portfolio. The entropy function provided not only greater diversity but also a more feasible asset allocation. Additionally, the proposed method obtained better portfolio performance, as measured by the Sharpe ratio, relative to the comparative methods.

Entropy doi: 10.3390/e19090496

Authors: Jerry Gibson Preethi Mahadevan

We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results, which make the log likelihood ratio of interest in applications beyond spectral matching for speech.

Entropy doi: 10.3390/e19090495

Authors: Subhajit Majhi Patrick Mitran

We study a class of two-transmitter two-receiver dual-band Gaussian interference channels (GIC) which operates over the conventional microwave and the unconventional millimeter-wave (mm-wave) bands. This study is motivated by future 5G networks where additional spectrum in the mm-wave band complements transmission in the incumbent microwave band. The mm-wave band has a key modeling feature: due to severe path loss and relatively small wavelength, a transmitter must employ highly directional antenna arrays to reach its desired receiver. This feature causes the mm-wave channels to become highly directional, and thus can be used by a transmitter to transmit to its designated receiver or the other receiver. We consider two classes of such channels, where the underlying GIC in the microwave band has weak and strong interference, and obtain sufficient channel conditions under which the capacity is characterized. Moreover, we assess the impact of the additional mm-wave band spectrum on the performance, by characterizing the transmit power allocation for the direct and cross channels that maximizes the sum-rate of this dual-band channel. The solution reveals conditions under which different power allocations, such as allocating the power budget only to direct or only to cross channels, or sharing it among them, becomes optimal.

Entropy doi: 10.3390/e19090493

Authors: Steeve Zozor David Puertas-Centeno Jesús Dehesa

Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal bound from below. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality had already been proposed by Bercher and Lutwak; it is a particular case of the general one, in which the three parameters are linked, allowing one to determine the sharp lower bound and the associated probability density of minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We also determine the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function.
Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential.

Entropy doi: 10.3390/e19090492

Authors: Bjarne Andresen Christopher Essex

We investigate the importance of the time and length scales at play in our descriptions of Nature. What can we observe at the atomic scale, at the laboratory (human) scale, and at the galactic scale? Which variables make sense? For every scale we wish to understand we need a set of variables which are linked through closed equations, i.e., everything can meaningfully be described in terms of those variables without the need to investigate other scales. Examples from physics, chemistry, and evolution are presented.

Entropy doi: 10.3390/e19090494

Authors: Michael Wibral Conor Finn Patricia Wollstadt Joseph Lizier Viola Priesemann

Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, formal information-theoretic definitions until very recently were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems.
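The synergy underlying the proposed definition of information modification can be seen in miniature with the XOR gate: neither input alone carries information about the output, yet the pair determines it completely. A sketch using plain mutual-information quantities (a full PID additionally requires a redundancy measure, which is not implemented here):

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of a list of hashable outcomes."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_info(xs, ys):
    """Plug-in mutual information I(X;Y) in bits."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# XOR with uniform independent inputs: purely synergistic information.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y = [a ^ b for a, b in zip(x1, x2)]

i_joint = mutual_info(list(zip(x1, x2)), y)  # I(X1,X2 ; Y) = 1 bit
i_1 = mutual_info(x1, y)                     # I(X1 ; Y) = 0 bits
i_2 = mutual_info(x2, y)                     # I(X2 ; Y) = 0 bits
```

The full bit in the joint mutual information that is invisible to either input alone is exactly the synergistic component that PID isolates.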

Entropy doi: 10.3390/e19090489

Authors: Lianrong Zheng Weifeng Pan Yifan Li Daiyi Luo Qian Wang Guanzheng Liu

Obstructive sleep apnea (OSA) is a common sleep disorder that is often associated with reduced heart rate variability (HRV), indicating autonomic dysfunction. HRV is mainly composed of high-frequency components attributed to parasympathetic activity and low-frequency components attributed to sympathetic activity. Although time-domain and frequency-domain features of HRV have been used in sleep studies, the complex interaction between nonlinear independent frequency components and OSA is less well known. This study included 30 electrocardiogram recordings (20 from OSA patients and 10 from healthy subjects) with apnea or normal labels on 1-min segments. All segments were divided into three groups: N-N (normal segments of normal subjects), P-N (normal segments of OSA subjects) and P-OSA (apnea segments of OSA subjects). Frequency-domain indices and interaction indices were extracted from the segmented RR intervals. The frequency-domain indices included nuLF, nuHF, and the LF/HF ratio; the interaction indices included mutual information (MI) and transfer entropy (TE(H→L) and TE(L→H)). Our results demonstrated that the LF/HF ratio was significantly higher in the P-OSA group than in the N-N and P-N groups. MI was significantly larger in the P-OSA group than in the P-N group. TE(H→L) and TE(L→H) showed a significant decrease in the P-OSA group compared to the P-N and N-N groups. TE(H→L) was significantly negatively correlated with the LF/HF ratio in the P-N group (r = −0.789, p = 0.000) and the P-OSA group (r = −0.661, p = 0.002). Our results indicate that MI and TE are powerful tools for evaluating sympathovagal modulation in OSA. Moreover, sympathovagal modulation is more imbalanced in OSA patients during apnea events than during event-free periods.
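Transfer entropy, one of the interaction indices above, can be estimated for discrete series with a simple plug-in estimator. A sketch with history length 1 on a toy copy process (the study's estimator settings for the RR-interval frequency components may differ):

```python
import math
import random
from collections import Counter

def transfer_entropy(source, target, base=2):
    """Plug-in estimate of transfer entropy TE(source -> target) with
    history length 1, i.e. I(target_{t+1} ; source_t | target_t)."""
    triples = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(triples)
    c_yyx = Counter(triples)                         # (y_next, y_now, x_now)
    c_yx = Counter((y, x) for _, y, x in triples)    # (y_now, x_now)
    c_yy = Counter((yn, y) for yn, y, _ in triples)  # (y_next, y_now)
    c_y = Counter(y for _, y, _ in triples)          # (y_now,)
    te = 0.0
    for (yn, y, x), cnt in c_yyx.items():
        p_cond_full = cnt / c_yx[(y, x)]      # p(y_next | y_now, x_now)
        p_cond_self = c_yy[(yn, y)] / c_y[y]  # p(y_next | y_now)
        te += (cnt / n) * math.log(p_cond_full / p_cond_self, base)
    return te

# Toy system: the target copies the source with a one-step delay, so
# information flows only from source to target (about 1 bit per step).
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(20000)]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

The asymmetry between the two estimates is what makes TE a directed coupling measure, unlike MI, which is symmetric.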

Entropy doi: 10.3390/e19090491

Authors: Linda Senigagliesi Marco Baldi Franco Chiaraluce

We propose and assess an on–off protocol for communication over wireless wiretap channels with security at the physical layer. By taking advantage of suitable cryptographic primitives, the protocol we propose allows two legitimate parties to exchange confidential messages with some chosen level of semantic security against passive eavesdroppers, and without needing either pre-shared secret keys or public keys. The proposed method leverages the noisy and fading nature of the channel and exploits coding and all-or-nothing transforms to achieve the desired level of semantic security. We show that the use of fake packets in place of skipped transmissions during low channel quality periods yields significant advantages in terms of time needed to complete transmission of a secret message. Numerical examples are provided considering coding and modulation schemes included in the WiMax standard, thus showing that the proposed approach is feasible even with existing practical devices.

Entropy doi: 10.3390/e19090488

Authors: Mohit Kumar Ram Pachori U. Acharya

Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscle. It expands rapidly and, if not treated in time, continues to damage the heart muscle. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpreting the changes in the ECG. Therefore, a method for automatic diagnosis of MI from ECG beats using the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the ECG signals are segmented into beats. Then, FAWT is applied to each ECG beat, decomposing it into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to random forest (RF), J48 decision tree, back-propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest-performing one. We achieved the highest classification accuracy, 99.31%, using the LS-SVM classifier. We also incorporated the Wilcoxon and Bhattacharyya ranking methods and observed no improvement in performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis.
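Sample entropy, the feature extracted from each subband above, quantifies the irregularity of a signal by comparing template matches of length m and m + 1. A minimal sketch follows (m = 2 and r = 0.2·SD are common defaults, not necessarily the paper's settings):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal; the tolerance r is
    expressed as a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        # Overlapping templates of length mm.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between all template pairs (i < j only).
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return np.sum(d[np.triu_indices(len(templ), k=1)] <= tol)

    B = count_matches(m)      # matches at length m
    A = count_matches(m + 1)  # matches at length m + 1
    return float(-np.log(A / B))

# Regular signals keep matching at length m + 1 (low SampEn);
# random signals mostly do not (high SampEn).
rng = np.random.default_rng(42)
t = np.arange(300)
se_sine = sample_entropy(np.sin(2 * np.pi * 0.05 * t))
se_noise = sample_entropy(rng.standard_normal(300))
```

The pairwise-distance matrix makes this O(N²) in memory, which is fine for short beats but would need a streaming variant for long recordings.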

Entropy doi: 10.3390/e19090484

Authors: Anming Dong Haixia Zhang Minglei Shu Dongfeng Yuan

This paper considers power splitting (PS)-based simultaneous wireless information and power transfer (SWIPT) for multiple-input multiple-output (MIMO) interference channel networks where multiple transceiver pairs share the same frequency spectrum. As the PS model is adopted, an individual receiver splits the received signal into two parts for information decoding (ID) and energy harvesting (EH), respectively. Aiming to minimize the total transmit power, the transmit precoders, receive filters and PS ratios are jointly designed under predefined signal-to-interference-plus-noise ratio (SINR) and EH constraints. The formulated joint transceiver design and power splitting problem is non-convex and thus difficult to solve directly. In order to obtain its solution effectively, the feasibility conditions of the formulated non-convex problem are first analyzed. Based on this analysis, an iterative algorithm is proposed that alternately optimizes the transmitters together with the power splitting factors and the receivers, based on semidefinite programming (SDP) relaxation. Moreover, considering the prohibitive computational cost of SDP for practical applications, a low-complexity suboptimal scheme is proposed that separately designs interference-suppressing transceivers based on interference alignment (IA) and optimizes the transmit power allocation together with the splitting factors. The transmit power allocation and receive power splitting problem is then recast as a convex optimization problem and solved efficiently. To further reduce the computational complexity, a scheme is proposed that calculates the transmit power allocation and receive PS ratios in closed form. Simulation results show the effectiveness of the proposed schemes in achieving SWIPT for MIMO interference channel (IC) networks.

Entropy doi: 10.3390/e19090479

Authors: Arnab Ghosh

In the spirit of Bose–Einstein condensation, we present a detailed account of the statistical description of condensation phenomena for a Fermi–Dirac gas, following the works of Born and Kothari. For bosons, the condensed phase below a certain critical temperature permits macroscopic occupation of the lowest-energy single-particle state; for fermions, due to the Pauli exclusion principle, the condensed phase occurs only in the form of singly occupied dense modes at the highest energy state. In spite of these rudimentary differences, our recent findings [Ghosh and Ray, 2017] identify the foregoing phenomenon as condensation-like coherence among fermions, analogous to a Bose–Einstein condensate, which is collectively described by a coherent matter wave. To reach this conclusion, we employ the close relationship between the statistical methods for bosonic and fermionic fields pioneered by Cahill and Glauber. In addition to our previous results, we describe in this mini-review how the highest momentum (energy) of individual fermions, a prerequisite for the condensation process, can be specified in terms of the natural length and energy scales of the problem. The existence of such condensed phases, which is of obvious significance in the context of elementary particles, has also been scrutinized.

Entropy doi: 10.3390/e19090485

Authors: Joseph Smiga Jacob Taylor

The simplest cosmology—the Friedmann–Robertson–Walker–Lemaître (FRW) model—describes a spatially homogeneous and isotropic universe where the scale factor is the only dynamical parameter. Here we consider how quantized electromagnetic fields become entangled with the scale factor in a toy version of the FRW model. A system consisting of a photon, source, and detector is described in such a universe, and we find that the detection of a redshifted photon by the detector system constrains possible scale factor superpositions. Thus, measuring the redshift of the photon is equivalent to a weak measurement of the underlying cosmology. We also consider a potential optomechanical analogy system that would enable experimental exploration of these concepts. The analogy focuses on the effects of photon redshift measurement as a quantum back-action on metric variables, where the position of a movable mirror plays the role of the scale factor. By working in the rotating frame, an effective Hubble equation can be simulated with a simple free moving mirror.

]]>Entropy doi: 10.3390/e19090486

Authors: José-Luis Muñoz-Cobo Rafael Mendizábal Arturo Miquel Cesar Berna Alberto Escrivá

The determination of the probability distribution function (PDF) of uncertain input and model parameters in engineering application codes is an issue of importance for uncertainty quantification methods. One of the approaches that can be used for the PDF determination of input and model parameters is the application of methods based on the maximum entropy principle (MEP) and the maximum relative entropy principle (MREP). These methods determine the PDF that maximizes the information entropy when only partial information about the parameter distribution is known, such as some moments of the distribution and its support. In addition, this paper shows the application of the MREP to update the PDF when the parameter must fulfill some technical specifications (TS) imposed by the regulations. Three computer programs have been developed: GEDIPA, which provides the parameter PDF using empirical distribution function (EDF) methods; UNTHERCO, which performs the Monte Carlo sampling on the parameter distribution; and DCP, which updates the PDF considering the TS and the MREP. Finally, the paper displays several applications and examples for the determination of the PDF applying the MEP and the MREP, and the influence of several factors on the PDF.

]]>Entropy doi: 10.3390/e19090483

Authors: Isabella Gallino

In contrast to pure metals and most non-glass forming alloys, metallic glass-formers are moderately strong liquids in terms of fragility. The notion of fragility of an undercooled liquid reflects the sensitivity of the viscosity of the liquid to temperature changes and describes the degree of departure of the liquid kinetics from the Arrhenius equation. In general, the fragility of metallic glass-formers increases with the complexity of the alloy, with differences between the alloy families, e.g., Pd-based alloys being more fragile than Zr-based alloys, which are more fragile than Mg-based alloys. Here, experimental data are assessed for 15 bulk metallic glass-formers, including the novel and technologically important systems based on Ni-Cr-Nb-P-B, Fe-Mo-Ni-Cr-P-C-B, and Au-Ag-Pd-Cu-Si. The data for the equilibrium viscosity are analyzed using the Vogel–Fulcher–Tammann (VFT) equation, the Mauro–Yue–Ellison–Gupta–Allan (MYEGA) equation, and the Adam–Gibbs approach based on specific heat capacity data. The excess specific heat is experimentally observed to be larger overall for the more fragile supercooled liquids than for the stronger ones. Moreover, the stronger the glass, the higher the free enthalpy barrier to cooperative rearrangements, suggesting the same microscopic origin and rigorously connecting the kinetic and thermodynamic aspects of fragility.

]]>Entropy doi: 10.3390/e19090480

Authors: Andrea Grigorescu Holger Boche Rafael Schaefer

Robust biometric authentication is studied from an information theoretic perspective. Compound sources are used to account for uncertainty in the knowledge of the source statistics and are further used to model certain attack classes. It is shown that authentication is robust against source uncertainty and a special class of attacks under the strong secrecy condition. A single-letter characterization of the privacy secrecy capacity region is derived for the generated and chosen secret key model. Furthermore, the question is studied whether small variations of the compound source lead to large losses of the privacy secrecy capacity region. It is shown that biometric authentication is robust in the sense that its privacy secrecy capacity region depends continuously on the compound source.

]]>Entropy doi: 10.3390/e19090482

Authors: Hyenkyun Woo

In image and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether or not the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this article, we study the domain of the beta-divergence and its connection to the Bregman-divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman-divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman-divergence, we can reformulate various variational models appearing in image processing problems into a unified framework, namely the Bregman variational model. This model has a strong advantage compared to the beta-divergence-based model due to the dual structure of the Bregman-divergence. As an example, we demonstrate how to build up a convex reformulated variational model with a negative domain for a classic nonconvex problem, which usually appears in synthetic aperture radar image processing.
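The standard beta-divergence on the positive domain, from which the article starts, has a compact closed form. The sketch below is an illustrative implementation of that textbook formula only (not of the authors' extended-domain construction); the function name is our own:

```python
import math

def beta_divergence(x: float, y: float, beta: float) -> float:
    """Standard beta-divergence between positive scalars x and y:

        d_beta(x, y) = (x^beta + (beta - 1) y^beta - beta x y^(beta - 1))
                       / (beta (beta - 1)),

    with limiting cases beta -> 1 (generalized KL) and beta -> 0 (Itakura-Saito)."""
    if beta == 1.0:  # generalized Kullback-Leibler divergence
        return x * math.log(x / y) - x + y
    if beta == 0.0:  # Itakura-Saito divergence
        return x / y - math.log(x / y) - 1.0
    return (x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1)) / (beta * (beta - 1))
```

For beta = 2 the formula reduces to half the squared Euclidean distance, (x - y)^2 / 2, which is the sense in which the beta family interpolates between familiar similarity measures.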

]]>Entropy doi: 10.3390/e19090481

Authors: Muhammad Bhatti Mohsen Sheikholeslami Ahmed Zeeshan

A theoretical and mathematical model is presented to determine the entropy generation on electro-kinetically modulated peristaltic propulsion of magnetized nanofluid flow through a microchannel with Joule heating. The mathematical modeling is based on the energy, momentum, continuity, and entropy equations in the Cartesian coordinate system. The effects of viscous dissipation, heat absorption, magnetic field, and electrokinetic body force are also taken into account. The electrical potential terms are modeled by means of the Poisson–Boltzmann equation, the ionic Nernst–Planck equation, and the Debye length approximation. A perturbation method has been applied to solve the coupled nonlinear partial differential equations, and a series solution is obtained up to second order. The physical behavior of all the governing parameters is discussed for the pressure rise, velocity profile, entropy profile, and temperature profile.

]]>Entropy doi: 10.3390/e19090478

Authors: Feifan Zhang Huayong Zhang Tousheng Huang Tianxiang Meng Shengnan Ma

Wind-induced vegetation patterns were proposed a long time ago, but a dynamic vegetation-sand relationship has been established only recently. In this research, we transformed the continuous vegetation-sand model into a discrete model. Fixed points and their stability were then analyzed. Bifurcation analyses were carried out around the fixed points, including Neimark-Sacker and Turing bifurcations, and the parameter space for both bifurcations was simulated. Based on the bifurcation conditions, simulations were carried out around the bifurcation points. Simulation results showed that Neimark-Sacker bifurcation and Turing bifurcation can induce the self-organization of complex vegetation patterns, among which labyrinth and striped patterns are the key results that can be presented by the continuous model. Under the coupled effects of the two bifurcations, simulation results show that vegetation patterns can still self-organize, but the pattern type changes: it can be of Turing type, Neimark-Sacker type, or some other special type. The difference may depend on the relative intensity of each bifurcation. The calculation of entropy may help understand the variance of pattern types.

]]>Entropy doi: 10.3390/e19090442

Authors: George Thomas Manik Banik Sibasish Ghosh

We study coupled quantum systems as the working media of thermodynamic machines. Under a suitable phase-space transformation, the coupled systems can be expressed as a composition of independent subsystems. We find that for the coupled systems, the figures of merit, that is, the efficiency for the engine and the coefficient of performance for the refrigerator, are bounded (both from above and from below) by the corresponding figures of merit of the independent subsystems. We also show that the optimal work extractable from a coupled system is upper bounded by the optimal work obtained from the uncoupled system, thereby showing that quantum correlations do not help in optimal work extraction. Further, we study two explicit examples: coupled spin-1/2 systems and coupled quantum oscillators with analogous interactions. Interestingly, for a particular kind of interaction, the efficiency of the coupled oscillators outperforms that of the coupled spin-1/2 systems when they work as heat engines. However, for the same interaction, the coefficient of performance behaves in the reverse manner when the systems work as refrigerators. Thus, the same coupling can cause opposite effects in the figures of merit of the heat engine and the refrigerator.

]]>Entropy doi: 10.3390/e19090477

Authors: Linh Ma Jaehyung Park Jiseung Nam HoYong Ryu Jinsul Kim

Dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) is an advanced video streaming technology that deals with the uncertainty of network states. However, it has one drawback: network states change frequently and continuously. The quality of a video stream fluctuates along with the network changes, which might reduce the quality of service. In recent years, many researchers have proposed adaptive streaming algorithms to reduce such changes. However, these algorithms only consider the current state of the network and thus might produce inaccurate estimates of video quality in the near term. Therefore, in this paper, we propose a method using fuzzy logic and a moving average technique to reduce mobile video quality fluctuation in Dynamic Adaptive Streaming over HTTP (DASH). First, we calculate the moving average of the bandwidth and buffer values for a given period. On the basis of the differences between the real and average values, we propose a fuzzy logic system to deduce the video quality representation for the next request. In addition, we use the entropy rate of the bandwidth measurement sequence to measure the predictability/stability of our method. The experimental results show that our proposed method reduces video quality fluctuation and improves bandwidth utilization by 40% compared to existing methods.
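The moving-average smoothing step described in the abstract can be sketched as follows. This is an illustrative reconstruction under our own naming (the fuzzy inference stage, which consumes the deviation, is omitted):

```python
from collections import deque

def sliding_average(samples, window):
    """Simple moving average over the last `window` samples, used to smooth
    bandwidth/buffer measurements before a fuzzy inference step."""
    buf = deque(maxlen=window)  # deque drops the oldest sample automatically
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

bandwidth_kbps = [3000, 2800, 900, 3100, 2950]  # hypothetical throughput samples
avg = sliding_average(bandwidth_kbps, window=3)
# The deviation between the latest sample and its moving average is the kind
# of quantity a fuzzy rule base would map to a quality-level decision.
deviation = bandwidth_kbps[-1] - avg[-1]
```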

]]>Entropy doi: 10.3390/e19090235

Authors: Emmanuel Abbe Jiange Li Mokshay Madiman

In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for Z/3Z, the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (i.e., MSTD) sets in additive combinatorics. We also investigate polar codes for q-ary input channels using non-canonical kernels to construct the generator matrix and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction.
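The Z/3Z claim is easy to probe numerically. The sketch below (our own illustrative code, not the authors' proof) computes the entropies of the sum and difference for an arbitrary pmf on Z/3Z and spot-checks the stated inequality on random distributions:

```python
import math
import random

def entropy(p):
    """Shannon entropy in bits of a pmf given as a list of probabilities."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def sum_diff_entropies(p):
    """Entropies of X+Y and X-Y (mod 3) for X, Y i.i.d. with pmf p on Z/3Z."""
    s = [sum(p[i] * p[(k - i) % 3] for i in range(3)) for k in range(3)]  # convolution
    d = [sum(p[i] * p[(i - k) % 3] for i in range(3)) for k in range(3)]  # correlation
    return entropy(s), entropy(d)

# Spot-check H(X - Y) >= H(X + Y) on random pmfs over Z/3Z.
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    p = [x / sum(w) for x in w]
    hs, hd = sum_diff_entropies(p)
    assert hd >= hs - 1e-12
```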

]]>Entropy doi: 10.3390/e19090475

Authors: Francesco Villecco Arcangelo Pellegrino

In this paper, the problem of the evaluation of the uncertainties that originate in the complex design process of a new system is analyzed, paying particular attention to multibody mechanical systems. To this end, the Wiener-Shannon axioms are extended to non-probabilistic events, and a theory of information for non-repetitive events is used as a measure of the reliability of data. The selection of the solutions consistent with the values of the design constraints is performed by analyzing the complexity of the relation matrix and using the idea of information in the metric space. By comparing the alternatives in terms of the amount of entropy resulting from the various distributions, this method is capable of finding the optimal solution that can be obtained with the available resources. In the paper, the algorithmic steps of the proposed method are discussed and an illustrative numerical example is provided.

]]>Entropy doi: 10.3390/e19090474

Authors: Tycho Tax Pedro Mediano Murray Shanahan

In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.

]]>Entropy doi: 10.3390/e19090473

Authors: Mario Ciampini Paolo Mataloni Mauro Paternostro

Quantum networks are natural scenarios for the communication of information among distributed parties, and the arena of promising schemes for distributed quantum computation. Measurement-based quantum computing is a prominent example of how quantum networking, embodied by the generation of a special class of multipartite states called cluster states, can be used to achieve a powerful paradigm for quantum information processing. Here we analyze randomly generated cluster states in order to address the emergence of correlations as a function of the density of edges in a given underlying graph. We find that the most widespread multipartite entanglement does not correspond to the highest amount of edges in the cluster. We extend the analysis to higher dimensions, finding similar results that suggest the establishment of small world structures in the entanglement sharing of randomised cluster states, which can be exploited in engineering more efficient quantum information carriers.

]]>Entropy doi: 10.3390/e19090469

Authors: Alok Maity Pinaki Chaudhury Suman Banik

Biochemical networks having similar functional pathways are often correlated due to cross-talk among the homologous proteins in the different networks. Using a stochastic framework, we address the functional significance of the cross-talk between two pathways. A theoretical analysis of generic MAPK pathways reveals that cross-talk is responsible for developing coordinated fluctuations between the pathways. The extent of correlation, evaluated in terms of an information theoretic measure, provides directionality to net information propagation. Stochastic time series suggest that the cross-talk generates synchronisation in a cell. In addition, the cross-interaction develops correlation between two different phosphorylated kinases expressed in each of the cells in a population of genetically identical cells. Depending on the number of inputs and outputs, we identify signal integration and signal bifurcation motifs that arise due to inter-pathway connectivity in the composite network. Analysis using partial information decomposition, an extended formalism of multivariate information calculation, also quantifies the net synergy in the information propagation through the branched pathways. Under this formalism, a signature of synergy or redundancy is observed due to the architectural difference in the branched pathways.

]]>Entropy doi: 10.3390/e19090472

Authors: Feng Chen Yi Gao Michael Galperin

Recent developments in nanoscale experimental techniques made it possible to utilize single molecule junctions as devices for electronics and energy transfer with quantum coherence playing an important role in their thermoelectric characteristics. Theoretical studies on the efficiency of nanoscale devices usually employ rate (Pauli) equations, which do not account for quantum coherence. Therefore, the question whether quantum coherence could improve the efficiency of a molecular device cannot be fully addressed within such considerations. Here, we employ a nonequilibrium Green function approach to study the effects of quantum coherence and dephasing on the thermoelectric performance of molecular heat engines. Within a generic bichromophoric donor-bridge-acceptor junction model, we show that quantum coherence may increase efficiency compared to quasi-classical (rate equation) predictions and that pure dephasing and dissipation destroy this effect.

]]>Entropy doi: 10.3390/e19090471

Authors: Yiming Fan Ling-Li Zeng Hui Shen Jian Qin Fuquan Li Dewen Hu

Imaging connectomics based on graph theory has become an effective and unique methodological framework for studying functional connectivity patterns of the developing and aging brain. Normal brain development is characterized by continuous and significant network evolution through infancy, childhood, and adolescence, following specific maturational patterns. Normal aging is related to the disruption of some resting-state brain networks, which is associated with certain cognitive decline. It is a big challenge to design an integral metric to track connectome evolution patterns across the lifespan, and doing so is key to understanding the principles of network organization in the human brain. In this study, we first defined a brain network eigen-entropy (NEE) based on the energy probability (EP) of each brain node. Next, we used the NEE to characterize the lifespan orderness trajectory of the whole-brain functional connectivity of 173 healthy individuals ranging in age from 7 to 85 years. The results revealed that during the lifespan, the whole-brain NEE exhibited a significant non-linear decrease and that the EP distribution shifted from concentration to wide dispersion, implying an enhancement of the orderness of the functional connectome with age. Furthermore, brain regions with significant EP changes from the flourishing (7–20 years) to the youth period (23–38 years) were mainly located in the right prefrontal cortex and basal ganglia, and were involved in emotion regulation and executive function in coordination with the action of the sensory system, implying that self-awareness and voluntary control performance significantly changed during neurodevelopment. However, the changes from the youth period to middle age (40–59 years) were located in the mesial temporal lobe and caudate, which are associated with long-term memory, implying that the memory of the human brain begins to decline with age during this period.
Overall, the findings suggested that the human connectome shifted from a relatively anatomical driven state to an orderly organized state with lower entropy.

]]>Entropy doi: 10.3390/e19090470

Authors: Yan Jin Juan Du Zhiyuan Li Hongwu Zhang

Several fundamental concepts with respect to the second-law analysis (SLA) of the turbulent flows in gas turbines are discussed in this study. Entropy and exergy equations for compressible/incompressible flows in a rotating/non-rotating frame have been derived. The exergy transformation efficiency of a gas turbine and the exergy transformation number for a single process step have been proposed. The exergy transformation number indicates the overall performance of a single process in a gas turbine, including the local irreversible losses in it and its contribution to the exergy obtained in the combustion chamber. A more general formula for calculating local entropy generation rate densities is suggested. A test case of a compressor cascade has been employed to demonstrate the application of the developed concepts.

]]>Entropy doi: 10.3390/e19090468

Authors: Chol Kang Michelangelo Naim Vezha Boboeva Alessandro Treves

We study latching dynamics in the adaptive Potts model network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C and the number of stored memory patterns p. We find narrow regions, or bands in phase space, where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; while away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps as to span a transition matrix between states with few asymmetrical entries and limited entropy; while above, they tend to become random, with large entropy and bi-directional transition frequencies, and thus indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the p–S plane, and sublinearly in p–C. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space.

]]>Entropy doi: 10.3390/e19090467

Authors: Avihay Sadeh-Shirazi Uria Basher Haim Permuter

Let (S_{1,i}, S_{2,i}) ∼ i.i.d. p(s_1, s_2), i = 1, 2, ⋯, be a memoryless, correlated partial side information sequence. In this work, we study channel coding and source coding problems where the partial side information (S_1, S_2) is available at the encoder and the decoder, respectively, and, additionally, either the encoder’s or the decoder’s side information is increased by a limited-rate description of the other’s partial side information. We derive six special cases of channel coding and source coding problems and we characterize the capacity and the rate-distortion functions for the different cases. We present a duality between the channel capacity and the rate-distortion cases we study. In order to find numerical solutions for our channel capacity and rate-distortion problems, we use the Blahut-Arimoto algorithm and convex optimization tools. Finally, we provide several examples corresponding to the channel capacity and the rate-distortion cases we presented.

]]>Entropy doi: 10.3390/e19090465

Authors: Maria Bertotti Giovanni Modanese

Why does the Maxwell-Boltzmann energy distribution for an ideal classical gas have an exponentially thin tail at high energies, while the Kaniadakis distribution for a relativistic gas has a power-law fat tail? We argue that a crucial role is played by the kinematics of the binary collisions. In the classical case the probability of an energy exchange far from the average (i.e., close to 0% or 100%) is quite large, while in the extreme relativistic case it is small. We compare these properties with the concept of “saving propensity”, employed in econophysics to define the fraction of their money that individuals put at stake in economic interactions.

]]>Entropy doi: 10.3390/e19090466

Authors: Kab-Mun Cha Nitish Thakor Hyun-Chool Shin

In this paper, we propose novel quantitative electroencephalogram (qEEG) measures by exploiting three critical and distinct phases (isoelectric, fast progression, and slow progression) of qEEG time evolution. Critical time points where the phase transitions occur are calculated. Most conventional measures have two major disadvantages. First, to obtain a meaningful time evolution of the raw electroencephalogram (EEG), these measures require baseline EEG activity recorded before the subject’s injury. Second, conventional qEEG measures need at least 2∼3 h of EEG recording to predict meaningful long-term neurological outcomes. Unlike the conventional qEEG measures, the proposed measures do not require baseline EEG information from before the injury and, furthermore, can be calculated with only 20∼30 min of EEG data recorded after cardiopulmonary resuscitation (CPR).

]]>Entropy doi: 10.3390/e19090464

Authors: Ibai Mugica Steven Roy Sébastien Poncet Jonathan Bouchard Hakim Nesreddine

This paper analyzes the energetic and exergy performance of an active magnetic regenerative refrigerator using water-based Al2O3 nanofluids as heat transfer fluids. A 1D numerical model has been extensively used to quantify the exergy performance of a system composed of a parallel-plate regenerator, magnetic source, pump, heat exchangers and control valves. Al2O3-water based nanofluids are tested using the CoolProp library, accounting for temperature-dependent properties, and appropriate correlations. The results are discussed in terms of the coefficient of performance, the exergy efficiency, and the cooling power as a function of the nanoparticle volume fraction and blowing time for a given geometrical configuration. It is shown that while the heat transfer between the fluid and solid is enhanced, it is accompanied by smaller temperature gradients within the fluid and larger pressure drops when increasing the nanoparticle concentration. In all configurations, this leads to lower performance compared to the base case with pure liquid water.

]]>Entropy doi: 10.3390/e19090463

Authors: Renat Sibatov Vadim Shulezhko Vyacheslav Svetukhin

Anomalous advection-diffusion in two-dimensional semiconductor systems with coexisting energetic and structural disorder is described in the framework of a generalized model of multiple trapping on a comb-like structure. The basic equations of the model contain fractional-order derivatives. To validate the model, we compare analytical solutions with results of a Monte Carlo simulation of phonon-assisted tunneling in two-dimensional patterns of a porous nanoparticle agglomerate and a phase-separated bulk heterojunction. To elucidate the role of directed percolation, we calculate transient current curves of the time-of-flight experiment and the evolution of the mean squared displacement averaged over medium realizations. The variations of the anomalous advection-diffusion parameters as functions of electric field intensity and levels of energetic and structural disorder are presented.

]]>Entropy doi: 10.3390/e19090462

Authors: Shuai Chang Jialun Li Xiaomei Fu Liang Zhang

Energy harvesting (EH) has attracted a lot of attention in cooperative communication networks studies for its capability of transferring energy from sources to relays. In this paper, we study the secrecy capacity of a cooperative compressed sensing amplify and forward (CCS-AF) wireless network in the presence of eavesdroppers based on an energy harvesting protocol. In this model, the source nodes send their information to the relays simultaneously, and then the relays perform EH from the received radio-frequency signals based on the power splitting-based relaying (PSR) protocol. The energy harvested by the relays will be used to amplify and forward the received information to the destination. The impacts of some key parameters, such as the power splitting ratio, energy conversion efficiency, relay location, and the number of relays, on the system secrecy capacity are analyzed through a group of experiments. Simulation results reveal that under certain conditions, the proposed EH relaying scheme can achieve higher secrecy capacity than traditional relaying strategies while consuming equal or even less power.

]]>Entropy doi: 10.3390/e19090461

Authors: Alyssa Adams Angelica Berner Paul Davies Sara Walker

A major conceptual step forward in understanding the logical architecture of living systems was advanced by von Neumann with his universal constructor, a physical device capable of self-reproduction. A necessary condition for a universal constructor to exist is that the laws of physics permit physical universality, such that any transformation (consistent with the laws of physics and availability of resources) can be caused to occur. While physical universality has been demonstrated in simple cellular automata models, so far these have not displayed a requisite feature of life—namely open-ended evolution—the explanation of which was also a prime motivator in von Neumann’s formulation of a universal constructor. Current examples of physical universality rely on reversible dynamical laws, whereas it is well-known that living processes are dissipative. Here we show that physical universality and open-ended dynamics should both be possible in irreversible dynamical systems if one entertains the possibility of state-dependent laws. We demonstrate with simple toy models how the accessibility of state space can yield open-ended trajectories, defined as trajectories that do not repeat within the expected Poincaré recurrence time and are not reproducible by an isolated system. We discuss implications for physical universality, or an approximation to it, as a foundational framework for developing a physics for life.

]]>Entropy doi: 10.3390/e19090430

Authors: Tudorel Andrei Bogdan Oancea Peter Richmond Gurjeet Dhesi Claudiu Herteliu

This paper identifies the salient factors that characterize the inequality of the income distribution for Romania. Data analysis is rigorously carried out using sophisticated techniques borrowed from classical statistics (the Theil index). A decomposition of the inequalities measured by the Theil index is also performed. This study relies on an exhaustive dataset (11.1 million records for 2014) of the total personal gross income of Romanian citizens.
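For reference, the Theil T index used in this kind of analysis has a short closed form; a minimal sketch (illustrative, with our own naming):

```python
import math

def theil_index(incomes):
    """Theil T index: (1/N) * sum_i (x_i / mu) * ln(x_i / mu).

    Equals 0 for perfect equality and ln(N) for maximal inequality
    (all income concentrated in a single individual); zero incomes
    contribute nothing, following the convention x ln x -> 0."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((x / mu) * math.log(x / mu) for x in incomes if x > 0) / n
```

Its additive decomposability into between-group and within-group components is what makes decomposition analyses of this sort possible.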

]]>Entropy doi: 10.3390/e19090460

Authors: Luis Estrada Abel Torres Leonardo Sarlabous Raimon Jané

Fixed sample entropy (fSampEn) is a robust technique that allows the evaluation of inspiratory effort in diaphragm electromyography (EMGdi) signals, and has potential utility in sleep studies. To appropriately estimate respiratory effort, fSampEn requires the adjustment of several parameters. The aims of the present study were to evaluate the influence of the embedding dimension m, the tolerance value r, the size of the moving window, and the sampling frequency, and to establish recommendations for estimating respiratory activity when using fSampEn on surface EMGdi signals recorded during different inspiratory efforts. Values of m equal to 1 with r ranging from 0.1 to 0.64, and m equal to 2 with r ranging from 0.13 to 0.45, were found to be suitable for evaluating respiratory activity. fSampEn was less affected by window size than classical amplitude parameters. Finally, variations in sampling frequency could influence fSampEn results. In conclusion, the findings suggest the potential utility of fSampEn for estimating respiratory muscle effort in future sleep studies.
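The defining difference from classical sample entropy is that the tolerance r is held fixed rather than rescaled by each window's standard deviation, so the statistic tracks signal amplitude. A minimal sketch (our own illustrative O(n^2) implementation, not the authors' code):

```python
import math

def fixed_sample_entropy(x, m=1, r=0.1):
    """Sample entropy with a fixed tolerance r (fSampEn-style).

    B counts template matches of length m, A of length m + 1, both under
    the Chebyshev distance; the result is -ln(A/B). Because r is not
    normalized by the window's standard deviation, the value depends on
    signal amplitude (here, a proxy for respiratory effort)."""
    n = len(x)

    def count_matches(length):
        # Use n - m templates for both lengths so the counts are comparable.
        templates = [x[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

In the windowed setting of the study, a quantity like this would be recomputed over a moving window of the EMGdi signal, with m, r, window size, and sampling frequency chosen in the ranges the abstract recommends.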

]]>Entropy doi: 10.3390/e19090456

Authors: Keyan Ghazi-Zahedi Carlotta Langer Nihat Ay

There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden of the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that it is more reliable in cases in which there is no synergistic information, compared to previous results. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems.

]]>Entropy doi: 10.3390/e19090457

Authors: Constantino Tsallis

The Boltzmann–Gibbs (BG) entropy and its associated statistical mechanics were generalized, three decades ago, on the basis of the nonadditive entropy S_q (q ∈ ℝ), which recovers the BG entropy in the q → 1 limit. The optimization of S_q under appropriate simple constraints straightforwardly yields the so-called q-exponential and q-Gaussian distributions, respectively generalizing the exponential and Gaussian ones, which are recovered for q = 1. These generalized functions ubiquitously emerge in complex systems, especially as economic and financial stylized features. These include the distributions of price returns and volumes, inter-occurrence times, and the characterization of wealth distributions and associated inequalities, among others. Here, we briefly review the basic concepts of this q-statistical generalization and focus on its rapidly growing applications in economics and finance.
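
For reference, the q-exponential and q-Gaussian mentioned above have simple closed forms; a minimal sketch (unnormalized q-Gaussian, with q = 1 treated as the ordinary limit):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; recovers exp(x) in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # Tsallis cutoff: the q-exponential vanishes where the base is negative
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian q_exp(-beta * x**2): power-law tails for q > 1,
    compact support for q < 1, ordinary Gaussian shape at q = 1."""
    return q_exp(-beta * x * x, q)
```

The q > 1 case gives the heavy (power-law) tails typical of the financial stylized facts the review discusses.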

]]>Entropy doi: 10.3390/e19090459

Authors: Francisco De Sousa Lima

Through Monte Carlo simulations, we studied the critical properties of kinetic models of continuous opinion dynamics on (3, 4, 6, 4) and (3⁴, 6) Archimedean lattices. We obtain p_c and the critical exponent ratios from extensive Monte Carlo studies and finite-size scaling. The calculated critical points and Binder cumulants are p_c = 0.085(6) and O_4* = 0.605(9) for the (3, 4, 6, 4) lattice, and p_c = 0.146(5) and O_4* = 0.606(3) for the (3⁴, 6) lattice, while the exponent ratios β/ν, γ/ν and 1/ν are, respectively, 0.126(1), 1.50(7), and 0.90(5) for (3, 4, 6, 4), and 0.125(3), 1.54(6), and 0.99(3) for (3⁴, 6). Our new results agree with the majority-vote model on previously studied regular lattices and disagree with the Ising model on the square lattice.

]]>Entropy doi: 10.3390/e19090458

Authors: Fabrizio Tamburini Mariafelicia Laurentis Ignazio Licata Bo Thidé

Background: The Hawking–Perry–Strominger (HPS) work advances a new, controversial idea about the black hole (BH) information paradox, in which BHs maximally entropize and encode information in their event horizon area, with no “hair” thought to reveal information outside beyond angular momentum, mass, and electric charge, in a unique quantum gravity (QG) vacuum state. New conservation laws of gravitation and electromagnetism appear to generate different QG vacua, preserving more information in soft photon/graviton hair implants. We find that BH photon hair implants can encode the orbital angular momentum (OAM) and vorticity of the electromagnetic (EM) field. Methods: Numerical simulations are used to plot an EM field with OAM emitted by a set of dipolar currents, together with the soft photon field they induce. The analytical results confirm that the soft photon hair implant carries OAM and vorticity. Results: A set of charges and currents generating real EM fields with precise values of OAM induces a “curly”, twisted, soft-hair implant on the BH, with vorticity and OAM increased by one unit with respect to the initial real field. Conclusions: Soft photon implants can be spatially shaped ad hoc, encoding structured and densely organized information on the event horizon.

]]>Entropy doi: 10.3390/e19090455

Authors: Martin Tamm

In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise and in particular strong versions (which are not compatible with time symmetric microscopic laws) and weak versions (which can be compatible with time symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not.

]]>Entropy doi: 10.3390/e19090443

Authors: Yikun Wei Zhengdao Wang Yuehong Qian

Entropy generation in two-dimensional Rayleigh–Bénard convection at different Prandtl numbers (Pr) is investigated in the present paper using the lattice Boltzmann method. The major concern of the present paper is to explore the effects of Pr on the local distributions of entropy generation due to frictional and heat transfer irreversibility, and on the overall entropy generation in the whole flow field. The results of this work indicate that the significant viscous entropy generation rates (Su) gradually expand to bulk contributions of the cavity with increasing Pr; the thermal entropy generation rates (Sθ) and total entropy generation rates (S) mainly concentrate where the temperature gradient is steepest; the entropy generation in the flow is dominated by heat transfer irreversibility; and, for the same Rayleigh number, the amplitudes of Su, Sθ and S decrease with increasing Pr. It is also found that the amplitudes of the horizontally averaged viscous, thermal and total entropy generation rates decrease with increasing Pr. The probability density functions of Su, Sθ and S develop a much thinner tail with increasing Pr, while the tails at large entropy generation values fit a log-normal curve well. The distribution and the departure from log-normality become more robust with decreasing Pr.
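
The viscous and thermal entropy generation rates discussed above follow standard local expressions; the sketch below uses a common textbook form (conventions for the power of the reference temperature T0 vary between papers) on a hypothetical toy field, not the paper's lattice Boltzmann data.

```python
import numpy as np

def entropy_generation(T, ux, uy, dx, k=1.0, mu=1.0, T0=1.0):
    """Local thermal (S_theta) and viscous (S_u) entropy generation rates:
    S_theta = k |grad T|^2 / T0^2 and S_u = (mu / T0) * Phi, where Phi is
    the 2-D viscous dissipation function."""
    dTdy, dTdx = np.gradient(T, dx)                # axis 0 = y, axis 1 = x
    s_theta = k * (dTdx ** 2 + dTdy ** 2) / T0 ** 2
    duxdy, duxdx = np.gradient(ux, dx)
    duydy, duydx = np.gradient(uy, dx)
    phi = 2.0 * (duxdx ** 2 + duydy ** 2) + (duxdy + duydx) ** 2
    s_u = mu * phi / T0
    return s_u, s_theta, s_u + s_theta

# Toy check: a linear temperature field in a quiescent fluid gives a
# uniform S_theta and zero S_u.
dx = 0.1
T = np.tile(np.linspace(0.0, 1.0, 11), (11, 1))   # dT/dx = 1 everywhere
zero = np.zeros_like(T)
s_u, s_theta, s_total = entropy_generation(T, zero, zero, dx)
```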

]]>Entropy doi: 10.3390/e19090454

Authors: Jung Lee Jaekyum Kim Byeoungdo Kim Dongweon Yoon Jun Choi

In this paper, we propose a deep neural network (DNN)-based automatic modulation classification (AMC) for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN) channels, classification accuracy degrades for fading channels, where the amplitude and phase of the channel gain change in time. The key contributions of this paper are twofold. First, we analyze the effectiveness of a variety of statistical features for the AMC task in fading channels. We reveal that the features shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on the DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. A fully connected feedforward network with four hidden layers is trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers significant performance gain over existing AMC methods in fading channels.
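
Statistical features of the kind referred to above are often higher-order moments and cumulants of the received baseband signal; a hypothetical sketch with two standard cumulant-style features (the paper's exact feature set may differ):

```python
import numpy as np

def c20(y):
    """Second-order moment E[y^2]; |C20| is 1 for BPSK, ~0 for QPSK."""
    return np.mean(y ** 2)

def c40(y):
    """Fourth-order cumulant E[y^4] - 3 E[y^2]^2."""
    return np.mean(y ** 4) - 3.0 * np.mean(y ** 2) ** 2

rng = np.random.default_rng(3)
# Noiseless toy symbol streams with unit average energy
bpsk = rng.choice(np.array([-1.0, 1.0]), size=5000)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=5000)
qpsk = qpsk / np.sqrt(2.0)
```

Feature vectors like (|C20|, C40, ...) computed per received frame are what a classifier, whether a DNN or a conventional one, would consume.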

]]>Entropy doi: 10.3390/e19090453

Authors: Markus Lips

This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences of the enterprises’ allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs.

]]>Entropy doi: 10.3390/e19090452

Authors: Serkan Akogul Murat Erisoglu

To determine the number of clusters in cluster analysis, which has applications across a broad range of applied sciences such as physics, chemistry, biology, engineering, and economics, many methods have been proposed in the literature. The aim of this paper is to determine the number of clusters of a dataset in model-based clustering by using an Analytic Hierarchy Process (AHP). In this study, the AHP model has been created using the information criteria Akaike’s Information Criterion (AIC), Approximate Weight of Evidence (AWE), Bayesian Information Criterion (BIC), Classification Likelihood Criterion (CLC), and Kullback Information Criterion (KIC). The performance of the proposed approach has been tested on common real and synthetic datasets. The proposed approach has produced accurate results, which have been seen to be more accurate than those obtained from the individual information criteria.
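
The information criteria combined in the AHP model share the form "minus twice the maximized log-likelihood plus a complexity penalty"; the sketch below uses made-up log-likelihood values to show how the criteria can disagree on the number of clusters, which is precisely what motivates combining them.

```python
import math

def aic(loglik, d):
    """Akaike's Information Criterion: -2L + 2d."""
    return -2.0 * loglik + 2.0 * d

def bic(loglik, d, n):
    """Bayesian Information Criterion: -2L + d ln(n)."""
    return -2.0 * loglik + d * math.log(n)

def kic(loglik, d):
    """Kullback Information Criterion: -2L + 3(d + 1)."""
    return -2.0 * loglik + 3.0 * (d + 1)

n = 200
# (k, maximized log-likelihood, free-parameter count) for candidate k;
# the log-likelihood values are hypothetical, not fitted to real data.
candidates = [(1, -520.0, 2), (2, -470.0, 5), (3, -466.0, 8), (4, -464.0, 11)]

best_aic = min(candidates, key=lambda c: aic(c[1], c[2]))[0]
best_bic = min(candidates, key=lambda c: bic(c[1], c[2], n))[0]
best_kic = min(candidates, key=lambda c: kic(c[1], c[2]))[0]
```

Here AIC's lighter penalty selects k = 3 while BIC and KIC select k = 2: a disagreement an AHP-style aggregation has to resolve.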

]]>Entropy doi: 10.3390/e19090451

Authors: Giuseppe Pica Eugenio Piasini Daniel Chicharro Stefano Panzeri

In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to describe the relationships among all PID lattices. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system’s entropy, hence it characterizes completely all the interdependencies in the system.

]]>Entropy doi: 10.3390/e19090450

Authors: Joo Ho Choi Un Gu Kang Byung Mun Lee

There has been a growing interest in sleep management recently, and sleep care services using mobile or wearable devices are under development. However, devices with one sensor have limitations in analyzing various sleep states. If Internet of Things (IoT) technology, which collects information from multiple sensors and analyzes it in an integrated manner, can be used, then various sleep states can be measured more accurately. Therefore, in this paper, we propose a Smart Model for Sleep Care to provide a service that measures and analyzes the sleep state using various sensors. In this model, we designed and implemented a Sleep Information Gathering Protocol to transmit the information measured between physical sensors and sleep sensors. Experiments were conducted to compare the throughput and power consumption of this new protocol with those of the protocols used in the existing service: we achieved about twice the throughput and a 20% reduction in power consumption, which confirms the effectiveness of the proposed protocol. We judge that this protocol is meaningful, as it can be applied to a Smart Model for Sleep Care that incorporates IoT technology and allows expanded sleep care if used together with services for treating sleep disorders.

]]>Entropy doi: 10.3390/e19090449

Authors: Shingo Kukita Yasusada Nambu

We consider entanglement harvesting in de Sitter space using a model of multiple qubit detectors. We obtain a formula for the entanglement negativity of this system. Applying the obtained formula, we find that it is possible to access the entanglement on super-horizon scales if a sufficiently large number of detectors is prepared. This result indicates that the effect of multipartite entanglement is crucial for the detection of large-scale entanglement in de Sitter space.

]]>Entropy doi: 10.3390/e19090448

Authors: Sebastian Ion Ceptureanu Eduard Gabriel Ceptureanu Irinel Marin

This paper investigates the effects of strategic choice on organizational performance for Romanian family-owned Small and Medium-sized Enterprises (SMEs). Using an adapted Jacquemin–Berry entropy index for both product and international diversification, and using a regression model, our study discusses family involvement as a moderating factor in organizational performance assessment. We discovered that there are multiple interactions between strategic choice and organizational performance, while family involvement fails to play a significant role in moderating these interactions.

]]>Entropy doi: 10.3390/e19090447

Authors: Verónica Barroso-García Gonzalo Gutiérrez-Tobal Leila Kheirandish-Gozal Daniel Álvarez Fernando Vaquerizo-Villar Andrea Crespo Félix del Campo David Gozal Roberto Hornero

The aim of this paper is to evaluate the evolution of the irregularity and variability of airflow (AF) signals as sleep apnoea-hypopnoea syndrome (SAHS) severity increases in children. We analyzed 501 AF recordings from children 6.2 ± 3.4 years old. The respiratory rate variability (RRV) signal, which is obtained from AF, was also estimated. The proposed methodology consisted of three phases: (i) extraction of spectral entropy (SE1), quadratic spectral entropy (SE2), cubic spectral entropy (SE3), and the central tendency measure (CTM) to quantify the irregularity and variability of AF and RRV; (ii) feature selection with forward stepwise logistic regression (FSLR); and (iii) classification of subjects using logistic regression (LR). SE1, SE2, SE3, and CTM were used to conduct exploratory analyses that showed increasing irregularity and decreasing variability in AF, and increasing variability in RRV, as the apnoea-hypopnoea index (AHI) grew higher. These tendencies were clearer in children with a higher severity degree (from AHI ≥ 5 events/hour). Binary LR models achieved 60%, 76%, and 80% accuracy for the AHI cutoff points 1, 5, and 10 events/hour, respectively. These results suggest that irregularity and variability measures are able to characterize paediatric SAHS in AF recordings. Hence, the use of these approaches could be helpful in automatically detecting SAHS in children.
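
The irregularity and variability measures above can be sketched as follows. Reading "quadratic/cubic spectral entropy" as the entropy of the power spectrum raised to the q-th power is an assumption on my part, and the toy signals are hypothetical stand-ins for airflow and RRV recordings.

```python
import numpy as np

def spectral_entropy(x, q=1):
    """Normalized Shannon entropy of the power spectrum raised to power q
    (q = 1, 2, 3 standing in for SE1, SE2, SE3)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd[1:]                        # drop the DC component
    p = psd ** q
    p = p / p.sum()                      # spectral "probability" distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(psd))

def ctm(x, rho):
    """Central tendency measure: fraction of successive-difference pairs
    falling inside a radius-rho circle (higher = lower variability)."""
    d = np.diff(x)
    return np.mean(np.sqrt(d[:-1] ** 2 + d[1:] ** 2) < rho)

rng = np.random.default_rng(1)
t = np.arange(1024) / 100.0
periodic = np.sin(2 * np.pi * 0.3 * t)   # regular, breathing-like signal
irregular = rng.standard_normal(1024)    # white noise: maximally irregular

se_periodic = spectral_entropy(periodic)
se_irregular = spectral_entropy(irregular)
```

A regular signal concentrates its spectrum in few bins (low spectral entropy, high CTM), while apnoeic irregularity flattens the spectrum and scatters the difference plot.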

]]>Entropy doi: 10.3390/e19090446

Authors: Yingbai Xie Kai Zhong

The performance of a thermodynamic cycle depends on the properties of its refrigerants. The performance of a Vuilleumier (VM) cycle heat pump adopting mixture refrigerants is analyzed in MATLAB using REFPROP. At given operating parameters and configuration, the performances of the VM cycle adopting the pure refrigerants H2, He or N2 are compared. Thermodynamic properties of four mixture types, namely He-H2, He-N2, H2-N2 and He-H2-N2, are obtained for a total of 16 mixing ratios, and the coefficient of performance and the exergy efficiency of these four mixture types in the VM cycle heat pump are calculated. The results indicate that, for heat source temperatures of 400–1000 K, helium is the best choice of pure refrigerant for the VM cycle heat pump. The He-H2 mixture is the best among all binary refrigerant mixtures; the recommended proportion is 1:2. For the ternary refrigerant mixture, the suggested proportion of helium, hydrogen and nitrogen is 2:2:1. For these recommended mixtures, system COPs (coefficients of performance) are close to 3.3 and exergy efficiencies are about 0.2, which are close to those of pure helium.

]]>Entropy doi: 10.3390/e19090445

Authors: Stefano Iubini Stefano Lepri Roberto Livi Gian-Luca Oppo Antonio Politi

We numerically investigate out-of-equilibrium stationary processes emerging in a Discrete Nonlinear Schrödinger chain in contact with a heat reservoir (a bath) at temperature T_L and a pure dissipator (a sink) acting on opposite edges. Long-time molecular-dynamics simulations are performed by evolving the equations of motion within a symplectic integration scheme. Mass and energy are steadily transported through the chain from the heat bath to the sink. We observe two different regimes. For small heat-bath temperatures T_L and chemical potentials, temperature profiles across the chain display a non-monotonic shape, remain remarkably smooth, and even enter the region of negative absolute temperatures. For larger temperatures T_L, the transport of energy is strongly inhibited by the spontaneous emergence of discrete breathers, which act as a thermal wall. A strongly intermittent energy flux is also observed, due to the irregular birth and death of breathers. The corresponding statistics exhibit the typical signature of rare events in processes with large deviations. In particular, the breather lifetime is found to be ruled by a stretched-exponential law.

]]>Entropy doi: 10.3390/e19090444

Authors: Ellis Scharfenaker Duncan Foley

Social science addresses systems in which the individual actions of participants interacting in complex, non-additive ways through institutional structures determine social outcomes. In many cases, the institutions incorporate enough negative feedback to stabilize the resulting outcome as an equilibrium. We study a particular type of such equilibria, quantal response statistical equilibrium (QRSE), using the tools of constrained maximum entropy modeling developed by E. T. Jaynes. We use Adam Smith’s theory of profit rate maximization through competition of freely mobile capitals as an example. Even in many cases where key model variables are unobserved, it is possible to infer the parameters characterizing the equilibrium through Bayesian methods. We apply this method to the Smithian theory of competition using data where firms’ profit rates are observed but the entry and exit decisions that determine the distribution of profit rates are unobserved, and confirm Smith’s prediction of the emergence of an average rate of profit, along with a characterization of the equilibrium statistical fluctuations of individual rates of profit.

]]>Entropy doi: 10.3390/e19090440

Authors: Konstantin Zhukovsky

Heat propagation in the Guyer–Krumhansl model is studied. The exact analytical solutions for the one-dimensional Guyer–Krumhansl equation are obtained. The operational formalism is employed. Some examples of initial functions are considered, modeling various initial heat pulses and distributions. The effect of the ballistic heat transfer in an over-diffusive regime is elucidated. The behavior of the solutions in such a regime is explored. The maximum principle and its violation for the obtained solutions are discussed in the framework of heat conduction. Examples of negative solutions for the Guyer–Krumhansl equation are demonstrated.

]]>Entropy doi: 10.3390/e19090340

Authors: Arefeh Kazemi Antonio Toral Andy Way Amirhassan Monadjemi Mohammadali Nematbakhsh

Reordering is one of the most important factors affecting the quality of the output in statistical machine translation (SMT). A considerable number of the approaches proposed to address the reordering problem are discriminative reordering models (DRMs). The core component of a DRM is a classifier that tries to predict the correct word order of the sentence. Unfortunately, the relationship between classification quality and ultimate SMT performance has not been investigated to date. Understanding this relationship will allow researchers to select the classifier that results in the best possible MT quality. It might be assumed that there is a monotonic relationship between classification quality and SMT performance, i.e., any improvement in classification performance will be monotonically reflected in overall SMT quality. In this paper, we experimentally show that this assumption does not always hold, i.e., an improvement in classification performance might actually degrade the quality of an SMT system, from the point of view of MT automatic evaluation metrics. However, we show that if the improvement in the classification performance is high enough, we can expect the SMT quality to improve as well. In addition to this, we show that there is a negative relationship between classification accuracy and SMT performance in imbalanced parallel corpora. For these types of corpora, we provide evidence that, for the evaluation of the classifier, macro-averaged metrics such as macro-averaged F-measure are better suited than accuracy, the metric commonly used to date.
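
The contrast between accuracy and macro-averaged F-measure on imbalanced data can be illustrated with a toy example (hypothetical labels, not from any parallel corpus): an always-majority classifier scores high accuracy but low macro-F1.

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    scores = []
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# 90 majority ("mono") vs 10 minority ("swap") reordering decisions
y_true = ["mono"] * 90 + ["swap"] * 10
majority = ["mono"] * 100                      # always-majority baseline
# A classifier that actually finds most minority cases:
balanced = ["mono"] * 85 + ["swap"] * 5 + ["swap"] * 7 + ["mono"] * 3
```

The two classifiers are nearly indistinguishable by accuracy (0.90 vs 0.92), yet macro-F1 clearly prefers the one that handles the rare reordering class.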

]]>Entropy doi: 10.3390/e19090439

Authors: Haikun Shang Kwok Lo Feng Li

Partial Discharge (PD) pattern recognition plays an important part in electrical equipment fault diagnosis and maintenance. Feature extraction can greatly affect recognition results. Traditional PD feature extraction methods suffer from high-dimensional calculation and signal attenuation. In this study, a novel feature extraction method based on Ensemble Empirical Mode Decomposition (EEMD) and Sample Entropy (SamEn) is proposed. In order to reduce the influence of noise, a wavelet method is applied to PD de-noising, with the Noise Rejection Ratio (NRR) and Mean Square Error (MSE) adopted as the de-noising indexes. With EEMD, the de-noised signal is decomposed into a finite number of Intrinsic Mode Functions (IMFs). The IMFs that contain the dominant information of the PD are selected using a correlation coefficient method, and the SamEn values of the selected IMFs are then extracted as PD features. Finally, a Relevance Vector Machine (RVM) is utilized for pattern recognition using the extracted features. Experimental results demonstrate that the proposed method combines excellent properties of both EEMD and SamEn. The recognition results are encouraging, with satisfactory accuracy.

]]>Entropy doi: 10.3390/e19090436

Authors: Luca Lusanna

Till now, kinetic theory and statistical mechanics of either free or interacting point particles were well defined only in non-relativistic inertial frames in the absence of the long-range inertial forces present in accelerated frames. As shown in the introductory review at the relativistic level, only a relativistic kinetic theory of “world-lines” in inertial frames was known till recently due to the problem of the elimination of the relative times. The recent Wigner-covariant formulation of relativistic classical and quantum mechanics of point particles required by the theory of relativistic bound states, with the elimination of the problem of relative times and with a clarification of the notion of the relativistic center of mass, allows one to give a definition of the distribution function of the relativistic micro-canonical ensemble in terms of the generators of the Poincaré algebra of a system of interacting particles both in inertial and in non-inertial rest frames. The non-relativistic limit allows one to get the ensemble in non-relativistic non-inertial frames. Assuming the existence of a relativistic Gibbs ensemble, also a “Lorentz-scalar micro-canonical temperature” can be defined. If the forces between the particles are short range in inertial frames, the notion of equilibrium can be extended from them to the non-inertial rest frames, and it is possible to go to the thermodynamic limit and to define a relativistic canonical temperature and a relativistic canonical ensemble. Finally, assuming that a Lorentz-scalar one-particle distribution function can be defined with a statistical average, an indication is given of which are the difficulties in solving the open problem of deriving the relativistic Boltzmann equation with the same methodology used in the non-relativistic case instead of postulating it as is usually done. 
There are also some comments on how it would be possible to have a hydrodynamical description of the relativistic kinetic theory of an isolated fluid in local equilibrium by means of an effective relativistic dissipative fluid described in the Wigner-covariant framework.

]]>Entropy doi: 10.3390/e19090438

Authors: Anton Bardera Roger Bramon Marc Ruiz Imma Boada

How to extract relevant information from large data sets has become a main challenge in data visualization. Clustering techniques that classify data into groups according to similarity metrics are a suitable strategy to tackle this problem. Generally, these techniques are applied in the data space as an independent step previous to visualization. In this paper, we propose clustering on the perceptual space by maximizing the mutual information between the original data and the final visualization. With this purpose, we present a new information-theoretic framework based on rate-distortion theory that allows us to achieve maximally compressed data with minimal signal distortion. Using this framework, we propose a methodology to design a visualization process that minimizes the information loss during the clustering process. Three application examples of the proposed methodology in different visualization techniques such as scatterplots, parallel coordinates, and summary trees are presented.

]]>Entropy doi: 10.3390/e19090437

Authors: Timothy Graves Robert Gramacy Nicholas Watkins Christian Franzke

Long memory plays an important role in many fields by determining the behaviour and predictability of systems; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test if a process is exhibiting long memory since that impacts the accuracy and confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here, we discuss the original motivation of the development of long memory and Mandelbrot’s influence on this fascinating field. We will also elucidate the sometimes contrasting approaches to long memory in different scientific communities.

]]>Entropy doi: 10.3390/e19090441

Authors: Hieu Do Tobias Oechtering Mikael Skoglund Mai Vu

This paper introduces and studies a model in which two relay channels interfere with each other. Motivated by practical scenarios in heterogeneous wireless access networks, each relay is assumed to be connected to its intended receiver through a digital link with finite capacity. Inner and outer bounds for achievable rates are derived and shown to be tight for new discrete memoryless classes, which generalize and unify several known cases involving interference and relay channels. Capacity region and sum capacity for multiple Gaussian scenarios are also characterized to within a constant gap. The results show the optimality or near-optimality of the quantize-bin-and-forward coding scheme for practically relevant relay-interference networks, which brings important engineering insight into the design of wireless communications systems.

]]>Entropy doi: 10.3390/e19090435

Authors: José de Barros Federico Holik Décio Krause

It is well known that in quantum mechanics we cannot always define consistently properties that are context independent. Many approaches exist to describe contextual properties, such as Contextuality by Default (CbD), sheaf theory, topos theory, and non-standard or signed probabilities. In this paper, we propose a treatment of contextual properties that is specific to quantum mechanics, as it relies on the relationship between contextuality and indistinguishability. In particular, we propose that if we assume the ontological thesis that quantum particles or properties can be indistinguishable yet different, no contradiction arising from a Kochen–Specker-type argument appears: when we repeat an experiment, we are in reality performing an experiment measuring a property that is indistinguishable from the first, but not the same. We will discuss how the consequences of this move may help us understand quantum contextuality.

]]>Entropy doi: 10.3390/e19090434

Authors: Grégoire Nicolis Yannick De Decker

A stochastic thermodynamics of Brownian motion is set up in which state functions are expressed in terms of state variables through the same relations as in classical irreversible thermodynamics, with the difference that the state variables are now random fields accounting for the effect of fluctuations. Explicit expressions for the stochastic analog of entropy production and related quantities are derived for a dilute solution of Brownian particles in a fluid of light particles. Their statistical properties are analyzed and, in the light of the insights afforded, the thermodynamics of a single Brownian particle is revisited and the status of the second law of thermodynamics is discussed.

]]>Entropy doi: 10.3390/e19080433

Authors: Elisa Guelpa Vittorio Verda

Entropy generation is commonly applied to describe the evolution of irreversible processes, such as heat transfer and turbulence. These are both dominating phenomena in fire propagation. In this paper, entropy generation analysis is applied to a grassland fire event, with the aim of finding possible links between entropy generation and propagation directions. The ultimate goal of such an analysis is to help overcome possible limitations of the models usually applied to the prediction of wildfire propagation. These models are based on the superimposition of the effects due to wind and slope, which has proven to fail in various cases. The analysis presented here shows that entropy generation allows a detailed analysis of the landscape propagation of a fire and can thus be applied to its quantitative description.

]]>Entropy doi: 10.3390/e19080431

Authors: Shong-Loong Chen Chia-Pang Cheng

Traditional slope stability analysis uses the Factor of Safety (FS) from Limit Equilibrium Theory as the determinant: if the FS is greater than 1, the slope is considered “safe”, and the uncertainty of the variables or parameters in the analysis model is not considered. The objective of this research was to analyze the stability of a natural slope, in consideration of the characteristics of the rock layers and the variability of the pre-stressing force. Sensitivity and uncertainty analysis showed that the sensitivity to the pre-stressing force of the rock anchor was significantly smaller than that to the cohesion (c) of the rock layers and the friction angle (ϕ). In addition, immersion in water would weaken the rock layers of the natural slope: when the cohesion c was reduced to 6 kPa and the friction angle ϕ decreased below 14°, the slope began to show instability and failure as the FS became smaller than 1. The failure rate of the slope could be as high as 50%. By stabilizing with a rock anchor, the failure rate could be reduced to below 3%, greatly improving the stability and reliability of the slope.

]]>Entropy doi: 10.3390/e19080432

Authors: Yanyan Wang Yingsong Li Felix Albu Rui Yang

A group-constrained maximum correntropy criterion (GC-MCC) algorithm is proposed on the basis of the compressive sensing (CS) concept and zero attracting (ZA) techniques, and its estimation behavior is verified over sparse multi-path channels. The proposed algorithm is implemented by exerting different norm penalties on the two grouped channel coefficients to improve the channel estimation performance in a mixed noise environment. As a result, a zero attraction term is obtained from the expected l0 and l1 penalty techniques. Furthermore, a reweighting factor is adopted and incorporated into the zero-attraction term of the GC-MCC algorithm, which is denoted as the reweighted GC-MCC (RGC-MCC) algorithm, to enhance the estimation performance. Both the GC-MCC and RGC-MCC algorithms are developed to exploit the inherent sparseness properties of sparse multi-path channels through the expected zero-attraction terms in their iterations. The channel estimation behaviors are discussed and analyzed over sparse channels in mixed Gaussian noise environments. The computer simulation results show that the estimated steady-state error is smaller and the convergence is faster than those of the previously reported MCC and sparse MCC algorithms.
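
A generic zero-attracting MCC update conveys the idea (this is a plain ZA-MCC sketch with hypothetical parameters, not the authors' grouped-penalty implementation): the Gaussian kernel down-weights impulsive-noise samples, and the sign(w) term draws inactive taps toward zero.

```python
import numpy as np

def za_mcc(x, d, n_taps, mu=0.05, sigma=1.0, rho=1e-4):
    """Zero-attracting MCC adaptive filter (sketch)."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]             # regressor, newest first
        e = d[k] - w @ u                              # a-priori error
        kernel = np.exp(-e * e / (2.0 * sigma ** 2))  # correntropy weight
        w += mu * kernel * e * u - rho * np.sign(w)   # gradient + zero attractor
    return w

rng = np.random.default_rng(2)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]                               # sparse multi-path channel
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = za_mcc(x, d, 16)
```

After adaptation, the active taps are recovered while the zero attractor keeps the inactive taps small: the sparsity exploitation the abstract refers to.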

]]>Entropy doi: 10.3390/e19080427

Authors: Badr Albanna Christopher Hillar Jascha Sohl-Dickstein Michael DeWeese

Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. We provide upper and lower bounds on the entropy for the minimum entropy distribution over arbitrarily large collections of binary units with any fixed set of mean values and pairwise correlations. We also construct specific low-entropy distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size for any set of first- and second-order statistics consistent with arbitrarily large systems. We further demonstrate that some sets of these low-order statistics can only be realized by small systems. Our results show how only small amounts of randomness are needed to mimic low-order statistical properties of highly entropic distributions, and we discuss some applications for engineered and biological information transmission systems.
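The gap between minimum- and maximum-entropy distributions under fixed first- and second-order statistics can be seen by brute force in the smallest nontrivial case. The sketch below assumes three exchangeable binary units with mean m = 0.5 and pairwise expectation c = 0.3; these numbers are illustrative, not taken from the paper.

```python
import math

def h(p):
    """Contribution -p ln p of one configuration (0 if p = 0)."""
    return -p * math.log(p) if p > 0 else 0.0

# Three exchangeable binary units: p_k is the probability of each specific
# configuration with k ones. Fixing the mean m = E[x_i] and the pairwise
# expectation c = E[x_i x_j] leaves one free parameter, taken here as p3.
m, c = 0.5, 0.3
entropies = []
steps = 2000
for t in range(steps + 1):
    p3 = 0.1 + (0.3 - 0.1) * t / steps     # feasible range for these m, c
    p2 = c - p3
    p1 = m - 2 * p2 - p3
    p0 = 1 - 3 * p1 - 3 * p2 - p3
    if min(p0, p1, p2, p3) < -1e-12:
        continue
    # entropy over all 8 configurations (3 configurations each for k = 1, 2)
    entropies.append(h(p0) + 3 * h(p1) + 3 * h(p2) + h(p3))
h_min, h_max = min(entropies), max(entropies)
```

Even for three units the minimum-entropy distribution (which zeroes out some configurations) sits well below the maximum-entropy one, the one-parameter analogue of the gap the paper bounds for arbitrarily large systems.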

]]>Entropy doi: 10.3390/e19080429

Authors: Dagmar Markechová Beloslav Riečan

In this contribution, we introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case, and study the basic properties of the suggested measures. Subsequently, by means of the suggested notion of logical entropy of an IF-partition, we define the logical entropy of an IF-dynamical system. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved.

]]>Entropy doi: 10.3390/e19080428

Authors: Yen-Ju Chu Chi-Feng Chang Jiann-Shing Shieh Wang-Tso Lee

Electroencephalography (EEG) is frequently used in the functional neurological assessment of children with neurological and neuropsychiatric disorders. Multiscale entropy (MSE) can reveal complexity on both short and long time scales and is well suited to the analysis of EEG. Entropy-based estimation of EEG complexity is a powerful tool for investigating the underlying disturbances of the neural networks of the brain. Most neurological and neuropsychiatric disorders in childhood affect the early stage of brain development. The analysis of EEG complexity may show the influences of different neurological and neuropsychiatric disorders on different regions of the brain during development. This article aims to give a brief summary of current concepts of MSE analysis in pediatric neurological and neuropsychiatric disorders. Studies utilizing MSE or its modifications for investigating neurological and neuropsychiatric disorders in children were reviewed. Abnormal EEG complexity was shown in a variety of childhood neurological and neuropsychiatric diseases, including autism, attention deficit/hyperactivity disorder, Tourette syndrome, and epilepsy in infancy and childhood. MSE has been shown to be a powerful method for analyzing the non-linear anomalies of EEG in childhood neurological diseases. Further studies are needed to establish its clinical implications for diagnosis, treatment, and outcome prediction.
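The two ingredients of MSE, coarse-graining followed by sample entropy at each scale, can be sketched as follows. The parameter choices (m = 2, tolerance r = 0.2 of the signal's standard deviation) follow common practice, and the white-noise input is only a stand-in for an EEG channel.

```python
import math, random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln of the ratio of (m+1)-point to m-point template
    matches (Chebyshev distance < r, self-matches excluded)."""
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) < r)
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(x, scale):
    """Non-overlapping window means: the coarse-graining step of MSE."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

random.seed(4)
x = [random.gauss(0, 1) for _ in range(600)]
r = 0.2 * (sum(v * v for v in x) / len(x)) ** 0.5   # 0.2 x std of the raw signal
mse_curve = [sample_entropy(coarse_grain(x, s), r=r) for s in (1, 2, 4)]
```

For uncorrelated noise the curve decreases with scale, which is exactly the signature MSE exploits to distinguish noise-like from genuinely complex signals.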

]]>Entropy doi: 10.3390/e19080426

Authors: Liang-Fang Ni Yi Wang Wei-Xia Li Pei-Zhen Wang Jia-Yan Zhang

An iterative QR-based soft feedback segment interference cancellation (QRSFSIC) detection and decoding algorithm for a Reed–Muller (RM) space-time turbo system is proposed in this paper. It forms the sufficient statistic for the minimum mean-square error (MMSE) estimate according to QR-decomposition-based soft feedback successive interference cancellation, stemming from the a priori log-likelihood ratios (LLRs) of the encoded bits. Then, the signal originating from the symbols of the reliable segment, i.e., those whose reliability metric, an a posteriori LLR of the encoded bits, exceeds a certain threshold, is iteratively cancelled by the QRSFSIC in order to obtain the residual signal for evaluating the symbols in the unreliable segment. This is repeated until the unreliable segment is empty, yielding the extrinsic information for each RM turbo-coded bit with the greatest likelihood. Bridged by de-multiplexing and multiplexing, the iterative QRSFSIC detector is concatenated with an iterative trellis-based maximum a posteriori probability RM turbo decoder, as if a principal turbo detector and decoder were embedded with a subordinate iterative QRSFSIC detector and RM turbo decoder, the two exchanging soft-decision detection and decoding information iteratively. These three stages let the proposed algorithm approach the upper bound of the diversity. The simulation results also show that the proposed scheme outperforms the other suboptimum detectors considered in this paper.

]]>Entropy doi: 10.3390/e19080425

Authors: Song Xu Yang Li Tingwen Huang Rosa Chan

Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we have formulated a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected by using a group least absolute shrinkage and selection operator (LASSO) method, which can capture the sparsity within the neural system. Second, a multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm, aided by mutual information, is utilized to rapidly capture the time-varying characteristics from the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model outperforms the initial full model and state-of-the-art modeling methods in tracking performance for various time-varying kernels. Analyses of experimental data show that the proposed sMGLV model can capture the timing of transient changes accurately. The proposed framework will be useful for studying how, when, and where information transmission across brain regions evolves during behavior.

]]>Entropy doi: 10.3390/e19080424

Authors: Jiarong Shi Xiuyun Zheng Wei Yang

Low-rank matrix factorizations such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) are a large class of methods for pursuing the low-rank approximation of a given data matrix. The conventional factorization models are based on the assumption that the data matrices are contaminated stochastically by some type of noise, so point estimates of the low-rank components can be obtained by Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimation. In the past decade, a variety of probabilistic models of low-rank matrix factorizations have emerged. The most significant difference between low-rank matrix factorizations and their corresponding probabilistic models is that the latter treat the low-rank components as random variables. This paper surveys the probabilistic models of low-rank matrix factorizations. Firstly, we review some probability distributions commonly used in probabilistic models of low-rank matrix factorizations and introduce the conjugate priors of some of these distributions to simplify Bayesian inference. Then we present the two main inference methods for probabilistic low-rank matrix factorizations, i.e., Gibbs sampling and variational Bayesian inference. Next, we roughly classify the important probabilistic models of low-rank matrix factorizations into several categories and review each category in turn. The categories are organized according to the matrix factorization formulation, mainly PCA, matrix factorization, robust PCA, NMF and tensor factorizations. Finally, we discuss research issues that need to be studied in the future.
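As a reminder of the deterministic point estimation that these probabilistic models generalize, the best rank-1 approximation of a data matrix (the leading SVD term, on which PCA is built) can be computed by simple power iteration. This is a minimal plain-Python sketch on a toy matrix, not one of the surveyed probabilistic methods; real code would use a linear algebra library.

```python
def rank1_approx(A, iters=100):
    """Best rank-1 approximation A ~ s * u * v^T via power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # A v
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]   # A^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]                                       # renormalize
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    s = sum(x * x for x in u) ** 0.5                                    # singular value
    u = [x / s for x in u]
    return s, u, v

# Toy data matrix that is exactly rank 1: A[i][j] = a[i] * b[j]
A = [[3.0, 1.0, 2.0],
     [6.0, 2.0, 4.0]]
s, u, v = rank1_approx(A)
err = max(abs(A[i][j] - s * u[i] * v[j]) for i in range(2) for j in range(3))
```

A probabilistic model would instead place priors on u and v and infer their posterior (e.g., by Gibbs sampling), rather than returning this single point estimate.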

]]>Entropy doi: 10.3390/e19080423

Authors: Shengwei Huang Chengzhou Li Tianyu Tan Peng Fu Gang Xu Yongping Yang

In this paper, an improved system to efficiently utilize the low-temperature waste heat from the flue gas of coal-fired power plants is proposed based on heat cascade theory. The essence of the proposed system is that the waste heat of exhausted flue gas is not only used to preheat air for assisting coal combustion as usual but also to heat up feedwater and for low-pressure steam extraction. Air preheating is performed by both the exhaust flue gas in the boiler island and the low-pressure steam extraction in the turbine island; thereby part of the flue gas heat originally exchanged in the air preheater can be saved and introduced to heat the feedwater and the high-temperature condensed water. Consequently, part of the high-pressure steam is saved for further expansion in the steam turbine, which results in additional net power output. Based on the design data of a typical 1000 MW ultra-supercritical coal-fired power plant in China, an in-depth analysis of the energy-saving characteristics of the improved waste heat utilization system (WHUS) and the conventional WHUS is conducted. When the improved WHUS is adopted in a typical 1000 MW unit, net power output increases by 19.51 MW, exergy efficiency improves to 45.46%, and net annual revenue reaches USD 4.741 million while for the conventional WHUS, these performance parameters are 5.83 MW, 44.80% and USD 1.244 million, respectively. The research described in this paper provides a feasible energy-saving option for coal-fired power plants.

]]>Entropy doi: 10.3390/e19080419

Authors: Jiawen Deng Juan Jaramillo Peter Hänggi Jiangbin Gong

The well-known Jarzynski equality, often written in the form e^(−βΔF) = ⟨e^(−βW)⟩, provides a non-equilibrium means to measure the free energy difference ΔF of a system at the same inverse temperature β based on an ensemble average of the non-equilibrium work W. The accuracy of Jarzynski's measurement scheme is determined by the variance of the exponential work, denoted var(e^(−βW)). However, it was recently found that var(e^(−βW)) can systematically diverge in both the classical and the quantum case. Such divergence necessarily poses a challenge to applications of the Jarzynski equality because it may dramatically reduce the efficiency of determining ΔF. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging var(e^(−βW)). The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures and may still work efficiently subject to a diverging var(e^(−βW)). The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations; if so, there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of a time-dependent driven harmonic oscillator and provide insights into the essential performance differences between the classical and quantum Jarzynski equalities.
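In the well-behaved regime where var(e^(−βW)) is finite, the plain Jarzynski estimator is easy to check numerically. The sketch below assumes a toy Gaussian work distribution, for which ΔF = μ − βσ²/2 holds in closed form; it illustrates the original equality, not the paper's deformed version.

```python
import math, random

random.seed(1)
beta = 1.0
mu_w, sigma_w = 2.0, 1.0            # toy Gaussian work distribution W ~ N(mu, sigma^2)
# For Gaussian W, <exp(-beta*W)> = exp(-beta*mu + beta^2*sigma^2/2), so the
# Jarzynski estimate -ln<exp(-beta*W)>/beta should converge to mu - beta*sigma^2/2.
dF_exact = mu_w - beta * sigma_w ** 2 / 2
samples = [random.gauss(mu_w, sigma_w) for _ in range(200_000)]
avg = sum(math.exp(-beta * w) for w in samples) / len(samples)
dF_est = -math.log(avg) / beta
```

When var(e^(−βW)) diverges (e.g., for heavier-tailed work distributions), this average is dominated by rare samples and converges very slowly, which is precisely the problem the deformed equality is designed to sidestep.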

]]>Entropy doi: 10.3390/e19080420

Authors: Junqing Zhang Trung Duong Roger Woods Alan Marshall

The security of the Internet of Things (IoT) is receiving considerable interest, as the low power and complexity constraints of many IoT devices limit the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically key generation and physical layer encryption. These schemes are lightweight and practical to implement, and thus offer effective solutions for IoT wireless security. Future research to make IoT-based physical layer security more robust and pervasive is also covered.
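The key-generation idea, two parties quantizing their reciprocal channel measurements into matching bit strings while an eavesdropper observes an independent channel, can be sketched in a few lines. This toy model assumes Gaussian fading samples and median-threshold quantization, and omits the information reconciliation and privacy amplification steps a real scheme needs.

```python
import random, statistics

random.seed(3)
n = 512
channel = [random.gauss(0, 1) for _ in range(n)]      # reciprocal fading channel
alice = [h + random.gauss(0, 0.2) for h in channel]   # Alice's noisy probe
bob = [h + random.gauss(0, 0.2) for h in channel]     # Bob's noisy probe
eve = [random.gauss(0, 1) for _ in range(n)]          # Eve sees an independent channel

def quantize(v):
    """One bit per sample: above or below the median of the observation."""
    med = statistics.median(v)
    return [1 if x > med else 0 for x in v]

ka, kb, ke = quantize(alice), quantize(bob), quantize(eve)
agree_ab = sum(a == b for a, b in zip(ka, kb)) / n    # legitimate key agreement
agree_ae = sum(a == e for a, e in zip(ka, ke)) / n    # eavesdropper's agreement
```

Channel reciprocity makes Alice's and Bob's bit strings mostly agree, while Eve's decorrelated observations leave her near a 50% guess rate, which is the information-theoretic basis of these schemes.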

]]>Entropy doi: 10.3390/e19080422

Authors: Brendon Brewer

The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario.
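When the density can at least be evaluated, the basic Monte Carlo entropy estimate H = −E[log p(x)] is straightforward, as the sketch below checks against the analytic Gaussian entropy. The harder setting, where only samples are available and log p(x) cannot be evaluated, is what the paper's Nested Sampling machinery addresses.

```python
import math, random

random.seed(2)
sigma = 1.5
# Analytic differential entropy of N(0, sigma^2), in nats
h_exact = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def log_pdf(x):
    """Log density of N(0, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - x * x / (2 * sigma ** 2)

# Monte Carlo estimate: H = -E[log p(x)] under x ~ p
samples = [random.gauss(0, sigma) for _ in range(100_000)]
h_est = -sum(log_pdf(x) for x in samples) / len(samples)
```

The estimator converges at the usual 1/sqrt(N) Monte Carlo rate; without access to log_pdf, one must instead estimate the density or, as in the paper, recast the problem so Nested Sampling can do the work.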

]]>Entropy doi: 10.3390/e19080421

Authors: Qing Li Steven Liang

The periodic transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on an impulse-step dictionary and re-weighted minimizing nonconvex penalty Lq regularization (R-WMNPLq, q = 0.5) for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Owing to the physical mechanism of periodic double impacts, comprising step-like and impulse-like impacts, an impulse-step impact dictionary atom can be designed to match the natural waveform structure of vibration signals. On the other hand, traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP) and L1-norm regularization treat all vibration signal values equally, ignoring the fact that the vibration peak values may carry more useful information about the periodic transient impulses and should be preserved with a larger weight. Therefore, penalty and smoothing parameters are introduced into the reconstruction model to guarantee a reasonable distribution consistence of the peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves a more noticeable and higher diagnostic accuracy compared with OMP, L1-norm regularization and the traditional spectral Kurtogram (SK) method.

]]>Entropy doi: 10.3390/e19080279

Authors: Arshad Khan Kashif Ali Abro Asifa Tassaddiq Ilyas Khan

This communication addresses a comparison of newly presented non-integer order derivatives with and without singular kernel, namely the Michele Caputo–Mauro Fabrizio (CF) derivative CF(∂^β/∂t^β) and the Atangana–Baleanu (AB) derivative AB(∂^α/∂t^α). For this purpose, second-grade fluid flow with combined gradients of mass concentration and temperature distribution over a vertical flat plate is considered. The problem is first written in non-dimensional form and then, based on the AB and CF fractional derivatives, developed in fractional form; using the Laplace transform technique, exact solutions are established for both the AB and CF cases. They are then expressed in terms of the newly defined M-function M_q^p(z) and the generalized hypergeometric function pΨq(z). The obtained exact solutions are plotted graphically for several pertinent parameters, and an interesting comparison is made between the AB and CF derivative results, with various similarities and differences.

]]>Entropy doi: 10.3390/e19080410

Authors: Sufen Wang Vijay Singh

The relationship between soil water content (SWC) and vegetation, topography, and climatic conditions is critical for developing effective agricultural water management practices and improving agricultural water use efficiency in arid areas. The purpose of this study was to determine how crop cover influences the spatial and temporal variation of soil water. SWC was measured under maize and wheat for two years in northwest China. Statistical methods and entropy analysis were applied to investigate the spatio-temporal variability of SWC and the interaction between SWC and its influencing factors. The SWC variability changed within the field plot, with the standard deviation reaching a maximum under intermediate mean SWC in different layers under various climatic, soil, and crop type conditions. The spatio-temporal distribution of SWC reflects the variability of precipitation and potential evapotranspiration (ET0) under different crop covers. The mutual entropy values between SWC and precipitation were similar across the two years under wheat cover but differed under maize cover; moreover, the mutual entropy values at different depths differed under different crop covers. The entropy values changed with SWC following an exponential trend. The informational correlation coefficient (R0) between SWC and precipitation was higher than that between SWC and the other factors at all soil depths. Precipitation was the dominant factor controlling the SWC variability, and the crop coefficient was the second most dominant factor. This study highlights that precipitation is a paramount factor for investigating the spatio-temporal variability of soil water content in northwest China.
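The mutual-entropy statistic used to link SWC with its drivers can be approximated by a simple plug-in estimator on binned data. The sketch below assumes equal-width binning and synthetic data in place of the field measurements; a strongly coupled pair stands in for SWC and precipitation, an independent series for an irrelevant factor.

```python
import math, random
from collections import Counter

def mutual_information(xs, ys, bins=5):
    """Plug-in mutual entropy (in nats) after equal-width binning."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((x - lo) / w), bins - 1) for x in v]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    # I(X;Y) = sum p(i,j) * log( p(i,j) / (p(i) p(j)) )
    return sum(c / n * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

random.seed(5)
driver = [random.gauss(0, 1) for _ in range(2000)]          # e.g., precipitation
coupled = [v + random.gauss(0, 0.3) for v in driver]        # SWC tracking the driver
unrelated = [random.gauss(0, 1) for _ in range(2000)]       # an irrelevant factor
mi_strong = mutual_information(driver, coupled)
mi_weak = mutual_information(driver, unrelated)
```

Ranking candidate drivers by such a statistic is how one would identify precipitation as the dominant factor; note the plug-in estimator carries a small positive bias on independent data, so the weak value is near but not exactly zero.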

]]>Entropy doi: 10.3390/e19080418

Authors: Stefano A. Lollai

In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs) and systems theories. A measure of entropy is proposed for QMSs, including a virtual document entropy and an entropy linked to processes and organisation. QMSs are also interpreted in light of Cybernetics, and interrelations between Information Theory and quality are also highlighted. A measure for the information content of quality documents is proposed. Such parameters can be used as adequacy indices for QMSs. From the discussed approach, suggestions for organising QMSs are also derived. Further interpretive thermodynamic-based criteria for QMSs are also proposed. The work represents the first attempt to treat quality organisational systems according to a thermodynamics-related approach. At this stage, no data are available to compare statements in the paper.

]]>Entropy doi: 10.3390/e19080416

Authors: Gokmen Demirkaya Ricardo Padilla Armando Fontalvo Maree Lake Yee Lim

This paper presents a theoretical investigation of a combined power and cooling cycle that employs an ammonia-water mixture. The cycle, known as the Goswami cycle, combines a Rankine cycle and an absorption refrigeration cycle, and can be used in a wide range of applications, including recovering waste heat as a bottoming cycle or generating power from non-conventional sources like solar radiation or geothermal energy. A thermodynamic study of power and cooling co-generation is presented for heat source temperatures between 100 and 350 °C. A comprehensive analysis of the effect of several operation and configuration parameters, including the number of turbine stages and different superheating configurations, on the power output and the thermal and exergy efficiencies was conducted. Results showed the Goswami cycle can operate at an effective exergy efficiency of 60–80%, with thermal efficiencies between 25% and 31%. The investigation also showed that multiple-stage turbines performed better than single-stage turbines in terms of power and of thermal and exergy efficiencies when heat source temperatures remained above 200 °C, whereas the effect of the number of turbine stages was almost the same below 175 °C. For multiple turbine stages, partial superheating with a single or double reheat stream showed better performance in terms of efficiency, but also an increase in exergy destruction as the heat source temperature increased.

]]>Entropy doi: 10.3390/e19080417

Authors: Arthur Sousa Hideki Takayasu Didier Sornette Misako Takayasu

We introduce a simple growth model in which the sizes of entities evolve as multiplicative random processes that start at different times. A novel aspect we examine is the dependence among entities. For this, we consider three classes of dependence between growth factors governing the evolution of sizes: independence, Kesten dependence and mixed dependence. We take the sum X of the sizes of the entities as the representative quantity of the system, which has the structure of a sum of product terms (Sigma-Pi), whose asymptotic distribution function has a power-law tail behavior. We present evidence that the dependence type does not alter the asymptotic power-law tail behavior, nor the value of the tail exponent. However, the structure of the large values of the sum X is found to vary with the dependence between the growth factors (and thus the entities). In particular, for the independence case, we find that the large values of X are contributed by a single maximum size entity: the asymptotic power-law tail is the result of such single contribution to the sum, with this maximum contributing entity changing stochastically with time and with realizations.
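The single-entity dominance of the tail in the independent case can be probed with a few lines of simulation. The sketch assumes lognormal growth factors with a slightly negative drift (illustrative values, not the paper's), and compares how much of the sum X the largest entity contributes in typical versus extreme realizations.

```python
import math, random

def realization(rng, n_entities=50, mu=-0.1, sigma=0.5):
    """Entity i enters at time i and grows multiplicatively thereafter;
    return the sum X of sizes and the share of X held by the largest entity."""
    sizes = []
    for age in range(1, n_entities + 1):
        log_size = sum(rng.gauss(mu, sigma) for _ in range(age))
        sizes.append(math.exp(log_size))
    x = sum(sizes)
    return x, max(sizes) / x

random.seed(6)
runs = sorted(realization(random) for _ in range(1000))     # sorted by X
share_all = sum(s for _, s in runs) / len(runs)
share_tail = sum(s for _, s in runs[-50:]) / 50             # top 5% of X
```

The largest realizations of X are much more strongly dominated by a single entity than typical ones, which is the mechanism behind the power-law tail described in the independence case.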

]]>Entropy doi: 10.3390/e19080413

Authors: Adrian Arellano-Delgado Rosa López-Gutiérrez Miguel Murillo-Escobar Liliana Cardoza-Avendaño César Cruz-Hernández

In this paper, the emergence of hyperchaos in a network with two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets, menstrual cycles of women, among others. Nevertheless, the emergence of hyperchaos in this kind of real-life network has not been proven. In particular, we focus this study on the emergence of hyperchaotic dynamics, considering that these can be mainly used in engineering applications such as cryptography, secure communications, biometric systems, telemedicine, among others. In order to corroborate that the emerging dynamics are hyperchaotic, some chaos and hyperchaos verification tests are conducted. In addition, the presented hyperchaotic coupled system synchronizes, based on the proposed coupling scheme.

]]>Entropy doi: 10.3390/e19080415

Authors: Arkadiusz Jędrzejewski Katarzyna Sznajd-Weron

We study the q-voter model driven by stochastic noise arising from one out of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person approach with the quenched disorder and the situation approach with the annealed disorder, and investigate how these two approaches influence order–disorder phase transitions observed in the q-voter model with noise. We show that under a quenched disorder, differences between models with independence and anticonformity are weaker and only quantitative. In contrast, annealing has a much more profound impact on the system and leads to qualitative differences between models on a macroscopic level. Furthermore, only under an annealed disorder may the discontinuous phase transitions appear. It seems that freezing the agents’ behavior at the beginning of simulation—introducing quenched disorder—supports second-order phase transitions, whereas allowing agents to reverse their attitude in time—incorporating annealed disorder—supports discontinuous ones. We show that anticonformity is insensitive to the type of disorder, and in all cases it gives the same result. We precede our study with a short insight from statistical physics into annealed vs. quenched disorder and a brief review of these two approaches in models of opinion dynamics.
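A minimal annealed-disorder simulation of the q-voter model with independence shows the order-to-disorder effect of the noise. The complete-graph setup, system size, and noise levels below are illustrative choices for the sketch, not those of the paper.

```python
import random

def qvoter_magnetization(p, N=200, q=4, sweeps=400, measure=100, seed=7):
    """q-voter with annealed independence: at each update the chosen agent
    acts independently with probability p, otherwise copies a unanimous
    q-panel of randomly chosen agents (complete graph)."""
    rng = random.Random(seed)
    s = [1] * N                                    # start fully ordered
    m_acc, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.randrange(N)
            if rng.random() < p:                   # independence (noise)
                s[i] = rng.choice((-1, 1))
            else:                                  # conformity to a unanimous panel
                panel = [s[rng.randrange(N)] for _ in range(q)]
                if abs(sum(panel)) == q:
                    s[i] = panel[0]
        if sweep >= sweeps - measure:              # time-average |m| at the end
            m_acc += abs(sum(s)) / N
            samples += 1
    return m_acc / samples

m_ordered = qvoter_magnetization(p=0.02)           # weak noise: consensus survives
m_disordered = qvoter_magnetization(p=0.5)         # strong noise: order destroyed
```

Sweeping p between these extremes traces out the order-disorder transition; distinguishing continuous from discontinuous transitions, and annealed from quenched disorder, is where the paper's analysis comes in.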

]]>Entropy doi: 10.3390/e19080414

Authors: Mohammad Mehdi Rashidi Munawwar Ali Abbas

This article describes the impact of slip conditions on nanofluid flow over a stretching sheet. Nanofluids are very helpful for enhancing convective heat transfer in a boundary layer flow, and the Prandtl number also plays a major role in controlling the thermal and momentum boundary layers. For this purpose, we consider a model for the effective Prandtl number, derived from experimental analysis of a nano boundary layer, for steady, two-dimensional incompressible flow over a stretching sheet. We consider γAl2O3-H2O and Al2O3-C2H6O2 nanoparticles for the governing flow problem. An entropy generation analysis is also presented with the help of the second law of thermodynamics. A numerical technique known as the Successive Taylor Series Linearization Method (STSLM) is used to solve the governing nonlinear boundary layer equations. The numerical and graphical results are discussed for two cases, i.e., (i) with the effective Prandtl number and (ii) without it. From the graphical results, it is observed that the velocity and temperature profiles increase in the absence of the effective Prandtl number, while both become larger in its presence. Further, a numerical comparison is presented with previously published results to validate the current methodology and results.

]]>Entropy doi: 10.3390/e19080411

Authors: Vihan Patel Charles Lineweaver

The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists.

]]>Entropy doi: 10.3390/e19080412

Authors: Eduard Gabriel Ceptureanu Sebastian Ion Ceptureanu Doina Popescu

This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate entrepreneurship as an instrument to fulfil their objectives. Our study contributes to the limited body of empirical research on entropy in an organizational setting by examining the moderating effect of firms' organizational capabilities on the boundary conditions of this impact, and also to the development of Econophysics as a fast-growing interdisciplinary field.

]]>