Entropy doi: 10.3390/e20120965

Authors: Shumin Zheng Shaoqing Wang

The elastic properties of seventy different compositions were calculated to optimize the composition of the V–Mo–Nb–Ta–W system. A new model, the maximum entropy approach (MaxEnt), was adopted. The influence of each element was discussed. Molybdenum (Mo) and tungsten (W) are the key elements for maintaining elastic properties. The V–Mo–Nb–Ta–W system has relatively high values of C44, bulk modulus (B), shear modulus (G), and Young's modulus (E) at high concentrations of Mo + W. Elemental W is brittle and has a high density; thus, lower-density Mo can substitute for part of the W. Vanadium (V) has a low density and plays an important role in decreasing the brittleness of the V–Mo–Nb–Ta–W system. Niobium (Nb) and tantalum (Ta) have a relatively small influence on the elastic properties. The calculated results can serve as general guidance for composition selection in the V–Mo–Nb–Ta–W system.

Entropy doi: 10.3390/e20120964

Authors: Wenke Zang Zehua Wang Dong Jiang Xiyu Liu Zhenni Jiang

As a non-invasive diagnostic tool, Magnetic Resonance Imaging (MRI) has been widely used in the field of brain imaging. The classification of MRI brain image conditions poses challenges both technically and clinically, as MRI is primarily used for soft tissue anatomy and can generate large amounts of detailed information about the brain condition of a subject. To classify benign and malignant MRI brain images, we propose a new method. Discrete wavelet transform (DWT) is used to extract wavelet coefficients from the MRI images. Then, Tsallis entropy with parameters optimized by a DNA genetic algorithm (DNA-GA), termed DNAGA-TE, is used to obtain entropy features from the DWT coefficients. Finally, a DNA-GA-optimized support vector machine with a radial basis function (RBF) kernel, termed DNAGA-KSVM, is applied as the classifier. In our experiments, we use two kinds of images to validate the effectiveness of the algorithm: one from the Simulated Brain Database and the other comprising real MRI images downloaded from the Harvard Medical School website. Experimental results demonstrate that our method (DNAGA-TE+KSVM) achieves better classification accuracy.
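The Tsallis entropy at the core of the DNAGA-TE step has a simple closed form, S_q = (1 − Σ p_i^q)/(q − 1). A minimal sketch, assuming a normalized distribution p (in the paper this would come from the DWT coefficients) and leaving the choice of the entropic index q to the optimizer:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1).

    p is a discrete probability distribution; q is the entropic index
    (in the paper it would be tuned by the DNA-GA)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        # Shannon entropy is recovered in the limit q -> 1.
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))
```

For a uniform distribution over four outcomes, S_2 = (1 − 4·0.25²)/1 = 0.75, and as q approaches 1 the value approaches the Shannon entropy ln 4.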

Entropy doi: 10.3390/e20120963

Authors: Arvinder Kaur Deepti Chopra

Fault prediction is an important research area that aids software development and the maintenance process. It is a field that has continuously improved its approaches in order to reduce fault resolution time and effort. With the aim of contributing new approaches for fault prediction, this paper proposes Entropy Churn Metrics (ECM) based on History Complexity Metrics (HCM) and Churn of Source Code Metrics (CHU). The study also compares the performance of ECM with that of HCM. The performance of both metrics is compared for 14 subsystems of 5 different software projects: Android, Eclipse, Apache Http Server, Eclipse C/C++ Development Tooling (CDT), and Mozilla Firefox. The study also analyses the software subsystems on three parameters: (i) distribution of faults, (ii) subsystem size, and (iii) programming language, to determine which characteristics of software systems make HCM or ECM preferable.
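The entropy underlying HCM-style metrics measures how scattered the changes of a period are across files. A hypothetical sketch of that basic quantity (the helper name and the file-level granularity are illustrative assumptions, not the paper's exact metric definitions):

```python
import math

def change_entropy(changes_per_file):
    """Shannon entropy of one change period: p_i is the share of
    changes touching file i. A period whose changes are spread across
    many files has higher entropy (is more complex) than one focused
    on a single file."""
    total = sum(changes_per_file)
    probs = [c / total for c in changes_per_file if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

All changes concentrated in one file give zero entropy; changes spread evenly over 2^k files give entropy k bits.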

Entropy doi: 10.3390/e20120962

Authors: Amir Omidvarnia Mostefa Mesbah Mangor Pedersen Graeme Jackson

Approximate entropy (ApEn) and sample entropy (SampEn) are widely used for temporal complexity analysis of real-world phenomena. However, their relationship with the Hurst exponent as a measure of self-similarity is not widely studied. Additionally, ApEn and SampEn are susceptible to signal amplitude changes. A common practice for addressing this issue is to normalize the input signal amplitude by its standard deviation. In this study, we first show, using simulations, that ApEn and SampEn are related to the Hurst exponent through their tolerance r and embedding dimension m parameters. We then propose a modification of ApEn and SampEn called range entropy, or RangeEn. We show that RangeEn is more robust to nonstationary signal changes and has a more linear relationship with the Hurst exponent, compared to ApEn and SampEn. RangeEn is bounded in the tolerance r-plane between 0 (maximum entropy) and 1 (minimum entropy) and requires no signal amplitude correction. Finally, we demonstrate the clinical usefulness of signal entropy measures for characterisation of epileptic EEG data as a real-world example.
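For reference, sample entropy itself can be sketched in a few lines; this is a generic SampEn implementation with the standard amplitude correction discussed above (tolerance expressed as a fraction of the standard deviation), not the paper's RangeEn modification:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Generic sample entropy (SampEn) of a 1-D signal.

    m is the embedding dimension and r the tolerance, expressed as a
    fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def count_matches(dim):
        # All overlapping templates of length `dim` (n - m of them, so
        # the counts for dimensions m and m + 1 are comparable).
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < tol))
        return count

    b = count_matches(m)       # template matches of length m
    a = count_matches(m + 1)   # template matches of length m + 1
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")
```

A regular signal (e.g. a sinusoid) yields a much lower SampEn than white noise of the same length, which is the behavior the complexity analysis above relies on.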

Entropy doi: 10.3390/e20120961

Authors: Carlos Carrizales-Velazquez Adolfo Rudolf-Navarro Israel Reyes-Ramírez Alejandro Muñoz-Diosdado Lev Guzmán-Vargas Fernando Angulo-Brown

By using earthquake catalogs, previous studies have reported evidence that changes in the spatial and temporal organization of earthquake activity are observed before and after a main shock. These studies have used different approaches for detecting clustering behavior and event-distance density in order to point out the asymmetric behavior of foreshocks and aftershocks. Here, we present a statistical analysis of the seismic activity related to the Mw = 8.2 earthquake that occurred on 7 September 2017 in Mexico. First, we calculated the inter-event time and distance between successive events from 1 January 1998 until 20 October 2017 in a circular region centered at the epicenter of the Mw = 8.2 earthquake. Next, we introduced the concept of pseudo-velocity as the ratio between the inter-event distance and the inter-event time. A sliding window is used to estimate statistical features of the pseudo-velocity sequence before the main shock. Specifically, we applied the multifractal method to detect changes in the spectrum of singularities for the period before the main event on 7 September. Our results show that the multifractality associated with the pseudo-velocities exhibits noticeably narrower spectra for approximately three years, from 2013 until 2016, a period preceded and followed by periods with wider spectra. On the other hand, we present an analysis of patterns of seismic quiescence before the Mw = 8.2 earthquake based on the Schreider algorithm over a period of 27 years. We report the existence of an important period of seismic quietude lasting six to seven years, from approximately 2008 to 2015, known as the alpha stage, and a beta stage of resumed seismic activity, with a duration of approximately three years, until the occurrence of the great Mw = 8.2 earthquake.
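The pseudo-velocity introduced above is simply the ratio of inter-event distance to inter-event time over successive catalog entries. A toy sketch on a hypothetical mini-catalog (the times and positions are made-up values for illustration, not data from the study):

```python
import numpy as np

# Hypothetical mini-catalog: event times in days and epicentral
# positions in km (made-up values for illustration only).
times = np.array([0.0, 1.5, 2.0, 5.5, 6.0])
xy = np.array([[0.0, 0.0], [3.0, 4.0], [3.0, 4.0], [6.0, 8.0], [6.0, 11.0]])

inter_event_time = np.diff(times)                                # days
inter_event_dist = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # km

# Pseudo-velocity: inter-event distance over inter-event time (km/day).
pseudo_velocity = inter_event_dist / inter_event_time
```

A sliding window over this sequence would then feed the multifractal analysis described above.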
Our results are in general concordance with previous results reported for statistics based on magnitude temporal sequences.

Entropy doi: 10.3390/e20120960

Authors: Chengming Cao Jianxin Fu Tongwei Tong Yuxiao Hao Ping Gu Hai Hao Liangming Peng

The tensile creep behavior of an equiatomic CoCrFeNiMn high-entropy alloy was systematically investigated over an intermediate temperature range (500–600 °C) and applied stresses of 140–400 MPa. The alloy exhibited a stress-dependent transition from a low-stress region (LSR, region I) to a high-stress region (HSR, region II). The LSR was characterized by a stress exponent of 5 to 6 and an average activation energy of 268 kJ mol−1, whereas the HSR showed much higher corresponding values of 8.9–14 and 380 kJ mol−1. Microstructural examination of the deformed samples revealed remarkable dynamic recrystallization at higher stress levels. Dislocation jogging and tangling configurations were frequently observed in the LSR and HSR at 550 and 600 °C, respectively. Moreover, dynamic precipitates identified as M23C6 or a Cr-rich σ phase formed along grain boundaries in the HSR. Analysis of the diffusion-compensated strain rate versus modulus-compensated stress implied that creep deformation in both stress regions was dominated by stress-assisted dislocation climb controlled by lattice diffusion. Nevertheless, the abnormally high stress exponents in the HSR were ascribed to the combined contributions of dynamic recrystallization and dynamic precipitation. The barriers imposed by these precipitates and severe initial deformation were invoked to explain the increased activation energy for creep deformation.

Entropy doi: 10.3390/e20120959

Authors: Mateu Sbert Min Chen Jordi Poch Anton Bardera

Cross entropy and Kullback–Leibler (K-L) divergence are fundamental quantities of information theory, and they are widely used in many fields. Since cross entropy is the negated logarithm of likelihood, minimizing cross entropy is equivalent to maximizing likelihood, and thus, cross entropy is applied for optimization in machine learning. K-L divergence also stands independently as a commonly used metric for measuring the difference between two distributions. In this paper, we introduce new inequalities regarding cross entropy and K-L divergence by using the fact that cross entropy is the negated logarithm of the weighted geometric mean. We first apply the well-known rearrangement inequality, followed by a recent theorem on weighted Kolmogorov means, and, finally, we introduce a new theorem that directly applies to inequalities between K-L divergences. To illustrate our results, we show numerical examples of distributions.
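The identity the paper builds on — cross entropy as the negated logarithm of the weighted geometric mean, H(p, q) = −Σ p_i log q_i = −log Π q_i^{p_i} — can be checked numerically; the two distributions below are arbitrary examples:

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])   # weights (a probability distribution)
q = np.array([0.4, 0.4, 0.2])   # distribution being evaluated

cross_entropy = -np.sum(p * np.log(q))
weighted_geometric_mean = np.prod(q ** p)

# H(p, q) equals -log of the weighted geometric mean of q with weights p.
assert np.isclose(cross_entropy, -np.log(weighted_geometric_mean))

# K-L divergence is cross entropy minus the entropy of p, and is >= 0.
kl_divergence = cross_entropy - (-np.sum(p * np.log(p)))
assert kl_divergence >= 0
```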

Entropy doi: 10.3390/e20120958

Authors: Yan-Fang Sang

Detecting the spatial heterogeneity in the potential occurrence probability of water disasters is a foremost and critical issue for the prevention and mitigation of water disasters. However, it is also a challenging task due to the lack of effective approaches. In this article, the entropy index was employed and daily rainfall data from 520 stations were used to investigate the occurrence of rainstorms in China. Results indicated that the entropy results were mainly determined by the statistical characteristics (mean value and standard deviation) of the rainfall data, and can categorically describe the spatial heterogeneity in the occurrence of rainstorms by considering both their occurrence frequencies and magnitudes. Smaller entropy values mean that rainstorm events with bigger magnitudes were more likely to occur. Moreover, the spatial distribution of entropy values corresponded well with the hydroclimatic conditions described by the aridity index. In China, rainstorms are more likely to occur in the Pearl River basin, Southeast River basin, lower reach of the Yangtze River basin, Huai River basin, and the southwest corner of China. In summary, the entropy index can be an effective alternative for quantifying the potential occurrence probability of rainstorms. Four entropy thresholds were given to distinguish the occurrence frequency of rainstorms into five levels: very high, high, middle, low, and very low, which can be a helpful reference for the study of daily rainstorms in other basins and regions.

Entropy doi: 10.3390/e20120957

Authors: Kaya Demir Salih Ergün

This paper presents an analytical study on the use of deterministic chaos as an entropy source for the generation of random numbers. The chaotic signal generated by a phase-locked loop (PLL) device is investigated using numerical simulations. Depending on the system parameters, the chaos originating from the PLL device can be either bounded or unbounded in the phase direction. Bounded and unbounded chaos differ in terms of the flatness of the power spectrum associated with the chaotic signal. Random bits are generated by regular sampling of the signal from bounded and unbounded chaos. A white Gaussian noise source is also sampled regularly to generate random bits. By varying the sampling frequency, and based on the autocorrelation and the approximate entropy analysis of the resulting bit sequences, a comparison is made between bounded chaos, unbounded chaos and Gaussian white noise as an entropy source for random number generators.

Entropy doi: 10.3390/e20120956

Authors: Edward Bormashenko Mark Frenkel Alla Vilk Irina Legchenkova Alexander A. Fedorets Nurken E. Aktaev Leonid A. Dombrovsky Michael Nosonovsky

The Voronoi entropy is a mathematical tool for quantitative characterization of the orderliness of points distributed on a surface. The tool is useful for studying various surface self-assembly processes. We provide the historical background, from Kepler and Descartes to our days, and discuss topological properties of the Voronoi tessellation, upon which the entropy concept is based, and its scaling properties, known as the Lewis and Aboav–Weaire laws. The Voronoi entropy has been successfully applied to recently discovered self-assembled structures, such as patterned microporous polymer surfaces obtained by the breath figure method and levitating ordered water microdroplet clusters.
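The Voronoi entropy itself has the familiar Shannon form S = −Σ_n P_n ln P_n, where P_n is the fraction of Voronoi cells with n edges; a minimal sketch:

```python
from collections import Counter
import math

def voronoi_entropy(edge_counts):
    """Voronoi entropy S = -sum_n P_n ln P_n, where P_n is the fraction
    of Voronoi cells with n edges (edge_counts lists the number of
    edges of each cell in the tessellation)."""
    counts = Counter(edge_counts)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

A perfectly ordered tessellation (all hexagons, say) has zero Voronoi entropy, while mixed polygon populations give positive values — the basis for the orderliness comparisons described above.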

Entropy doi: 10.3390/e20120955

Authors: Yuefeng Wu Giles Hooker

In frequentist inference, minimizing the Hellinger distance between a kernel density estimate and a parametric family produces estimators that are both robust to outliers and statistically efficient when the parametric family contains the data-generating distribution. This paper seeks to extend these results to the use of nonparametric Bayesian density estimators within disparity methods. We propose two estimators: one replaces the kernel density estimator with the expected posterior density using a random histogram prior; the other transforms the posterior over densities into a posterior over parameters through minimizing the Hellinger distance for each density. We show that it is possible to adapt the mathematical machinery of efficient influence functions from semiparametric models to demonstrate that both our estimators are efficient in the sense of achieving the Cramér–Rao lower bound. We further demonstrate a Bernstein–von Mises result for our second estimator, indicating that its posterior is asymptotically Gaussian. In addition, the robustness properties of classical minimum Hellinger distance estimators continue to hold.
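For reference, the Hellinger distance being minimized has a simple discrete form, H(p, q) = (1/√2)·‖√p − √q‖₂, bounded in [0, 1]; a generic sketch (the kernel-density version used in the paper replaces the sum with an integral over densities):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions:
    H(p, q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2))
```

Identical distributions give distance 0; distributions with disjoint support give the maximum distance 1.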

Entropy doi: 10.3390/e20120954

Authors: Moriah Echlin Boris Aguilar Max Notarangelo David L. Gibbs Ilya Shmulevich

Reservoir computers (RCs) are biology-inspired computational frameworks for signal processing that are typically implemented using recurrent neural networks. Recent work has shown that Boolean networks (BN) can also be used as reservoirs. We analyze the performance of BN RCs, measuring their flexibility and identifying the factors that determine the effective approximation of Boolean functions applied in a sliding-window fashion over a binary signal, both non-recursively and recursively. We train and test BN RCs of different sizes, signal connectivity, and in-degree to approximate three-bit, five-bit, and three-bit recursive binary functions, respectively. We analyze how BN RC parameters and function average sensitivity, which is a measure of function smoothness, affect approximation accuracy as well as the spread of accuracies for a single reservoir. We found that approximation accuracy and reservoir flexibility are highly dependent on RC parameters. Overall, our results indicate that not all reservoirs are equally flexible, and RC instantiation and training can be more efficient if this is taken into account. The optimum range of RC parameters opens up an angle of exploration for understanding how biological systems might be tuned to balance system restraints with processing capacity.

Entropy doi: 10.3390/e20120953

Authors: Camelia Stanciu Michel Feidt Monica Costea Dorin Stanciu

Several optimization models of irreversible reverse-cycle machines have been developed in the literature based on different optimization criteria, most of them using linear heat transfer laws at the source and sink. This raises the issue of how close they are to actual operating conditions, since the heat transfer law for phase-change processes depends on ΔT³. This paper addresses this issue by proposing a general model for the study and optimization of thermal machines with two heat reservoirs, applied to a Carnot-like refrigerator with non-linear heat transfer laws and internal and external irreversibility. The optimization was performed using the First and Second Laws of Thermodynamics and the Lagrange multipliers method. Several constraints were imposed on the system and different objective functions were considered, allowing the optimum operating conditions, as well as the limited variation ranges of the system parameters, to be found. Results show that the nature of the heat transfer laws affects both the optimum values of the system parameters for obtaining maximum performance and their magnitude. Sensitivity studies with respect to several system parameters are presented. The results contribute to the understanding of the system limits in operation under different constraints and allow choosing the most convenient variables in given circumstances.

Entropy doi: 10.3390/e20120952

Authors: Dae-Young Lee Young-Seok Choi

The electrocardiogram (ECG) signal has been commonly used to analyze the complexity of heart rate variability (HRV), and various entropy methods have attracted considerable interest for this purpose. The multiscale entropy (MSE) method, which makes use of the sample entropy (SampEn) computed on coarse-grained time series, has attracted attention for the analysis of HRV. However, the SampEn computation may fail to be defined when a time series is not long enough. Recently, distribution entropy (DistEn), with improved stability for short-term time series, has been proposed. Here, we propose a novel multiscale DistEn (MDE) for analyzing the complexity of short-term HRV, which utilizes a moving-averaging multiscale process and the DistEn computation of each moving-averaged time series. It thus provides improved stability of entropy evaluation for short-term HRV extracted from the ECG. To verify the performance of MDE, we analyze synthetic signals and confirm the superiority of MDE over MSE. We then evaluate the complexity of short-term HRV extracted from the ECG signals of congestive heart failure (CHF) patients and healthy subjects. The experimental results show that MDE is capable of quantifying the decreased complexity of HRV with aging and CHF disease using short-term HRV time series.
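The difference between the classic MSE coarse-graining and the moving-averaging multiscale process used by MDE is only in how the windows overlap; a sketch of both (generic implementations, not the authors' code):

```python
import numpy as np

def coarse_grain(x, scale):
    """Classic MSE coarse-graining: means of non-overlapping windows,
    which shortens the series by a factor of `scale`."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def moving_average_scale(x, scale):
    """Moving-averaged series at the given scale (window mean, step 1):
    the series shortens by only scale - 1 points, which is why the
    moving-averaging multiscale process suits short-term HRV."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(scale) / scale, mode="valid")
```

For a 300-sample HRV segment at scale 10, coarse-graining leaves only 30 points (often too few for stable entropy estimation), while the moving-averaged series keeps 291.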

Entropy doi: 10.3390/e20120951

Authors: Weiran Zhang Peter K. Liaw Yong Zhang

The microstructure, Vickers hardness, and compressive properties of novel low-activation VCrFeTaxWx (x = 0.1, 0.2, 0.3, 0.4, and 1) high-entropy alloys (HEAs) were studied. The alloys were fabricated by vacuum-arc melting and their characteristics were explored. The microstructures of all the alloys exhibited a typical morphology of dendritic and eutectic structures. The VCrFeTa0.1W0.1 and VCrFeTa0.2W0.2 alloys are essentially single phase, consisting of a disordered body-centered-cubic (BCC) phase, although the VCrFeTa0.2W0.2 alloy contains fine, nanoscale precipitates distributed in the BCC matrix. The lattice parameters and compositions of the identified phases were investigated. The alloys have Vickers hardness values ranging from 546 HV0.2 to 1135 HV0.2 as x increases from 0.1 to 1. The VCrFeTa0.1W0.1 and VCrFeTa0.2W0.2 alloys exhibit compressive yield strengths of 1341 MPa and 1742 MPa, with compressive plastic strains of 42.2% and 35.7%, respectively. The VCrFeTa0.1W0.1 and VCrFeTa0.2W0.2 alloys retain excellent hardness after annealing for 25 h at 600–1000 °C, and present compressive yield strengths exceeding 1000 MPa with excellent heat-softening resistance at 600–800 °C. Based on the HEA criteria, VCrFeTaW alloys with Ta and W additions are proposed as a family of candidate materials for fusion reactors and high-temperature structural applications.

Entropy doi: 10.3390/e20120950

Authors: Jin Huang Van Butsic Weijun He Dagmawi Mulugeta Degefu Zaiyi Liao Min An

Establishing policies for controlling water pollution through discharge permits creates the basis for emission permit trading, and allocating wastewater discharge permits is a prerequisite to initiating the market. Past research has focused on designing schemes to allocate discharge permits efficiently, but these schemes have ignored differences among regions in terms of emission history. This is unfortunate, as fairness may dictate that areas that have been allowed to pollute in the past should receive fewer permits in the future. Furthermore, the spatial scales of previously proposed schemes are not practical. In this article, we propose an information-entropy-improved proportional allocation method, which considers differences in GDP, population, water resources, and emission history at the province spatial scale, as a new way to allocate wastewater emission permits. The allocation of chemical oxygen demand (COD) among 30 provinces in China is used to illustrate the proposed discharge permit distribution mechanism. In addition, we compared the permit distribution obtained from the proposed allocation scheme with allocation techniques that do not consider historical pollution and with the already established country plan. Our results showed that taking emission history into account when allocating wastewater discharge permits results in a fair distribution of economic benefits.
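One standard way information entropy enters such allocation schemes is the entropy-weight method: criteria whose values vary more across regions carry more information and receive larger weights. The sketch below is a generic illustration of that idea under the assumption that it approximates the role entropy plays here; it is not the paper's exact improved proportional-allocation scheme:

```python
import numpy as np

def entropy_weights(matrix):
    """Entropy-weight method: rows are regions, columns are criteria
    (e.g. GDP, population, water resources, emission history).
    Returns one weight per criterion, summing to 1."""
    m = np.asarray(matrix, dtype=float)
    p = m / m.sum(axis=0)                  # normalise each criterion column
    n = m.shape[0]
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)     # entropy of each criterion
    d = 1.0 - e                            # degree of diversification
    return d / d.sum()
```

A criterion that is identical across all regions has maximal entropy and receives zero weight; the allocation is then driven by the criteria that actually differentiate the regions.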

Entropy doi: 10.3390/e20120949

Authors: Alberto Porta Roberto Maestri Vlasta Bari Beatrice De Maria Beatrice Cairo Emanuele Vaini Maria Teresa La Rovere Gian Domenico Pinna

Synergy and redundancy are concepts that suggest, respectively, adaptability and fault tolerance of systems with complex behavior. This study computes redundancy/synergy in bivariate systems formed by a target X and a driver Y according to the predictive information decomposition approach and the partial information decomposition framework based on the minimal mutual information principle. The two approaches assess the redundancy/synergy of the past of X and Y in reducing the uncertainty of the current state of X. The methods were applied to evaluate the interactions between heart and respiration in healthy young subjects (n = 19) during controlled breathing at 10, 15 and 20 breaths/minute and in two groups of chronic heart failure patients during paced respiration at 6 (n = 9) and 15 (n = 20) breaths/minute from spontaneous beat-to-beat fluctuations of heart period and respiratory signal. Both methods suggested that slowing the respiratory rate below the spontaneous frequency increases the redundancy of cardiorespiratory control in both healthy and pathological groups, thus possibly improving the fault tolerance of the cardiorespiratory control. The two methods provide markers complementary to respiratory sinus arrhythmia and the strength of the linear coupling between heart period variability and respiration in describing the physiology of the cardiorespiratory reflex, suitable to be exploited in various pathophysiological settings.

Entropy doi: 10.3390/e20120948

Authors: Huimin Hu Ke Xiong Yu Zhang Pingyi Fan Tong Liu Shaoli Kang

Wireless powered communication technology has great potential to power low-power wireless sensor networks and the Internet of Things (IoT) for real-time applications in future 5G networks, where the age of information (AoI) is a very important performance metric. This paper studies the average AoI of a wireless powered network, where a wireless-powered user harvests energy from a wireless power source (WPS) and then transmits data packets to its access point (AP) using the harvested energy. The user generates data packets with some probability and adopts the first-come-first-served (FCFS) service policy. For such a system, using queuing theory and probability models, we derive a closed-form expression for the system average AoI. We also formulate an optimization problem to minimize the AoI by optimizing the data packet generating probability, and find its solution by simple calculation and search. Simulation results demonstrate the correctness of our analytical results. They also show that, when the total distance of the two hops is fixed, the system average AoI increases linearly with the distance of the first hop, and a smaller data packet generating probability should be selected to match a larger first-hop distance to achieve a smaller system average AoI. Moreover, a smaller data packet size also contributes to a smaller system average AoI.
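For intuition about how the average AoI depends on the packet generation rate under FCFS, the classical closed form for a single M/M/1 FCFS queue (due to Kaul, Yates, and Gruteser) is a useful benchmark; note this is not the paper's closed-form expression for the two-hop wireless-powered system:

```python
def mm1_fcfs_average_aoi(lam, mu):
    """Average AoI of an M/M/1 FCFS queue:
    Delta = (1/mu) * (1 + 1/rho + rho**2 / (1 - rho)), rho = lam / mu.
    A classical single-hop benchmark, not the two-hop wireless-powered
    result derived in the paper."""
    rho = lam / mu
    if not 0 < rho < 1:
        raise ValueError("queue must be stable (0 < lam/mu < 1)")
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))
```

The expression exhibits the trade-off discussed above: generating packets too rarely leaves the AP with stale information, while generating them too often builds queueing delay, so an interior generation rate minimizes the average AoI.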

Entropy doi: 10.3390/e20120947

Authors: Tao Lu Jiaming Wang Huabing Zhou Junjun Jiang Jiayi Ma Zhongyuan Wang

Image quality assessment (IQA) is a fundamental problem in image processing that aims to measure the objective quality of a distorted image. Traditional full-reference (FR) IQA methods use fixed-size sliding windows to obtain structure information but ignore variable spatial configuration information. In order to better measure multi-scale objects, we propose a novel IQA method, named RSEI, based on the perspective of the variable receptive field and information entropy. First, we find that a consistent relationship exists between information fidelity and the human visual perception of individuals. Thus, we mimic the human visual system (HVS) by semantically dividing the image into multiple patches via rectangular-normalized superpixel segmentation. The weight of each image patch is then adaptively calculated from its information volume. We verify the effectiveness of RSEI by applying it to data from the TID2008 database and to denoising algorithms. Experiments show that RSEI outperforms some state-of-the-art IQA algorithms, including visual information fidelity (VIF) and weighted average deep image quality measure (WaDIQaM).

Entropy doi: 10.3390/e20120946

Authors: Miloud Bessafi Dragutin T. Mihailović Slavica Malinović-Milićević Anja Mihailović Guillaume Jumaux François Bonnardot Yannick Fanchette Jean-Pierre Chabriat

Analysis of daily solar irradiation variability and predictability in space and time is important for energy resource planning, development, and management. The natural intermittency of solar irradiation is mainly triggered by atmospheric turbulent conditions, radiative transfer, the optical properties of clouds and aerosols, moisture and atmospheric stability, and orographic and thermal forcing, which introduce additional complexity into the phenomenological records. To address this question for daily solar irradiation data recorded during the period 2011–2015 at 32 stations measuring solar irradiance on La Réunion, a French tropical island in the Indian Ocean, we use the tools of non-linear dynamics: intermittency and chaos analysis, the largest Lyapunov exponent, sample entropy, the Kolmogorov complexity and its derivatives (the Kolmogorov complexity spectrum and its highest value), and spatially weighted Kolmogorov complexity combined with the Hamming distance, to assess complexity and the corresponding predictability. Finally, we cluster the Kolmogorov time (which quantifies the time span beyond which randomness significantly influences predictability) of daily cumulative solar irradiation for all stations. We show that under the record-breaking 2011–2012 La Niña event and preceding the very strong 2015–2016 El Niño event, the predictability of daily incident solar energy over La Réunion is affected.

Entropy doi: 10.3390/e20120945

Authors: Jinhua Liu Shan Wu Xinye Xu

Conventional quantization-based watermarking may be easily estimated by averaging over a set of watermarked signals, owing to the uniform quantization approach. Moreover, conventional quantization-based methods neglect the visual perceptual characteristics of the host signal; thus, perceptible distortions may be introduced in some parts of the host signal. In this paper, inspired by Watson's entropy masking model and logarithmic quantization index modulation (LQIM), a logarithmic quantization-based image watermarking method is developed using the wavelet transform. The method improves the robustness of watermarking through a logarithmic quantization strategy that embeds the watermark data into image blocks with high entropy values. The main significance of this work is that the trade-off between invisibility and robustness is simply addressed by the logarithmic quantization approach, which applies the entropy masking model and a distortion-compensated scheme to develop the watermark embedding method. In this manner, the optimal quantization parameter, obtained by minimizing the quantization distortion function, effectively controls the watermark strength. For watermark decoding, we model the wavelet coefficients of the image by the generalized Gaussian distribution (GGD) and calculate the bit error probability of the proposed method. The performance of the proposed method is analyzed and verified by simulation on real images. Experimental results demonstrate that the proposed method offers imperceptibility and strong robustness against attacks including JPEG compression, additive white Gaussian noise (AWGN), Gaussian filtering, salt-and-pepper noise, scaling, and rotation attacks.

Entropy doi: 10.3390/e20120944

Authors: Nannan Zhang Lifeng Wu Zhonghua Wang Yong Guan

Bearings play an important role in mechanical equipment, and remaining useful life (RUL) prediction of bearings is an important research topic. To accurately predict the RUL of a bearing, this paper proposes a data-driven RUL prediction method. First, statistical methods are used to extract features from the signal, with the root mean square (RMS) regarded as the main performance degradation index. Second, the correlation coefficient is used to select the statistical features that are highly correlated with the RMS. Then, in order to avoid fluctuations in the statistical features, the improved Weibull distribution (WD) algorithm is used to fit the fluctuating features of the bearing at different degradation stages, and the fitted features are used as input for the Naive Bayes (NB) training stage. During the testing stage, the true fluctuating features of the bearings are used as the input of the NB classifier. After NB testing, five classes are obtained: a healthy state and four bearing degradation states. Finally, the exponential smoothing algorithm is used to smooth the five classes and to predict the RUL of the bearing. The experimental results show that the proposed method is effective for bearing RUL prediction.

Entropy doi: 10.3390/e20120943

Authors: Muhammad Idrees Afridi Abderrahim Wakif Muhammad Qasim Abid Hussanan

The effects of variable thermal conductivity on heat transfer and entropy generation in a flow over a curved surface are investigated in the present study. In addition, the effects of energy dissipation and Ohmic heating are also incorporated in the modelling of the energy equation. Appropriate transformations are used to develop the self-similar equations from the governing equations of momentum and energy. The resulting self-similar equations are then solved by the Generalized Differential Quadrature Method (GDQM). For the validation and precision of the developed numerical solution, the resulting equations are also solved numerically using the Runge-Kutta-Fehlberg method (RKFM). An excellent agreement is found between the numerical results of the two methods. To examine the impacts of emerging physical parameters on velocity, temperature distribution and entropy generation, the numerical results are plotted against the various values of physical flow parameters and discussed physically in detail.

Entropy doi: 10.3390/e20120942

Authors: Marcin Miłkowski

The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of "offloading" computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.

Entropy doi: 10.3390/e20120941

Authors: Xinbo Liu Buhong Wang Zhixian Yang

To improve the low acceptance ratio and revenue-to-cost ratio caused by the poor match between virtual and physical nodes in existing virtual network embedding (VNE) algorithms, we established a multi-objective integer linear programming model for the VNE problem and proposed a novel two-stage virtual network embedding algorithm based on topology potential (VNE-TP). In the node embedding stage, the field theory once used for data clustering was introduced and a node embedding function was designed to find the optimal physical node. In the link embedding stage, both the available bandwidth and the hop count of the candidate paths were considered, and a path embedding function was designed to find the optimal path. Extensive simulation results show that the proposed algorithm outperforms other existing algorithms in terms of acceptance ratio and revenue-to-cost ratio.

Entropy doi: 10.3390/e20120940

Authors: Evaldo M. F. Curado Fernando D. Nobre Angel Plastino

Events occurring with a frequency described by power laws, within a certain range of validity, are very common in natural systems. In many of them, it is possible to associate an energy spectrum, and one can show that these types of phenomena are intimately related to the Tsallis entropy S_q. The relevant parameters become: (i) the entropic index q, which is directly related to the power of the corresponding distribution; (ii) the ground-state energy ε_0, in terms of which all energies are rescaled. One verifies that the corresponding processes take place at a temperature T_q with kT_q ∝ ε_0 (i.e., isothermal processes, for a given q), in analogy with those in the class of self-organized criticality, which are known to occur at fixed temperatures. Typical examples are analyzed, like earthquakes, avalanches, and forest fires, and in some of them, the entropic index q and the value of T_q are estimated. The knowledge of the associated entropic form opens the possibility for a deeper understanding of such phenomena, particularly by using information theory and optimization procedures.
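As a concrete illustration of these quantities (a minimal sketch, not code from the paper; the discrete energy grid and the value q = 1.5 are assumed for illustration), the Tsallis entropy S_q = (1 - Σ_i p_i^q)/(q - 1) and a q-exponential (power-law) distribution scaled by a ground-state energy ε_0 can be computed as:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) for a discrete
    probability distribution p; reduces to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability states
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # Boltzmann-Gibbs-Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# A q-exponential (power-law) distribution over discrete energy levels,
# illustrating the link between the entropic index q and the power of the tail.
eps0 = 1.0                              # ground-state energy scale (assumed)
energies = eps0 * np.arange(1, 101)
q = 1.5
weights = (1.0 + (q - 1.0) * energies / eps0) ** (-1.0 / (q - 1.0))
p = weights / weights.sum()
print(round(tsallis_entropy(p, q), 4))
```

For q close to 1, `tsallis_entropy` recovers the ordinary Shannon entropy, matching the limit described above.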

Entropy doi: 10.3390/e20120939

Authors: Wolfgang Dreyer Clemens Guhlke Rüdiger Müller

We propose a modeling framework for magnetizable, polarizable, elastic, viscous, heat conducting, reactive mixtures in contact with interfaces. To this end, we first introduce bulk and surface balance equations that contain several constitutive quantities. For further modeling of the constitutive quantities, we formulate constitutive principles. They are based on an axiomatic introduction of the entropy principle and the postulation of Galilean symmetry. We apply the proposed formalism to derive constitutive relations in a rather abstract setting. For illustration of the developed procedure, we state an explicit isothermal material model for liquid electrolyte|metal electrode interfaces in terms of free energy densities in the bulk and on the surface. Finally, we give a survey of recent advancements in the understanding of electrochemical interfaces that were based on this model.

Entropy doi: 10.3390/e20120938

Authors: Adam Brus Jiří Hrivnák Lenka Motlochová

Sixteen types of the discrete multivariate transforms, induced by the multivariate antisymmetric and symmetric sine functions, are explicitly developed. Provided by the discrete transforms, inherent interpolation methods are formulated. The four generated classes of the corresponding orthogonal polynomials generalize the formation of the Chebyshev polynomials of the second and fourth kinds. Continuous orthogonality relations of the polynomials together with the inherent weight functions are deduced. Sixteen cubature rules, including the four Gaussian, are produced by the related discrete transforms. For the three-dimensional case, interpolation tests, unitary transform matrices and recursive algorithms for calculation of the polynomials are presented.

Entropy doi: 10.3390/e20120937

Authors: Shuying Chen Yang Tong Peter K. Liaw

Owing to its reduced defects, low cost, and high efficiency, the additive manufacturing (AM) technique has attracted increasing attention and has been applied to high-entropy alloys (HEAs) in recent years. AM-processed HEAs have been found to possess an optimized microstructure and improved mechanical properties. However, no review of the application of AM methods to the preparation of bulk HEAs has yet been published. Hence, it is necessary to introduce AM-processed HEAs in terms of applications, microstructures, mechanical properties, and challenges to provide readers with a fundamental understanding. Specifically, we reviewed (1) the application of AM methods in the fabrication of HEAs and (2) the effect of post-heat treatment on microstructural evolution and mechanical properties. Compared with their cast counterparts, AM-HEAs were found to have superior yield strength and ductility as a consequence of the fine microstructure formed during the rapid solidification in the fabrication process. Post-treatment, such as hot isostatic pressing (HIP), can further enhance their properties by removing fabrication defects and residual stress in the AM-HEAs. Furthermore, the mechanical properties can be tuned by either reducing the pre-heating temperature to hinder phase partitioning or modifying the composition of the HEA to stabilize the solid-solution phase or a ductile intermetallic phase in AM materials. Moreover, the processing parameters, fabrication orientation, and scanning method can be optimized to further improve the mechanical performance of the as-built HEAs.

Entropy doi: 10.3390/e20120936

Authors: Andrea Lamorgese Roberto Mauri

We discuss numerical results of diffusion-driven separation into three phases of a symmetric, three-component highly viscous liquid mixture after an instantaneous quench from the one-phase region into an unstable location within the tie triangle of its phase diagram. Our theoretical approach follows a diffuse-interface model of partially miscible ternary liquid mixtures that incorporates the one-parameter Margules correlation as a submodel for the enthalpic (so-called excess) component of the Gibbs energy of mixing, while its nonlocal part is represented based on a square-gradient (Cahn–Hilliard-type) modeling assumption. The governing equations for this phase-field ternary mixture model are simulated in 3D, showing the segregation kinetics in terms of basic segregation statistics, such as the integral scale of the pair-correlation function and the separation depth for each component. Based on the temporal evolution of the integral scales, phase separation takes place via the simultaneous growth of three phases up until a symmetry-breaking event, after which one component continues to separate quickly, while phase separation for the other two seems to be delayed. However, inspection of the separation depths reveals that there can be no symmetry among the three components at any instant in time during a triphase segregation process.

Entropy doi: 10.3390/e20120935

Authors: Yuanyuan Li Yanjing Sun Mingyao Zheng Xinghua Huang Guanqiu Qi Hexu Hu Zhiqin Zhu

Multi-exposure image fusion methods are often applied to the fusion of low-dynamic-range images that are taken from the same scene at different exposure levels. The fused images not only contain more color and detailed information, but also reproduce visual effects close to those perceived by the human eye. This paper proposes a novel multi-exposure image fusion (MEF) method based on adaptive patch structure. The proposed algorithm combines image cartoon-texture decomposition, image patch structure decomposition, and the structural similarity index to improve the local contrast of the image. Moreover, the proposed method can capture more detailed information from the source images and produce more vivid high-dynamic-range (HDR) images. Specifically, image texture entropy values are used to evaluate image local information for adaptive selection of the image patch size. The intermediate fused image is obtained by the proposed structure patch decomposition algorithm. Finally, the intermediate fused image is optimized by using the structural similarity index to obtain the final fused HDR image. The results of comparative experiments show that the proposed method can obtain high-quality HDR images with better visual effects and more detailed information.
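The entropy-driven choice of patch size can be sketched as follows (an illustrative simplification, not the authors' implementation; the patch sizes and entropy thresholds are assumed values):

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy of a patch's gray-level histogram (bits):
    a simple proxy for local texture richness."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def adaptive_patch_size(patch, sizes=(21, 11, 7), thresholds=(3.0, 5.0)):
    """Pick a smaller patch size as local texture entropy rises.
    The sizes and thresholds here are illustrative, not the paper's values."""
    h = patch_entropy(patch)
    if h < thresholds[0]:
        return sizes[0]   # smooth region: large patch
    if h < thresholds[1]:
        return sizes[1]
    return sizes[2]       # highly textured region: small patch

rng = np.random.default_rng(0)
flat = np.full((32, 32), 128)            # uniform patch, entropy 0
noisy = rng.integers(0, 256, (32, 32))   # textured patch, high entropy
print(adaptive_patch_size(flat), adaptive_patch_size(noisy))
```

A smooth patch gets the largest window, while a highly textured patch gets the smallest, which is the qualitative behavior the abstract describes.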

Entropy doi: 10.3390/e20120934

Authors: Vincenzo Bonnici Vincenzo Manca

In this paper, by extending some results of informational genomics, we present a new randomness test based on the empirical entropy of strings and on some properties of the repeatability and unrepeatability of substrings of certain lengths. We give the theoretical motivations of our method and some experimental results of its application to a wide class of strings: decimal representations of real numbers, roulette outcomes, logistic maps, linear congruential generators, quantum measurements, natural language texts, and genomes. It will be evident that the evaluation of randomness resulting from our tests does not distinguish among the different sources of randomness (natural or pseudo-random).
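The core quantity, the empirical entropy of length-k substrings, can be sketched as follows (an illustrative simplification, not the authors' full test): a periodic string scores far below the k · log2(|alphabet|) bits expected of a random one.

```python
import math
import random
from collections import Counter

def empirical_entropy(s, k):
    """Empirical Shannon entropy (bits) of the length-k substrings of s."""
    counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A periodic string reuses the same k-mers over and over, so its empirical
# entropy stays far below the k * log2(|alphabet|) bits of a random string.
periodic = "ACGT" * 1000
random.seed(0)
random_s = "".join(random.choice("ACGT") for _ in range(4000))
print(round(empirical_entropy(periodic, 4), 3),
      round(empirical_entropy(random_s, 4), 3))
```

For the 4-letter alphabet above, a random string approaches 8 bits per 4-mer, while the periodic one stays near 2 bits.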

Entropy doi: 10.3390/e20120933

Authors: Oscar A. Negrete Patricio Vargas Francisco J. Peña Gonzalo Saravia Eugenio E. Vogel

In this paper, we revisit the q-state clock model for small systems. We present results for the thermodynamics of the q-state clock model for values from q = 2 to q = 20 for small square lattices of L × L, with L ranging from L = 3 to L = 64, with free-boundary conditions. Energy, specific heat, entropy, and magnetization were measured. We found that the Berezinskii–Kosterlitz–Thouless (BKT)-like transition appears for q > 5, regardless of lattice size, while this transition at q = 5 is lost for L < 10; for q ≤ 4, the BKT transition is never present. We present the phase diagram in terms of q that shows the transition from the ferromagnetic (FM) to the paramagnetic (PM) phase at the critical temperature T_1 for small systems, and the transition changes such that it is from the FM to the BKT phase for larger systems, while a second phase transition between the BKT and PM phases occurs at T_2. We also show that the magnetic phases are well characterized by the two-dimensional (2D) distribution of the magnetization values. We made use of this opportunity to carry out an information theory analysis of the time series obtained from Monte Carlo simulations. In particular, we calculated the phenomenological mutability and diversity functions. Diversity characterizes the phase transitions, but the phases are less detectable as q increases. Free boundary conditions were used to better mimic the reality of small systems (far from any thermodynamic limit). The role of size is discussed.
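For reference, the energy of a q-state clock configuration on an L × L lattice with free boundaries is E = -Σ_<ij> cos(2π(s_i - s_j)/q) over nearest-neighbor bonds; a minimal sketch (not the authors' simulation code; the lattice size and seed are illustrative) is:

```python
import numpy as np

def clock_energy(s, q):
    """Energy of a q-state clock configuration s (integers 0..q-1) on an
    L x L square lattice with free boundaries:
    E = -sum_<ij> cos(2*pi*(s_i - s_j)/q) over nearest-neighbor bonds."""
    th = 2.0 * np.pi * s / q
    e_right = -np.cos(th[:, :-1] - th[:, 1:]).sum()  # horizontal bonds
    e_down = -np.cos(th[:-1, :] - th[1:, :]).sum()   # vertical bonds
    return e_right + e_down

L, q = 8, 6
ferro = np.zeros((L, L), dtype=int)      # all spins aligned (FM ground state)
rng = np.random.default_rng(1)
hot = rng.integers(0, q, (L, L))         # random high-temperature state
print(clock_energy(ferro, q), round(clock_energy(hot, q), 2))
```

With free boundaries an L × L lattice has 2L(L - 1) bonds, so the aligned ground state has energy -2L(L - 1); a Metropolis Monte Carlo loop over this energy function would reproduce the thermodynamic quantities discussed above.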

Entropy doi: 10.3390/e20120932

Authors: Bin Pang Guiji Tang Chong Zhou Tian Tian

The rotor is a widely used mechanical component that is prone to defects. Thus, it is significant to develop effective techniques for rotor fault diagnosis. Fault signature extraction and state classification of the extracted signatures are two key steps in diagnosing rotor faults. To achieve accurate recognition of rotor states, a novel evaluation index named characteristic frequency band energy entropy (CFBEE) was proposed to extract the defective features of rotors, and a support vector machine (SVM) was employed to automatically identify the rotor fault types. Specifically, the raw vibration signal of the rotor was first analyzed by a joint time–frequency method based on improved singular spectrum decomposition (ISSD) and the Hilbert transform (HT) to derive its time–frequency spectrum (TFS), named the ISSD-HT TFS in this paper. Then, the CFBEE of the ISSD-HT TFS was calculated as the fault feature vector. Finally, SVM was used to complete the automatic identification of rotor faults. Simulation results indicate that ISSD alleviates the end effects of singular spectrum decomposition (SSD) and is superior to empirical mode decomposition (EMD) in extracting the sub-components of rotor vibration signals. The ISSD-HT TFS reflects the time–frequency information more accurately than the EMD-HT TFS. Experimental verification demonstrates that the proposed method can accurately identify rotor defect types and outperforms some other methods.
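A simplified stand-in for a band-energy-entropy feature can be sketched as follows (illustrative only; the actual CFBEE is computed on the ISSD-HT time-frequency spectrum, and the band count, sampling rate, and test signals here are assumptions):

```python
import numpy as np

def band_energy_entropy(x, n_bands=8):
    """Shannon entropy of the energy distribution over equal-width frequency
    bands: a simplified stand-in for a band-energy-entropy fault feature."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    bands = np.array_split(spectrum, n_bands)
    e = np.array([b.sum() for b in bands])
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

fs = 1024.0
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 50 * t)       # energy confined to one band
rng = np.random.default_rng(2)
noise = rng.standard_normal(t.size)     # energy spread over all bands
print(round(band_energy_entropy(tone), 3), round(band_energy_entropy(noise), 3))
```

A narrowband signal concentrates its energy in one band (entropy near zero), while broadband noise spreads it over all bands (entropy near ln 8), which is the discriminating behavior such an index exploits.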

Entropy doi: 10.3390/e20120931

Authors: Joshua Garland Tyler R. Jones Michael Neuder Valerie Morris James W. C. White Elizabeth Bradley

Permutation entropy techniques can be useful for identifying anomalies in paleoclimate data records, including noise, outliers, and post-processing issues. We demonstrate this using weighted and unweighted permutation entropy with water-isotope records containing data from a deep polar ice core. In one region of these isotope records, our previous calculations (see Garland et al., 2018) revealed an abrupt change in the complexity of the traces: specifically, in the amount of new information that appeared at every time step. We conjectured that this effect was due to noise introduced by an older laboratory instrument. In this paper, we validate that conjecture by reanalyzing a section of the ice core using a more advanced version of the laboratory instrument. The anomalous noise levels are absent from the permutation entropy traces of the new data. In other sections of the core, we show that permutation entropy techniques can be used to identify anomalies in the data that are not associated with climatic or glaciological processes, but rather with effects occurring during field work, laboratory analysis, or data post-processing. These examples make it clear that permutation entropy is a useful forensic tool for identifying sections of data that require targeted reanalysis, and can even be useful for guiding that analysis.
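A minimal sketch of (unweighted) Bandt-Pompe permutation entropy, the quantity underlying these anomaly detections (illustrative, not the authors' pipeline; the embedding dimension and test signals are assumed):

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy (Bandt-Pompe): Shannon entropy of the
    distribution of ordinal patterns of length m, divided by log(m!)."""
    x = np.asarray(x, dtype=float)
    patterns = {}
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        key = tuple(np.argsort(window))        # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(m))

rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)   # all ordinal patterns equally likely
ramp = np.arange(5000.0)            # a single ascending pattern
print(round(permutation_entropy(noise), 3), permutation_entropy(ramp))
```

White noise produces a value near 1 (maximal new information per time step), a monotone trace produces 0, and a sudden rise of the statistic along a record is exactly the kind of complexity change the abstract describes.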

Entropy doi: 10.3390/e20120930

Authors: Muhammad Suleman Muhammad Ramzan Madiha Zulfiqar Muhammad Bilal Ahmad Shafee Jae Dong Chung Dianchen Lu Umer Farooq

The present study characterizes three-dimensional viscoelastic magnetohydrodynamic (MHD) nanofluid flow, with entropy generation analysis, past an exponentially stretched permeable surface under the simultaneous impacts of chemical reaction and heat generation/absorption. The analysis was conducted with the additional effects of nonlinear thermal radiation and convective heat and mass boundary conditions. Apposite transformations were used to transform the presented mathematical model into a system of differential equations. Analytical solutions of the proposed model were developed via a well-known homotopy analysis scheme. The numerically calculated values of the dimensionless drag coefficient, local Nusselt number, and mass-transfer Nusselt number are presented, with physical insights. Graphs depicting the consequences of numerous parameters on the involved distributions, with the requisite deliberations, are also a part of this study. It is seen that the Bejan number is an increasing function of the thermal radiation parameter.

Entropy doi: 10.3390/e20120929

Authors: Roberto Zivieri Nicola Pacini

The heat and matter transfer during glucose catabolism in living systems and their relation with entropy production are a challenging subject of the classical thermodynamics applied to biology. In this respect, an analogy between mechanics and thermodynamics has been performed via the definition of the entropy density acceleration expressed by the time derivative of the rate of entropy density and related to heat and matter transfer in minimum living systems. Cells are regarded as open thermodynamic systems that exchange heat and matter resulting from irreversible processes with the intercellular environment. Prigogine's minimum energy dissipation principle is reformulated using the notion of entropy density acceleration applied to glucose catabolism. It is shown that, for out-of-equilibrium states, the calculated entropy density acceleration for a single cell is finite and negative and approaches as a function of time a zero value at global thermodynamic equilibrium for heat and matter transfer independently of the cell type and the metabolic pathway. These results could be important for a deeper understanding of entropy generation and its correlation with heat transfer in cell biology with special regard to glucose catabolism representing the prototype of irreversible reactions and a crucial metabolic pathway in stem cells and cancer stem cells.

Entropy doi: 10.3390/e20120928

Authors: Mladen Pavičić Norman D. Megill

Recently, quantum contextuality has been proved to be the source of quantum computation's power. That, together with multiple recent contextual experiments, prompts improving the methods of generation of contextual sets and finding their features. The most elaborated contextual sets, which offer blueprints for contextual experiments and computational gates, are the Kochen–Specker (KS) sets. In this paper, we show a method of vector generation that supersedes previous methods. It is implemented by means of algorithms and programs that generate hypergraphs embodying the Kochen–Specker property and that are designed to be carried out on supercomputers. We show that vector component generation of KS hypergraphs exhausts all possible vectors that can be constructed from chosen vector components, in contrast to previous studies that used incomplete lists of vectors and therefore missed a majority of hypergraphs. Consequently, this unified method is far more efficient for generations of KS sets and their implementation in quantum computation and quantum communication. Several new KS classes and their features have been found and are elaborated on in the paper. Greechie diagrams are discussed.

Entropy doi: 10.3390/e20120927

Authors: Jianjun Jiang Jing Zhang Lijia Zhang Xiaomin Ran Jun Jiang Yifan Wu

Deep belief networks (DBNs) in deep learning have been successfully used in many fields. However, the structure of a DBN is difficult to design for different datasets. Hence, a DBN structure design algorithm based on information entropy and reconstruction error is proposed. Unlike previous algorithms, we combine network depth and node number and optimize them simultaneously. First, the mathematical model of the structure design problem is established, and a boundary constraint on the node number based on information entropy is derived by introducing the idea of information compression. Moreover, an optimization objective for the network performance based on reconstruction error is proposed by deriving the fact that network energy is proportional to reconstruction error. Finally, the improved simulated annealing (ISA) algorithm is used to adjust the DBN layers and nodes simultaneously. Experiments were carried out on three public datasets (MNIST, CIFAR-10, and CIFAR-100). The results show that the proposed algorithm can design a proper structure for different datasets, yielding a trained DBN with the lowest reconstruction error and prediction error rate. The proposed algorithm shows the best performance compared with other algorithms and can be used to assist in setting DBN structural parameters for different datasets.

Entropy doi: 10.3390/e20120926

Authors: Keheng Zhu Liang Chen Xiong Hu

A new fault feature extraction method for rolling element bearings is put forward in this paper, based on the adaptive local iterative filtering (ALIF) algorithm and a modified fuzzy entropy. Owing to the non-stationary and nonlinear characteristics of bearing vibration signals, the ALIF method, a new approach for the analysis of non-stationary signals, is used to decompose the original vibration signals into a series of mode components. Fuzzy entropy (FuzzyEn) is a nonlinear dynamic parameter for measuring a signal's complexity. However, it only emphasizes a signal's local characteristics while neglecting its global fluctuation. Considering that the global fluctuation of bearing vibration signals changes as the bearing working condition varies, we modified the FuzzyEn. The modified FuzzyEn (MFuzzyEn) of the first few modes obtained by ALIF is utilized to form the fault feature vectors. Subsequently, the corresponding feature vectors are input into a multi-class SVM classifier to accomplish bearing fault identification automatically. The experimental analysis demonstrates that the presented ALIF-MFuzzyEn-SVM approach can effectively recognize different fault categories and different levels of bearing fault severity.
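For orientation, a minimal sketch of (unmodified) fuzzy entropy is given below; it is not the authors' MFuzzyEn, and the membership function exp(-(d^n)/r) and parameter defaults are common choices from the FuzzyEn literature, not values taken from the paper:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy: like sample entropy, but with a smooth exponential
    membership exp(-(d**n)/r) in place of a hard tolerance, applied to
    baseline-removed (zero-mean) templates.  Common choice: r = 0.2*std(x)."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x) if np.std(x) > 0 else r

    def phi(m):
        N = len(x) - m
        # all length-m templates with their own mean removed
        tpl = np.array([x[i:i + m] - np.mean(x[i:i + m]) for i in range(N)])
        # Chebyshev distances between every pair of templates
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)       # exclude self-matches
        return sim.sum() / (N * (N - 1))

    return float(np.log(phi(m)) - np.log(phi(m + 1)))

rng = np.random.default_rng(4)
print(round(fuzzy_entropy(rng.standard_normal(500)), 3),
      round(fuzzy_entropy(np.sin(np.linspace(0, 8 * np.pi, 500))), 3))
```

A regular signal (the sinusoid) yields a much lower value than white noise, which is why such an index separates healthy from faulty bearing states.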

Entropy doi: 10.3390/e20120925

Authors: Arjan van der Schaft Bernhard Maschke

Since the 1970s, contact geometry has been recognized as an appropriate framework for the geometric formulation of thermodynamic systems, and in particular their state properties. More recently, it has been shown how the symplectization of contact manifolds provides a new vantage point, enabling one, among other things, to switch easily between the energy and entropy representations of a thermodynamic system. In the present paper, this is continued towards the global geometric definition of a degenerate Riemannian metric on the homogeneous Lagrangian submanifold describing the state properties, which overarches the locally defined metrics of Weinhold and Ruppeiner. Next, a geometric formulation is given of non-equilibrium thermodynamic processes, in terms of Hamiltonian dynamics defined by Hamiltonian functions that are homogeneous of degree one in the co-extensive variables and zero on the homogeneous Lagrangian submanifold. The correspondence between objects in contact geometry and their homogeneous counterparts in symplectic geometry is extended to the definition of port-thermodynamic systems and the formulation of interconnection ports. The resulting geometric framework is illustrated on a number of simple examples, already indicating its potential for analysis and control.

Entropy doi: 10.3390/e20120924

Authors: Mingyang Zhang Wei Zhang Fangzhou Liu Yingbo Peng Songhao Hu Yong Liu

This study reports the results of the addition of diamonds during the sintering of an FCC-structured CoCrFeNiMo high-entropy alloy. The effect of the raw powder state, namely elemental mixed (EM) powder, gas atomization (GA) powder, and mechanical alloying (MA) powder, on the uniformity of the constituent phase was also investigated. Examination of the microstructure and evaluation of the mechanical properties of the composites, depending on the mixing process, were performed. As a result, the GA+MA powder composite showed the best mechanical properties. The experimental results indicated that the powder manufacturing method is an essential parameter determining the quality of HEA/diamond composites, in particular the phase uniformity and binding behavior.

Entropy doi: 10.3390/e20120923

Authors: Wenbing Chang Zhenzhong Xu Meng You Shenghan Zhou Yiyong Xiao Yang Cheng

The purpose of this paper is to predict failures based on textual sequence data. Current failure prediction is mainly based on structured data; however, aircraft maintenance generates a great deal of unstructured data. The failures discussed here are failure types, such as transmitter failure and signal failure, which are classified by a clustering algorithm applied to the failure text. The failure text is processed with natural language processing technology. Firstly, segmentation and the removal of stop words are performed on the Chinese failure text data. The study applies the word2vec word mover's distance model to obtain the failure occurrence sequence for failure texts collected over a fixed period of time. According to the distance, a clustering algorithm is used to obtain a typical number of fault types. Secondly, the failure occurrence sequence is mined using sequence mining algorithms such as PrefixSpan. Finally, the above failure sequence is used to train the Bayesian failure network model. The final experimental results show that the Bayesian failure network has higher accuracy for failure prediction.

Entropy doi: 10.3390/e20120922

Authors: Souad Morsli Amina Sabeur Mohammed El Ganaoui Harry Ramenah

In this study, we examine the behavior of a propane diffusion flame with air in a burner; the computational investigations are carried out for each case using the Fluent package. The graphs generated illustrate the influence of the flow parameters, the effects of the oxygen percentage in the air, and the effects of the equivalence ratio φ on the entropy generation, the temperature gradients, and the Bejan number. The obtained results show that the incorporation of hydrogen with propane reduced both the temperature and carbon monoxide emissions.

Entropy doi: 10.3390/e20120921

Authors: Andrei Khrennikov Alexander Alodjants Anastasiia Trofimova Dmitry Tsarev

Recent years have been characterized by increasing interest in applications of the quantum formalism outside physics, e.g., in psychology, decision-making, and socio-political studies. To distinguish this approach from quantum physics, it is called quantum-like. Here it is applied to modeling socio-political processes on the basis of the social laser model, which describes the stimulated amplification of social actions. The main aim of this paper is to establish socio-psychological interpretations of the quantum notions playing a basic role in laser modeling. Using the Copenhagen interpretation and the operational approach to the quantum formalism, we analyze the notion of social energy. Quantum formalizations of such notions as the social atom (s-atom) and the information field are presented. The operational approach based on creation and annihilation operators is used. We also introduce the notion of the social color of information excitations, representing characteristics linked to the coherence of lasing, such as collimation. The Bose–Einstein statistics of excitations is coupled with the bandwagon effect, one of the basic effects of social psychology. Using the operational interpretation of social energy, we present a thermodynamical derivation of this quantum statistics. The crucial role of the information overload generated by modern mass media is emphasized. In physics, the laser's resonator, the optical cavity, plays a crucial role in amplification. We model the functioning of the social laser's resonator by "distilling" the physical scheme from its connection with optics. As the mathematical basis, we use the master equation for the density operator of the quantum information field.

Entropy doi: 10.3390/e20120920

Authors: Lv Zhang Yi

The characteristics of the early fault signal of a rolling bearing are weak, which leads to difficulties in feature extraction. In order to diagnose and identify the fault features in the bearing vibration signal, an adaptive local iterative filter decomposition method based on permutation entropy is proposed in this paper. As a new time-frequency analysis method, adaptive local iterative filtering (ALIF) overcomes two main problems of mode decomposition compared with traditional methods: modal aliasing and an uncertain number of components. However, some problems remain in adaptive local iterative filtering, mainly the selection of the threshold parameters and the number of components. In this paper, an improved adaptive local iterative filtering algorithm based on particle swarm optimization and permutation entropy is proposed. Firstly, particle swarm optimization is applied to select the threshold parameters and the number of components in ALIF. Then, permutation entropy is used to evaluate the desired mode components. In order to verify the effectiveness of the proposed method, numerical simulations and experimental bearing failure data are analyzed.

Entropy doi: 10.3390/e20120919

Authors: María Martel-Escobar Francisco-José Vázquez-Polo Agustín Hernández-Bastida

Problems in statistical auditing are usually one-sided. In fact, the main interest for auditors is to determine the quantiles of the total amount of error, and then to compare these quantiles with a given materiality fixed by the auditor, so that the accounting statement can be accepted or rejected. Dollar-unit sampling (DUS) is a useful procedure to collect sample information, whereby items are chosen with a probability proportional to their book amounts and in which the relevant error amount distribution is the distribution of the taints weighted by the book value. The likelihood induced by DUS refers to a 201-variate parameter p, but the prior information is on a subparameter θ, a linear function of p representing the total amount of error. This means that partial prior information must be processed. In this paper, two main proposals are made: (1) to modify the likelihood, to make it compatible with the prior information and thus obtain a Bayesian analysis for the hypotheses to be tested; (2) to use a maximum entropy prior to incorporate limited auditor information. To achieve these goals, we obtain a modified likelihood function inspired by the induced likelihood described by Zehna (1966) and then adapt Bayes' theorem to this likelihood in order to derive a posterior distribution for θ. This approach shows that the DUS methodology can be justified as a natural method of processing partial prior information in auditing and that a Bayesian analysis can be performed even when prior information is only available for a subparameter of the model. Finally, some numerical examples are presented.
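The DUS selection mechanism (items chosen with probability proportional to book amounts) can be sketched with systematic probability-proportional-to-size sampling; the book values below are illustrative, not from the paper:

```python
import numpy as np

def dollar_unit_sample(book_values, n, seed=0):
    """Systematic dollar-unit sampling: pick n 'dollars' at a fixed interval
    through the cumulative book value; an item is selected when a picked
    dollar falls inside it, so P(selection) is proportional to book value."""
    book_values = np.asarray(book_values, dtype=float)
    total = book_values.sum()
    interval = total / n
    rng = np.random.default_rng(seed)
    start = rng.uniform(0, interval)
    picks = start + interval * np.arange(n)   # the sampled dollars
    upper = np.cumsum(book_values)            # cumulative item boundaries
    items = np.searchsorted(upper, picks, side='right')
    return sorted(set(items.tolist()))

book = [5000, 120, 80, 2600, 40, 900, 1500, 60]   # illustrative amounts
sel = dollar_unit_sample(book, n=4)
print(sel)
```

Any item whose book value equals or exceeds the sampling interval is selected with certainty, which is why large accounts always enter a DUS sample.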

Entropy doi: 10.3390/e20120918

Authors: Guohui Li Zhichao Yang Hong Yang

Noise reduction of underwater acoustic signals is of great significance in the fields of military and ocean exploration. Based on the adaptive decomposition characteristic of uniform phase empirical mode decomposition (UPEMD), a noise reduction method for underwater acoustic signals is proposed, which combines amplitude-aware permutation entropy (AAPE) and Pearson correlation coefficient (PCC). UPEMD is a recently proposed improved empirical mode decomposition (EMD) algorithm that alleviates the mode splitting and residual noise effects of EMD. AAPE is a tool to quantify the information content of nonlinear time series. Unlike permutation entropy (PE), AAPE can reflect the amplitude information on time series. Firstly, the original signal is decomposed into a series of intrinsic mode functions (IMFs) by UPEMD. The AAPE of each IMF is calculated. The modes are separated into high-frequency IMFs and low-frequency IMFs, and all low-frequency IMFs are determined as useful IMFs (UIMFs). Then, the PCC between the high-frequency IMF with the smallest AAPE and the original signal is calculated. If PCC is greater than the threshold, the IMF is also determined as a UIMF. Finally, all UIMFs are reconstructed and the denoised signal is obtained. Chaotic signals with different signal-to-noise ratios (SNRs) are used for denoising experiments. Compared with EMD and extreme-point symmetric mode decomposition (ESMD), the proposed method has higher SNR and smaller root mean square error (RMSE). The proposed method is applied to noise reduction of real underwater acoustic signals. The results show that the method can further eliminate noise and the chaotic attractors are smoother and clearer.

Entropy doi: 10.3390/e20120917

Authors: Jun Wang Haoxue Yang Tong Guo Jiaxiang Wang William Yi Wang Jinshan Li

The solid-state phase transformation kinetics of as-cast and cold-rolling-deformed Al0.5CoCrFeNi high-entropy alloys have been investigated by the thermal expansion method. The phase-transformed volume fractions are determined from the thermal expansion curve using the lever-rule method, and the deformed sample exhibits a much higher transformation rate. Two kinetic parameters, the activation energy (E) and the kinetic exponent (n), are determined using the Kissinger–Akahira–Sunose (KAS) and Johnson–Mehl–Avrami (JMA) methods, respectively. Results show that the pre-deformed sample has a much lower activation energy and a higher kinetic exponent than the as-cast sample, which is interpreted in terms of deformation-induced defects that promote the nucleation and growth processes during the phase transformation.
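The JMA linearization that yields the kinetic exponent n can be sketched as follows (synthetic data; the rate constant and exponent are assumed values, not the paper's measurements):

```python
import numpy as np

def jma_fraction(t, k, n):
    """Johnson-Mehl-Avrami transformed fraction f(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

def avrami_exponent(t, f):
    """Recover the kinetic exponent n from the slope of the JMA double-log
    linearization  ln(-ln(1 - f)) = n*ln(t) + n*ln(k)."""
    mask = (f > 0.01) & (f < 0.99)   # avoid the numerically unstable tails
    y = np.log(-np.log(1.0 - f[mask]))
    slope, _ = np.polyfit(np.log(t[mask]), y, 1)
    return slope

t = np.linspace(0.05, 10.0, 400)
f = jma_fraction(t, k=0.5, n=2.5)    # synthetic kinetics, n = 2.5 assumed
print(round(avrami_exponent(t, f), 3))
```

Applied to volume fractions extracted from a thermal expansion curve, the same double-log fit yields the kinetic exponent n compared between the as-cast and deformed samples.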

]]>Entropy doi: 10.3390/e20120916

Authors: Mikołaj Morzy Tomasz Kajdanowicz

Graph energy is the energy of the matrix representation of a graph, where the energy of a matrix is the sum of its singular values. Depending on the matrix chosen, one can consider graph energy, Randić energy, Laplacian energy, distance energy, and many others. Although theoretical properties of various graph energies have been investigated in mathematics, chemistry, physics, and graph theory, these explorations have been limited to relatively small graphs representing chemical compounds or to theoretical graph classes with strictly defined properties. In this paper we investigate the usefulness of the concept of graph energy in the context of large, complex networks. We show that when graph energies are applied to local egocentric networks, the values of these energies correlate strongly with vertex centrality measures. In particular, for some generative network models graph energies tend to correlate strongly with the betweenness and the eigencentrality of vertices. As the exact computation of these centrality measures is expensive and requires global processing of a network, our research opens the possibility of devising efficient algorithms for the estimation of these centrality measures based only on local information.
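As a minimal illustration of the definition used here (not the authors' code), the energy of a graph and of a vertex's egocentric subgraph can be computed directly from the adjacency matrix:

```python
import numpy as np

def graph_energy(adj):
    """Graph energy: the sum of singular values of the matrix representation."""
    return float(np.sum(np.linalg.svd(np.asarray(adj, dtype=float), compute_uv=False)))

def ego_energy(adj, v):
    """Energy of the egocentric network of vertex v: v together with its
    neighbours and all edges among them."""
    adj = np.asarray(adj, dtype=float)
    nodes = [v] + [u for u in range(len(adj)) if adj[v, u]]
    return graph_energy(adj[np.ix_(nodes, nodes)])

# triangle K3: adjacency eigenvalues are 2, -1, -1, so the energy is 4
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# star with three leaves: eigenvalues are +/-sqrt(3), 0, 0, so the energy is 2*sqrt(3)
S3 = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
```

For an undirected graph the singular values are the absolute eigenvalues of the (symmetric) adjacency matrix, so this matches the classical definition of graph energy.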

]]>Entropy doi: 10.3390/e20120915

Authors: Kaijin Huang Lin Chen Xin Lin Haisong Huang Shihao Tang Feilong Du

In order to improve the wear and corrosion resistance of an AZ91D magnesium alloy substrate, an Al0.5CoCrCuFeNi high-entropy alloy coating was successfully prepared on an AZ91D magnesium alloy surface by laser cladding using mixed elemental powders. Optical microscopy (OM), scanning electron microscopy (SEM), and X-ray diffraction were used to characterize the microstructure of the coating. The wear resistance and corrosion resistance of the coating were evaluated by dry sliding wear and potentiodynamic polarization curve test methods, respectively. The results show that the coating was composed of a simple FCC solid solution phase with a microhardness about 3.7 times higher than that of the AZ91D matrix and even higher than that of the same high-entropy alloy prepared by an arc melting method. The coating had better wear resistance than the AZ91D matrix, and the wear rate was about 2.5 times lower than that of the AZ91D matrix. Moreover, the main wear mechanisms of the coating and the AZ91D matrix were different. The former was abrasive wear and the latter was adhesive wear. The corrosion resistance of the coating was also better than that of the AZ91D matrix because the corrosion potential of the former was more positive and the corrosion current was smaller.

]]>Entropy doi: 10.3390/e20120914

Authors: Jonathan Maack Bruce Turkington

Nonequilibrium statistical models of point vortex systems are constructed using an optimal closure method, and these models are employed to approximate the relaxation toward equilibrium of systems governed by the two-dimensional Euler equations, as well as the quasi-geostrophic equations for either single-layer or two-layer flows. Optimal closure refers to a general method of reduction for Hamiltonian systems, in which macroscopic states are required to belong to a parametric family of distributions on phase space. In the case of point vortex ensembles, the macroscopic variables describe the spatially coarse-grained vorticity. Dynamical closure in terms of those macrostates is obtained by optimizing over paths in the parameter space of the reduced model subject to the constraints imposed by conserved quantities. This optimization minimizes a cost functional that quantifies the rate of information loss due to model reduction, meaning that an optimal path represents a macroscopic evolution that is most compatible with the microscopic dynamics in an information-theoretic sense. A near-equilibrium linearization of this method is used to derive dissipative equations for the low-order spatial moments of ensembles of point vortices in the plane. These severely reduced models describe the late-stage evolution of isolated coherent structures in two-dimensional and geostrophic turbulence. For single-layer dynamics, they approximate the relaxation of initially distorted structures toward axisymmetric equilibrium states. For two-layer dynamics, they predict the rate of energy transfer in baroclinically perturbed structures returning to stable barotropic states. Comparisons against direct numerical simulations of the fully-resolved many-vortex dynamics validate the predictive capacity of these reduced models.

]]>Entropy doi: 10.3390/e20120913

Authors: Irfan Younas Majid Khan

In this research article, we propose a new structure, namely the inverse left almost semigroup (LA-semigroup), to add confusion to our proposed image encryption scheme, along with discrete and continuous chaotic systems that complement the diffusion characteristics. The performance analysis was conducted in terms of correlation analysis, pixel uniformity analysis, sensitivity analysis, information entropy analysis, and pixel-difference-based measurements. The results show that the proposed algorithm has better information security properties and can provide strong security and high performance.

]]>Entropy doi: 10.3390/e20120912

Authors: Jean-Philippe Ansermet Sylvain D. Brechet

The Seebeck effect is derived within the thermodynamics of irreversible processes when the generalized forces contain the magnetic term M&nabla;B. This term appears in the formalism when the magnetic field is treated as a state variable. Two subsystems are considered: one representing the atomic magnetic moments, and the other, mobile charges carrying a magnetic dipole moment. A magnetic contribution to the Seebeck coefficient is identified, proportional to the logarithmic derivative of the magnetization with respect to temperature. A brief review of experimental data on magneto-thermopower in magnetic metals illustrates this magnetic effect on thermally-driven charge transport.

]]>Entropy doi: 10.3390/e20120911

Authors: T. P. C. Klaver D. Simonovic M. H. F. Sluiter

We used the Thermo-Calc High Entropy Alloy CALPHAD database to determine the stable phases of AlCrMnNbTiV, AlCrMoNbTiV, AlCrFeTiV and AlCrMnMoTi alloys from 800 to 2800 K. The concentrations of the elements were varied from 1 to 49 atom%. A five- or six-dimensional grid was constructed, with the stable phases calculated at each grid point. Thermo-Calc was used as a massively parallel tool and three million compositions were calculated, resulting in tens of thousands of compositions for which the alloys form a single disordered body-centered cubic (bcc) phase at 800 K. By selecting the alloy compositions for which a disordered single phase persists down to 800 K, composition &lsquo;islands&rsquo; of high entropy alloys are determined in composition space. The sizes and shapes of such islands provide information about which element combinations have good high entropy alloy forming qualities, as well as about the role of individual elements within an alloy. In most cases disordered single phases form most readily at low temperature when several elements are almost entirely excluded, resulting in essentially ternary alloys. We determined which compositions lie near the centers of the high entropy alloy islands and therefore remain high entropy alloys under small composition changes. These island-center compositions are predicted to be high entropy alloys with the greatest certainty and make good candidates for experimental verification. The search for high entropy islands can be conducted subject to constraints, e.g., requiring a minimum amount of Al and/or Cr to promote oxidation resistance. Imposing such constraints rapidly diminishes the number of high entropy alloy compositions, in some cases to zero.
We find that AlCrMnNbTiV and AlCrMoNbTiV are relatively good high entropy alloy formers, AlCrFeTiV is a poor high entropy alloy former, while AlCrMnMoTi is a poor high entropy alloy former at 800 K but quickly becomes a better high entropy alloy former with increasing temperature.

]]>Entropy doi: 10.3390/e20120910

Authors: Bin Han Jie Wei Feng He Da Chen Zhi Jun Wang Alice Hu Wenzhong Zhou Ji Jung Kai

The partitioning of the alloying elements into the &gamma;&Prime; nanoparticles in a Ni2CoFeCrNb0.15 high entropy alloy was studied by a combination of atom probe tomography and first-principles calculations. The atom probe tomography results show that Co, Fe, and Cr atoms are incorporated into the Ni3Nb-type &gamma;&Prime; nanoparticles, but their partitioning behaviors are significantly different: Co partitions into the &gamma;&Prime; nanoparticles much more readily than Fe and Cr. The first-principles calculations demonstrate that the different partitioning behaviors of Co, Fe and Cr result from the differences in their specific chemical potentials and bonding states in the &gamma;&Prime; phase.

]]>Entropy doi: 10.3390/e20120909

Authors: Luiz G. A. Alves Giuseppe Mangioni Francisco A. Rodrigues Pietro Panzarasa Yamir Moreno

The worldwide trade network has been widely studied through different data sets and network representations with a view to better understanding interactions among countries and products. Here we investigate international trade through the lenses of the single-layer, multiplex, and multi-layer networks. We discuss differences among the three network frameworks in terms of their relative advantages in capturing salient topological features of trade. We draw on the World Input-Output Database to build the three networks. We then uncover sources of heterogeneity in the way strength is allocated among countries and transactions by computing the strength distribution and entropy in each network. Additionally, we trace how entropy evolved, and show how the observed peaks can be associated with the onset of the global economic downturn. Findings suggest how more complex representations of trade, such as the multi-layer network, enable us to disambiguate the distinct roles of intra- and cross-industry transactions in driving the evolution of entropy at a more aggregate level. We discuss our results and the implications of our comparative analysis of networks for research on international trade and other empirical domains across the natural and social sciences.

]]>Entropy doi: 10.3390/e20120908

Authors: Wenrui Wang Jieqian Wang Honggang Yi Wu Qi Qing Peng

The present work investigates the influence of micro-alloyed Mo on the corrosion behavior of (CoCrFeNi)100&minus;xMox high-entropy alloys. All of the (CoCrFeNi)100&minus;xMox alloys exhibit a single face-centered cubic (FCC) solid solution; however, the (CoCrFeNi)97Mo3 alloy also exhibits an ordered sigma (&sigma;) phase enriched in Cr and Mo. With the increase of x (the Mo content) from 1 to 3, the hardness of the (CoCrFeNi)100&minus;xMox alloys increases from 124.8 to 133.6 Vickers hardness (HV), and the compressive yield strength increases from 113.6 MPa to 141.1 MPa, without fracture at up to about 60% compressive strain. The potentiodynamic polarization curves in a 3.5% NaCl solution indicate that the addition of Mo has a beneficial effect on the corrosion resistance to a certain extent, in contrast to the &sigma; phase. Furthermore, as the Mo content increases, the alloys tend to form a passivation film in the 0.5 M H2SO4 solution that inhibits the progress of the corrosion reaction.

]]>Entropy doi: 10.3390/e20120907

Authors: Alessandro Campa Lapo Casetti Ivan Latella Agustín Pérez-Madrid Stefano Ruffo

In nonadditive systems, such as small systems or long-range interacting systems (even in the thermodynamic limit), ensemble inequivalence can be related to the occurrence of negative response functions, which in turn is connected with anomalous concavity properties of the thermodynamic potentials associated with the various ensembles. We show how the type and number of negative response functions depend on which of the quantities E, V and N (energy, volume and number of particles) are constrained in the ensemble. In particular, we consider the unconstrained ensemble, in which E, V and N all fluctuate, which is physically meaningful only for nonadditive systems. In fact, its partition function is associated with the replica energy, a thermodynamic function that identically vanishes when additivity holds but that contains relevant information in nonadditive systems.

]]>Entropy doi: 10.3390/e20120906

Authors: Antonio M. Scarfone

A challenging frontier in physics concerns the study of complex and disordered systems. [...]

]]>Entropy doi: 10.3390/e20120905

Authors: V. María Barragán Kim R. Kristiansen Signe Kjelstrup

By thermoelectric power generation we mean the creation of electrical power directly from a temperature gradient. Semiconductors have been mainly used for this purpose, but these imply the use of rare and expensive materials. We show in this review that ion-exchange membranes may be interesting alternatives for thermoelectric energy conversion, giving Seebeck coefficients around 1 mV/K. Laboratory cells with Ag|AgCl electrodes can be used to find the transported entropies of the ions in the membrane without making assumptions. Non-equilibrium thermodynamics can be used to compute the Seebeck coefficient of this and other cells, in particular the popular cell with calomel electrodes. We review experimental results in the literature on cells with ion-exchange membranes, document the relatively large Seebeck coefficient, and explain with the help of theory its variation with electrode materials and electrolyte concentration and composition. The impact of the membrane heterogeneity and water content on the transported entropies is documented, and it is concluded that this and other properties should be further investigated, to better understand how all transport properties can serve the purpose of thermoelectric energy conversion.

]]>Entropy doi: 10.3390/e20120904

Authors: Lina Zhao Chengyu Liu Shoushui Wei Qin Shen Fan Zhou Jianqing Li

Entropy-based atrial fibrillation (AF) detectors have been applied to short-term electrocardiogram (ECG) analysis. However, existing methods suffer from several limitations. To enhance the performance of entropy-based AF detectors, we have developed a new entropy measure, named EntropyAF, which includes the following improvements: (1) use of a ranged function rather than the Chebyshev function to define vector distance, (2) use of a fuzzy function to determine vector similarity, (3) replacement of the probability estimate with a density estimate for the entropy calculation, (4) use of a flexible distance threshold parameter, and (5) adjustment of the entropy results for the heart rate effect. EntropyAF was trained using the MIT-BIH Atrial Fibrillation (AF) database and tested on clinical wearable long-term AF recordings. Three previous entropy-based AF detectors were used for comparison: sample entropy (SampEn), fuzzy measure entropy (FuzzyMEn) and the coefficient of sample entropy (COSEn). For classifying AF and non-AF rhythms in the MIT-BIH AF database, EntropyAF achieved the highest area under the receiver operating characteristic curve (AUC) value, 98.15%, when using a 30-beat time window, which was higher than COSEn with an AUC of 91.86%; SampEn and FuzzyMEn resulted in much lower AUCs of 74.68% and 79.24%, respectively. For classifying AF and non-AF rhythms in the clinical wearable AF database, EntropyAF also generated the largest values of the Youden index (77.94%), sensitivity (92.77%), specificity (85.17%), accuracy (87.10%), positive predictivity (68.09%) and negative predictivity (97.18%); COSEn had the second-best accuracy of 78.63%, followed by accuracies of 65.08% for FuzzyMEn and 59.91% for SampEn. EntropyAF also generated the highest classification accuracy when using a 12-beat time window, and a time cost analysis verified its efficiency. These results show the better discrimination ability of EntropyAF for identifying AF, indicating that it would be useful for practical clinical wearable AF screening.
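For orientation, the baseline detector SampEn compared above can be sketched in a few lines. This is the standard Richman&ndash;Moorman definition with the conventional tolerance r = 0.2 times the standard deviation; EntropyAF's fuzzy, density-based and heart-rate refinements are not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B), where B and A count matching template pairs of
    length m and m+1 under the Chebyshev (maximum) distance."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def matching_pairs(length):
        # use the same number of templates (N - m) for both lengths
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    count += 1
        return count
    B, A = matching_pairs(m), matching_pairs(m + 1)
    return float(-np.log(A / B)) if A > 0 and B > 0 else float('inf')

# a perfectly regular rhythm has zero sample entropy ...
regular = np.tile([0.0, 1.0], 50)
# ... while an irregular series has a clearly positive one
irregular = np.random.default_rng(1).standard_normal(200)
```

Low SampEn indicates regularity; AF episodes raise the entropy of the beat-interval series, which is what all the detectors compared in this abstract exploit.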

]]>Entropy doi: 10.3390/e20120903

Authors: Ali Chamkha Fatih Selimefendigil Hakan Oztop

In this study, the effects of different electrical conductivity models on the magnetohydrodynamic mixed convection of nanofluids in a lid-driven triangular cavity were numerically investigated with a finite element method. The effects of the Richardson number and Hartmann number on the convective heat transfer characteristics were analyzed for various electrical conductivity models of nanofluids. The average Nusselt number decreases for higher Hartmann and Richardson numbers. Discrepancies in the local and average heat transfer exist between the different electrical conductivity models, and they are larger for higher values of the Richardson number and Hartmann number. The total entropy generation rate was found to decrease with higher values of the Richardson number and Hartmann number, while discrepancies exist between the various electrical conductivity models. When the magnetic field is imposed, the entropy generation rate versus solid particle volume fraction curve behaves differently depending on the range of the solid particle volume fraction.

]]>Entropy doi: 10.3390/e20120902

Authors: Guobing Qian Dan Luo Shiyuan Wang

The maximum correntropy criterion has been extended to the complex domain as the maximum complex correntropy criterion (MCCC) for dealing with complex-valued data in the presence of impulsive noise. Compared with the correntropy-based loss, the kernel risk-sensitive loss (KRSL) defined in kernel space has demonstrated a superior performance surface in the complex domain. However, there is no report of a recursive KRSL algorithm in the complex domain. Therefore, in this paper we propose a recursive complex KRSL algorithm called the recursive minimum complex kernel risk-sensitive loss (RMCKRSL). In addition, we analyze its stability and obtain the theoretical value of the excess mean square error (EMSE), both of which are supported by simulations. Simulation results verify that the proposed RMCKRSL outperforms the MCCC, the generalized MCCC (GMCCC), and traditional recursive least squares (RLS).

]]>Entropy doi: 10.3390/e20120901

Authors: Dora M. Ballesteros Jimmy Peña Diego Renza

Image encryption methods aim to protect content privacy. Typically, they encompass scrambling and diffusion: every pixel of the image is permuted (scrambling) and its value is transformed according to a key (diffusion). Although several methods have been proposed in the literature, some of them have been cryptanalyzed. In this paper, we present a novel method that deviates from the traditional schemes. We use variable-length codes based on the Collatz conjecture to transform the content of the image into non-intelligible audio; therefore, the scrambling and diffusion processes are performed simultaneously in a non-linear way. With our method, different ciphered audio is obtained every time, and it depends exclusively on the selected key (the size of the key space is equal to 8.57 &times; 10^506). Several tests were performed to analyze the randomness of the ciphered audio signals and the sensitivity of the key. Firstly, it was found that the entropy and the level of disorder of the ciphered audio signals are very close to the maximum value of randomness. Secondly, fractal behavior was detected in the scatter plots of adjacent samples, completely altering the behavior observed in natural images. Finally, if the key is slightly modified, the image cannot be recovered. With the above results, it is concluded that our method is very useful in image privacy protection applications.
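The idea of a Collatz-based variable-length code can be illustrated with a toy sketch; this is an assumption for illustration only, as the paper's actual codes and their mapping to audio are more elaborate. Each integer is encoded by the parity of every Collatz step until the trajectory reaches 1, and decoded by replaying the inverse steps backwards:

```python
def collatz_encode(n):
    """Emit one bit per Collatz step: '0' for a halving step (n even),
    '1' for a 3n+1 step (n odd), until the trajectory reaches 1."""
    assert n >= 1
    bits = []
    while n != 1:
        if n % 2 == 0:
            bits.append('0')
            n //= 2
        else:
            bits.append('1')
            n = 3 * n + 1
    return ''.join(bits)

def collatz_decode(code):
    """Invert the code by applying the inverse steps in reverse order:
    '0' undoes a halving (double), '1' undoes a 3n+1 step."""
    n = 1
    for bit in reversed(code):
        n = n * 2 if bit == '0' else (n - 1) // 3
    return n
```

Codeword lengths vary irregularly with the input (for example, 6 encodes to "01010000"), which is what makes the mapping non-linear; note these codewords are not prefix-free, so a real scheme needs a delimiting strategy on top.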

]]>Entropy doi: 10.3390/e20120900

Authors: Fuxiang Zhang Yang Tong Ke Jin Hongbin Bei William J. Weber Yanwen Zhang

In the present study, we have revealed that (NiCoFeCr)100&minus;xPdx (x = 1, 3, 5, 20 atom%) high-entropy alloys (HEAs) have both local and long-range lattice distortions by utilizing X-ray total scattering, X-ray diffraction, and extended X-ray absorption fine structure methods. The local lattice distortion, determined from the lattice constant difference between the local and average structures, was found to be proportional to the Pd content. A small amount of Pd doping (1 atom%) yields long-range lattice distortion, which is demonstrated by a larger (200) lattice plane spacing than the value expected from the average structure; however, the degree of long-range lattice distortion is not sensitive to the Pd concentration. The structural stability of these distorted HEAs under high pressure was also examined. The experimental results indicate that doping with a small amount of Pd significantly enhances the stability of the fcc phase by increasing the fcc-to-hcp transformation pressure from ~13.0 GPa in NiCoFeCr to 20&ndash;26 GPa in the Pd-doped HEAs, and NiCoFeCrPd maintains its fcc lattice up to 74 GPa, the maximum pressure reached in the current experiments.

]]>Entropy doi: 10.3390/e20120899

Authors: Stéphane Gorsse Oleg Senkov

This study examines one of the limitations of CALPHAD databases when applied to high entropy alloys and complex concentrated alloys. We estimate the level of thermodynamic description that is still sufficient to correctly predict the thermodynamic properties of quaternary alloy systems by comparing the results of CALPHAD calculations in which quaternary phase space is extrapolated from binary descriptions with those resulting from complete binary and ternary interaction descriptions. Our analysis shows that the thermodynamic properties of a quaternary alloy can be correctly predicted by direct extrapolation from the respective fully assessed binary systems (i.e., without ternary descriptions) only when (i) binary miscibility gaps are not present, (ii) binary intermetallic phases are absent or present only in small quantities (i.e., when the system has a low density of phase boundaries), and (iii) ternary intermetallic phases are not present. Because the locations of the phase boundaries and the possibility of formation of ternary phases are not known when evaluating novel composition space, a higher-credibility database is still preferable, while calculations using lower-credibility databases may be questionable and require additional experimental verification.

]]>Entropy doi: 10.3390/e20120898

Authors: Claude G. Dufour

The study of dense gases and liquids requires consideration of the interactions between the particles and the correlations created by these interactions. In this article, the N-variable distribution function which maximizes the uncertainty (Shannon&rsquo;s information entropy) and admits as marginals a set of (N&minus;1)-variable distribution functions is, by definition, free of N-order correlations. This way of defining correlations is valid for stochastic systems described by discrete or continuous variables and for equilibrium or non-equilibrium states, and correlations of the different orders can be defined and measured. This allows building the grand-canonical expressions of the uncertainty valid for either a dilute or a dense gas system. At equilibrium, for both kinds of systems, the uncertainty becomes identical to the expression of the thermodynamic entropy. Two interesting by-products are also provided by the method: (i) the Kirkwood superposition approximation, and (ii) a series of generalized superposition approximations. A theorem on the temporal evolution of the relevant uncertainty for molecular systems governed by two-body forces is proved, and a conjecture closely related to this theorem sheds new light on the origin of the irreversibility of molecular systems. In this respect, the irreplaceable role played by the three-body interactions is highlighted.

]]>Entropy doi: 10.3390/e20120897

Authors: Yang Liu Limin Wang Minghui Sun

The rapid growth in data makes the quest for highly scalable learners a popular one. To achieve a trade-off between structure complexity and classification accuracy, the k-dependence Bayesian classifier (KDB) can represent different numbers of interdependencies for different data sizes. In this paper, we propose two methods to improve the classification performance of KDB. First, we use minimal-redundancy-maximal-relevance (mRMR) analysis, which sorts the predictive features to identify redundant ones. Second, we propose an improved discriminative model selection that selects an optimal sub-model by removing redundant features and arcs from the Bayesian network. Experimental results on 40 UCI datasets demonstrate that the two techniques are complementary and that the proposed algorithm achieves competitive classification performance, with less classification time than other state-of-the-art Bayesian network classifiers such as tree-augmented naive Bayes and averaged one-dependence estimators.
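The mRMR feature-sorting step mentioned above can be sketched as a greedy ranking: each step picks the feature whose relevance (mutual information with the class) minus its mean redundancy (mutual information with already-selected features) is largest. This is an illustrative sketch with discrete variables only; the discriminative model selection over the KDB structure is not reproduced.

```python
import numpy as np

def mutual_info(a, b):
    """Mutual information (in bits) between two discrete variables."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

def mrmr_rank(X, y, k):
    """Greedily pick k features maximizing relevance to y minus the mean
    redundancy with the features already selected."""
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining and len(selected) < k:
        def score(j):
            relevance = mutual_info(X[:, j], y)
            redundancy = (np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# toy data: column 0 predicts y perfectly, column 1 is noise, column 2 duplicates column 0
rng = np.random.default_rng(0)
y = np.tile([0, 1], 30)
X = np.column_stack([y, rng.integers(0, 2, y.size), y])
ranking = mrmr_rank(X, y, 2)
```

The redundancy penalty is what distinguishes mRMR from plain relevance ranking: the duplicate column scores zero once its twin has been selected.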

]]>Entropy doi: 10.3390/e20120896

Authors: Igal Sason

This paper provides tight bounds on the R&eacute;nyi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the R&eacute;nyi entropy of a discrete random variable with finite support is derived as a function of the size of the support and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which focuses on the Shannon entropy, and it strengthens and generalizes the results of that paper to R&eacute;nyi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and the lossless data compression of discrete memoryless sources.
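For reference, the quantity being bounded is the Rényi entropy of order α, H_α(P) = (1/(1−α)) log Σ p_i^α, which recovers the Shannon entropy as α → 1 and is nonincreasing in α. A minimal helper (an illustration of the definition, not the paper's bounds):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha > 0 (in bits); alpha = 1 is the Shannon limit."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        # Shannon entropy, the alpha -> 1 limit
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

dist = [0.5, 0.25, 0.125, 0.125]
```

On this dyadic distribution the Shannon entropy is exactly 1.75 bits, with H_0.5 above it and H_2 (the collision entropy) below it, as the monotonicity in α requires.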

]]>Entropy doi: 10.3390/e20120895

Authors: Mohammad Yaghoub Abdollahzadeh Jamalabadi

The excellent thermal characteristics of nanoparticles have increased their application in the field of heat transfer. In this paper, a thermophysical and geometrical parameter study is performed to minimize the total entropy generation of the viscoelastic flow of a nanofluid. Entropy generation is calculated with respect to the volume fraction (&lt;0.04), the Reynolds number (20,000&ndash;100,000), and the diameter of the microchannel (20&ndash;20,000 &mu;m) with a circular cross-section under constant flux. As is shown, most of the entropy generation is due to heat transfer, and increasing the diameter of the channel increases the Bejan number. The contribution of thermal entropy generation in the microchannel is very small, and the major part of the entropy generation is attributable to friction. The maximum in-channel entropy generation occurs in nanofluids with TiO2, CuO, Cu, and Ag nanoparticles, in that order, whereas in the microchannel this behavior is inverted: the minimum entropy generation occurs in nanofluids with CuO, Cu, Ag, and TiO2 nanoparticles, in that order. In both the channel and the microchannel, for all nanofluids except water-TiO2, increasing the volume fraction of nanoparticles decreases the entropy generation, and the total entropy generation increases as the Reynolds number increases.

]]>Entropy doi: 10.3390/e20120894

Authors: Olimpia Lombardi Cristian López

Integrated Information Theory (IIT) intends to provide a principled theoretical approach able to characterize consciousness both quantitatively and qualitatively. Starting off by identifying the fundamental properties of experience itself, IIT develops a formal framework that relates those properties to the physical substratum of consciousness. One of the central features of IIT is the role that information plays in the theory. On the one hand, one of the self-evident truths about consciousness is that it is informative. On the other hand, mechanisms and systems of mechanisms can contribute to consciousness only if they specify a system&rsquo;s intrinsic information. In this paper, we conceptually analyze the notion of information underlying IIT. Following previous work on the matter, we argue that information within IIT should be understood in the light of a causal-manipulabilist view of information (L&oacute;pez and Lombardi 2018), according to which information is an entity that must be involved in causal links in order to be precisely defined. Those causal links are brought to light by means of interventionist procedures following Woodward&rsquo;s and Pearl&rsquo;s versions of the manipulability theories of causation.

]]>Entropy doi: 10.3390/e20120893

Authors: Ying-Hsiu Lin Yin-Yin Liao Chih-Kuang Yeh Kuen-Cheh Yang Po-Hsiang Tsui

Nonalcoholic fatty liver disease (NAFLD) is the leading cause of advanced liver diseases. Fat accumulation in the liver changes the hepatic microstructure and the corresponding statistics of ultrasound backscattered signals. Acoustic structure quantification (ASQ) is a typical model-based method for analyzing backscattered statistics. Shannon entropy, initially proposed in information theory, has been demonstrated as a more flexible solution for imaging and describing backscattered statistics without considering data distribution. NAFLD is a hepatic manifestation of metabolic syndrome (MetS). Therefore, we investigated the association between ultrasound entropy imaging of NAFLD and MetS for comparison with that obtained from ASQ. A total of 394 participants were recruited to undergo physical examinations and blood tests to diagnose MetS. Then, abdominal ultrasound screening of the liver was performed to calculate the ultrasonographic fatty liver indicator (US-FLI) as a measure of NAFLD severity. The ASQ analysis and ultrasound entropy parametric imaging were further constructed using the raw image data to calculate the focal disturbance (FD) ratio and entropy value, respectively. Tertiles were used to split the data of the FD ratio and entropy into three groups for statistical analysis. The correlation coefficient r, probability value p, and odds ratio (OR) were calculated. With an increase in the US-FLI, the entropy value increased (r = 0.713; p &lt; 0.0001) and the FD ratio decreased (r = &ndash;0.630; p &lt; 0.0001). In addition, the entropy value and FD ratio correlated with metabolic indices (p &lt; 0.0001). 
After adjustment for confounding factors, entropy imaging (OR = 7.91, 95% confidence interval (CI): 0.96&ndash;65.18 for the second tertile; OR = 20.47, 95% CI: 2.48&ndash;168.67 for the third tertile; p = 0.0021) still provided a more significant link to the risk of MetS than did the FD ratio obtained from ASQ (OR = 0.55, 95% CI: 0.27&ndash;1.14 for the second tertile; OR = 0.42, 95% CI: 0.15&ndash;1.17 for the third tertile; p = 0.13). Thus, ultrasound entropy imaging can provide information on hepatic steatosis. In particular, ultrasound entropy imaging can describe the risk of MetS for individuals with NAFLD and is superior to the conventional ASQ technique.

]]>Entropy doi: 10.3390/e20110892

Authors: Bingfeng Wang Xianrui Yao Chu Wang Xiaoyong Zhang Xiaoxia Huang

The equiatomic NiCrFeCoMn high-entropy alloy prepared by arc melting has a single crystallographic structure. The mechanical properties and microstructure of the NiCrFeCoMn high-entropy alloy deformed at high strain rates (900 s&minus;1 to 4600 s&minus;1) were investigated. The yield strength of the NiCrFeCoMn high-entropy alloy is sensitive to changes in strain rate over this range. Serration behavior was also observed in the flow stress curves of the alloy deformed at strain rates ranging from 900 s&minus;1 to 4600 s&minus;1. The Zerilli&ndash;Armstrong constitutive equation can be used to predict the flow stress curves of the NiCrFeCoMn high-entropy alloy. Large amounts of deformation bands led to the pronounced serration behavior of the NiCrFeCoMn high-entropy alloy under dynamic loading.

Entropy doi: 10.3390/e20110891

Authors: Teddy Craciunescu Andrea Murari Michela Gelfusa

A new measure for characterizing the coupling of interconnected dynamical systems is proposed. The method is based on the representation of time series as weighted cross-visibility networks. The weights are introduced as the metric distance between connected nodes. The structure of the networks, which depends on the coupling strength, is quantified via the entropy of the weighted adjacency matrix. The method has been tested on several coupled model systems with different individual properties. The results show that the proposed measure is able to distinguish the degree of coupling of the studied dynamical systems. The original use of the geodesic distance on Gaussian manifolds as a metric distance, which is able to take into account the noise inherently superimposed on the experimental data, provides significantly better results in the calculation of the entropy, improving the reliability of the coupling estimates. The application to the interaction between the El Niño Southern Oscillation (ENSO) and the Indian Ocean Dipole, and to the influence of ENSO on influenza pandemic occurrence, illustrates the potential of the method for real-life problems.
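The core step, entropy of a weighted adjacency structure built from visibility, can be sketched as follows. This is a simplified single-series natural visibility graph with plain value-difference weights; the paper uses cross-visibility between two series and the geodesic distance on Gaussian manifolds, so both choices here are placeholder assumptions.

```python
import numpy as np

def visibility_edges(y):
    """Natural visibility graph: samples a < b are connected if every
    intermediate sample lies strictly below the line joining them."""
    n = len(y)
    edges = []
    for a in range(n - 1):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

def adjacency_entropy(y):
    """Shannon entropy of the normalized edge weights.  The weight here is
    the plain distance between node values, a stand-in for the geodesic
    distance on Gaussian manifolds used in the paper."""
    edges = visibility_edges(y)
    w = np.array([abs(y[a] - y[b]) + 1e-12 for a, b in edges])
    p = w / w.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
series = rng.normal(size=50)
print(round(adjacency_entropy(series), 3))
```

As the coupling between two systems grows, the weight distribution of the cross-visibility network changes, and the entropy of the adjacency matrix tracks that change.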

Entropy doi: 10.3390/e20110890

Authors: Nicholas Derimow Reza Abbaschian

It has been 14 years since the discovery of high-entropy alloys (HEAs), an alloying concept that has reinvigorated materials scientists to explore unconventional alloy compositions and multicomponent alloy systems. Many authors refer to these alloys as multi-principal element alloys (MPEAs) or complex concentrated alloys (CCAs) in order to place fewer restrictions on what constitutes an HEA. Regardless of classification, the research is rooted in the exploration of structure–property–processing relations in these multicomponent alloys, with the aim of surpassing the physical properties of conventional materials. More recent studies show that some of these alloys undergo liquid phase separation, a phenomenon largely dictated by a low entropy of mixing and a positive mixing enthalpy. Studies posit that the positive mixing enthalpy of the binary and ternary components contributes substantially to the formation of liquid miscibility gaps. The objective of this review is to bring forth and summarize the findings of the experiments which detail liquid phase separation (LPS) in HEAs, MPEAs, and CCAs, and to draw parallels between HEAs and the conventional alloy systems which undergo liquid–liquid separation. A positive mixing enthalpy, if not compensated by the entropy of mixing, will lead to liquid phase separation. It appears that Co, Ni, and Ti promote miscibility in HEAs/CCAs/MPEAs, while Cr, V, and Nb raise the miscibility gap temperature and increase LPS. Moreover, the addition of appropriate amounts of Ni to CoCrCu eliminates immiscibility, as in the cases of dendritically solidifying CoCrCuNi, CoCrCuFeNi, and CoCrCuMnNi.

Entropy doi: 10.3390/e20110889

Authors: Sanghita Mridha Mageshwari Komarasamy Sanjit Bhowmick Rajiv S. Mishra Sundeep Mukherjee

High entropy alloys (HEAs) have attracted widespread interest due to their unique properties at many different length-scales. Here, we report the fabrication of nanocrystalline (NC) Al0.1CoCrFeNi high entropy alloy and subsequent small-scale plastic deformation behavior via nano-pillar compression tests. Exceptional strength was realized for the NC HEA compared to pure Ni of similar grain sizes. Grain boundary mediated deformation mechanisms led to high strain rate sensitivity of flow stress in the nanocrystalline HEA.

Entropy doi: 10.3390/e20110888

Authors: Oscar A. Negrete Francisco J. Peña Patricio Vargas

In this work, we report the magnetocaloric effect (MCE) for an electron interacting with an antidot, under the effect of an Aharonov–Bohm flux (AB-flux) subjected to a parabolic confinement potential. We use the Bogachek and Landman model, which additionally allows the study of quantum dots with Fock–Darwin energy levels for vanishing antidot radius and AB-flux. We find that the AB-flux strongly controls the oscillatory behaviour of the MCE, thus acting as a control parameter that switches the effect between cooling and heating. We propose a way to detect AB-flux by measuring temperature differences.

Entropy doi: 10.3390/e20110887

Authors: Petros Gkotsis Emanuele Pugliese Antonio Vezzani

In this work we use clustering techniques to identify groups of firms competing in similar technological markets. Our clustering properly highlights technological similarities, grouping together firms normally classified in different industrial sectors. Technological development leads to a continuously changing structure of industries and firms. For this reason, we propose a data-driven approach to classifying firms that allows for fast adaptation of the classification to the changing technological landscape. In this respect we differ from previous taxonomic exercises on industries and innovation, which are based on more general common features. In our empirical application, we use patent data as a proxy for firms' capabilities of developing new solutions in different technological fields. On this basis, we extract what we term a Technologically Driven Classification (TDC). In order to validate the result of our exercise, we use information theory to look at the amount of information explained by our clustering and the amount of information shared with an industrial classification. All in all, our approach provides a good grouping of firms on the basis of their technological capabilities and represents an attractive option to compare firms in the technological space and better characterise competition in technological markets.

Entropy doi: 10.3390/e20110885

Authors: Xiaohan Yang Fan Li Wei Zhang Lijun He

Blind/no-reference image quality assessment is performed to accurately evaluate the perceptual quality of a distorted image without prior information from a reference image. In this paper, an effective blind image quality assessment approach based on entropy differences in the discrete cosine transform domain for natural images is proposed. Information entropy is an effective measure of the amount of information in an image. We find that the discrete cosine transform coefficient distribution of distorted natural images shows a pulse-shape phenomenon, which directly affects the entropy differences. A Weibull model is then used to fit the distributions of natural and distorted images, because the Weibull model sufficiently approximates the pulse-shape phenomenon as well as the sharp-peak and heavy-tail phenomena of natural scene statistics. Four features related to entropy differences and the human visual system are extracted from the Weibull model at three image scales. Image quality is assessed by the support vector regression method based on the extracted features. This blind Weibull statistics algorithm is thoroughly evaluated using three widely used databases: LIVE, TID2008, and CSIQ. The experimental results show that the performance of the proposed blind Weibull statistics method is highly consistent with that of human visual perception and superior to that of state-of-the-art blind and full-reference image quality assessment methods in most cases.

Entropy doi: 10.3390/e20110886

Authors: Aldo C. Martínez Aldo Solis Rafael Díaz Hernández Rojas Alfred B. U'Ren Jorge G. Hirsch Isaac Pérez Castillo

Pseudo-random number generators are widely used in many branches of science, mainly in applications related to Monte Carlo methods, although they are deterministic in design and, therefore, unsuitable for tackling fundamental problems in security and cryptography. The natural laws of the microscopic realm provide a fairly simple method to generate non-deterministic sequences of random numbers, based on measurements of quantum states. In practice, however, the experimental devices on which quantum random number generators are based are often unable to pass some tests of randomness. In this review, we briefly discuss two such tests, point out the challenges that we have encountered in experimental implementations and finally present a fairly simple method that successfully generates non-deterministic maximally random sequences.

Entropy doi: 10.3390/e20110884

Authors: Tingyu Zhang Ling Han Wei Chen Himan Shahabi

The main purpose of the present study is to apply three classification models, namely, the index of entropy (IOE) model, the logistic regression (LR) model, and the support vector machine (SVM) model by radial basis function (RBF), to produce landslide susceptibility maps for the Fugu County of Shaanxi Province, China. Firstly, landslide locations were extracted from field investigation and aerial photographs, and a total of 194 landslide polygons were transformed into points to produce a landslide inventory map. Secondly, the landslide points were randomly split into two groups (70/30) for training and validation purposes, respectively. Then, 10 landslide explanatory variables, such as slope aspect, slope angle, altitude, lithology, mean annual precipitation, distance to roads, distance to rivers, distance to faults, land use, and normalized difference vegetation index (NDVI), were selected and the potential multicollinearity problems between these factors were detected by the Pearson Correlation Coefficient (PCC), the variance inflation factor (VIF), and tolerance (TOL). Subsequently, the landslide susceptibility maps for the study region were obtained using the IOE model, the LR–IOE, and the SVM–IOE model. Finally, the performance of these three models was verified and compared using the receiver operating characteristics (ROC) curve. The success rate results showed that the LR–IOE model has the highest accuracy (90.11%), followed by the IOE model (87.43%) and the SVM–IOE model (86.53%). Similarly, the AUC values also showed that the prediction accuracy expresses a similar result, with the LR–IOE model having the highest accuracy (81.84%), followed by the IOE model (76.86%) and the SVM–IOE model (76.61%). Thus, the landslide susceptibility map (LSM) for the study region can provide an effective reference for the Fugu County government to properly address land planning and mitigate landslide risk.

Entropy doi: 10.3390/e20110883

Authors: Angelica Sbardella Emanuele Pugliese Andrea Zaccaria Pasquale Scaramozzino

Development and growth are complex and tumultuous processes. Modern economic growth theories identify some key determinants of economic growth. However, the relative importance of the determinants remains unknown, and additional variables may help clarify the directions and dimensions of the interactions. The novel stream of literature on economic complexity goes beyond aggregate measures of productive inputs and considers instead a more granular and structural view of the productive possibilities of countries, i.e., their capabilities. Different endowments of capabilities are crucial ingredients in explaining differences in economic performances. In this paper we employ economic fitness, a measure of productive capabilities obtained through complex network techniques. Focusing on the combined roles of fitness and some more traditional drivers of growth (GDP per capita, capital intensity, employment ratio, life expectancy, human capital and total factor productivity), we build a bridge between economic growth theories and the economic complexity literature. Our findings show that fitness plays a crucial role in fostering economic growth and, when it is included in the analysis, can be either complementary to traditional drivers of growth or can completely overshadow them. Notably, for the most complex countries, which have the most diversified export baskets and the largest endowments of capabilities, fitness is complementary to the chosen growth determinants in enhancing economic growth. The empirical findings are in agreement with neoclassical and endogenous growth theories. By contrast, for countries with intermediate and low capability levels, fitness emerges as the key growth driver. This suggests that economic models should account for capabilities; in fact, describing the technological possibilities of countries solely in terms of their production functions may lead to a misinterpretation of the roles of factors.

Entropy doi: 10.3390/e20110882

Authors: Nicholas V. Sarlis Efthimios S. Skordas

A strong earthquake of magnitude Mw 6.8 struck Western Greece on 25 October 2018, with an epicenter at 37.515° N, 20.564° E. It was preceded by an anomalous geoelectric signal that was recorded on 2 October 2018 at a measuring station 70 km away from the epicenter. Upon analyzing this signal in natural time, we find that it conforms to the conditions suggested for its identification as precursory Seismic Electric Signal (SES) activity. Notably, the observed lead time of 23 days lies within the range of values that has very recently been identified as being statistically significant for precursory variations of the electric field of the Earth. Moreover, the analysis in natural time of the seismicity subsequent to the SES activity, in the area that was a candidate to suffer this strong earthquake, reveals that the criticality conditions were obeyed early in the morning of 18 October 2018, i.e., almost a week before the strong earthquake occurrence, in agreement with earlier findings. Finally, when employing the recent method of nowcasting earthquakes, which is based on natural time, we find an earthquake potential score of around 80%.

Entropy doi: 10.3390/e20110881

Authors: Karl Heinz Hoffmann Kathrin Kulmus Christopher Essex Janett Prehl

The entropy production rate is a well established measure for the extent of irreversibility in a process. For irreversible processes, one thus usually expects that the entropy production rate approaches zero in the reversible limit. Fractional diffusion equations provide a fascinating testbed for that intuition in that they build a bridge connecting the fully irreversible diffusion equation with the fully reversible wave equation by a one-parameter family of processes. The entropy production paradox describes the very non-intuitive increase of the entropy production rate as that bridge is passed from irreversible diffusion to reversible waves. This paradox has been established for time- and space-fractional diffusion equations on one-dimensional continuous space and for the Shannon, Tsallis and Rényi entropies. After a brief review of the known results, we generalize it to time-fractional diffusion on a finite chain of points described by a fractional master equation.
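For reference, the three entropies in question are standard and, for a probability distribution $p_i$, read as follows; both generalized entropies recover the Shannon entropy in the limit $q \to 1$:

```latex
S_{\mathrm{Shannon}} = -\sum_i p_i \ln p_i, \qquad
S_q^{\mathrm{Tsallis}} = \frac{1 - \sum_i p_i^q}{q - 1}, \qquad
S_q^{\mathrm{R\acute{e}nyi}} = \frac{1}{1 - q} \ln \sum_i p_i^q .
```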

Entropy doi: 10.3390/e20110880

Authors: Jerzy W. Grzymala-Busse Teresa Mroczek

As previous research indicates, a multiple-scanning methodology for the discretization of numerical datasets, based on entropy, is very competitive. Discretization is a process of converting the numerical values of data records into discrete values associated with numerical intervals defined over the domains of the data records. In multiple-scanning discretization, the last step is the merging of neighboring intervals in discretized datasets as a kind of postprocessing. Our objective is to check how the error rate, measured by tenfold cross-validation within the C4.5 system, is affected by such merging. We conducted experiments on 17 numerical datasets, using the same setup of multiple scanning, with three different options for merging: no merging at all, merging based on the smallest entropy, and merging based on the biggest entropy. As a result of the Friedman rank sum test (5% significance level), we concluded that the differences between all three approaches are statistically insignificant; there is no universally best approach. We then repeated all experiments 30 times, recording averages and standard deviations. The test of the difference between averages shows that, for a comparison of no merging with merging based on the smallest entropy, there are statistically highly significant differences (at a 1% significance level). In some cases the smaller error rate is associated with no merging; in other cases it is associated with merging based on the smallest entropy. A comparison of no merging with merging based on the biggest entropy showed similar results. Our final conclusion is therefore that there are highly significant differences between no merging and merging, depending on the dataset, and that the best approach should be chosen by trying all three.
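One entropy-based merging step can be sketched as follows: among all pairs of neighboring intervals, merge the pair whose union has the smallest class entropy. The toy data, the single-step merge, and the interval representation are illustrative assumptions, not the paper's multiple-scanning pipeline.

```python
import math
from collections import Counter

def class_entropy(labels):
    """Shannon entropy (bits) of the class-label distribution of a block."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def merge_smallest_entropy(intervals):
    """One postprocessing step: merge the pair of neighboring intervals
    whose union has the smallest class entropy.  `intervals` is a list of
    lists of class labels, one inner list per discretized interval."""
    best, best_h = None, float("inf")
    for i in range(len(intervals) - 1):
        h = class_entropy(intervals[i] + intervals[i + 1])
        if h < best_h:
            best, best_h = i, h
    merged = (intervals[:best]
              + [intervals[best] + intervals[best + 1]]
              + intervals[best + 2:])
    return merged, best_h

intervals = [["a", "a"], ["a", "b"], ["b", "b"]]
merged, h = merge_smallest_entropy(intervals)
print(len(merged), round(h, 3))
```

"Merging based on the biggest entropy" would simply flip the comparison to pick the neighboring pair with the largest entropy of the union.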

Entropy doi: 10.3390/e20110878

Authors: Qing Wang Zhen Li Shujie Pang Xiaona Li Chuang Dong Peter K. Liaw

High-performance conventional engineering materials (including Al alloys, Mg alloys, Cu alloys, stainless steels, Ni superalloys, etc.) and newly-developed high entropy alloys are all compositionally-complex alloys (CCAs). In these CCA systems, the second-phase particles are generally precipitated in their solid-solution matrix, in which the precipitates are diverse and can result in different strengthening effects. The present work aims at comprehensively generalizing the precipitation behavior and precipitation strengthening in CCAs. First, the morphology evolution of second-phase particles and the precipitation strengthening mechanisms are introduced. Then, the precipitation behaviors in diverse CCA systems are illustrated, especially coherent precipitation. The relationship between particle morphology and strengthening effectiveness is discussed. We emphasize that the challenge ahead is to design stable coherent microstructures in different solid-solution matrices, which will be the most effective approach for enhancing alloy strength.

Entropy doi: 10.3390/e20110879

Authors: Hugo Gabriel Eyherabide Inés Samengo

The study of the neural code aims at deciphering how the nervous system maps external stimuli into neural activity (the encoding phase) and subsequently transforms such activity into adequate responses to the original stimuli (the decoding phase). Several information-theoretical methods have been proposed to assess the relevance of individual response features, as for example, the spike count of a given neuron, or the amount of correlation in the activity of two cells. These methods work under the premise that the relevance of a feature is reflected in the information loss that is induced by eliminating the feature from the response. The alternative methods differ in the procedure by which the tested feature is removed, and the algorithm with which the lost information is calculated. Here we compare these methods, and show that more often than not, each method assigns a different relevance to the tested feature. We demonstrate that the differences are both quantitative and qualitative, and connect them with the method employed to remove the tested feature, as well as the procedure to calculate the lost information. By studying a collection of carefully designed examples, and working on analytic derivations, we identify the conditions under which the relevance of features diagnosed by different methods can be ranked, or sometimes even equated. The condition for equality involves both the amount and the type of information contributed by the tested feature. We conclude that the quest for relevant response features is more delicate than previously thought, and may yield multiple answers depending on methodological subtleties.

Entropy doi: 10.3390/e20110877

Authors: Marian Kupczynski

Bell-type inequalities are proven using oversimplified probabilistic models and/or counterfactual definiteness (CFD). If setting-dependent variables describing measuring instruments are correctly introduced, none of these inequalities may be proven. In spite of this, a belief in a mysterious quantum nonlocality is not fading. Computer simulations of Bell tests allow people to study the different ways in which the experimental data might have been created. They also allow for the generation of various counterfactual experiments’ outcomes, such as repeated or simultaneous measurements performed in different settings on the same “photon-pair”, and so forth. They allow for the reinforcing or relaxing of CFD compliance and/or for studying the impact of various “photon identification procedures”, mimicking those used in real experiments. Data samples consistent with quantum predictions may be generated by using a specific setting-dependent identification procedure. It reflects the active role of instruments during the measurement process. Each of the setting-dependent data samples are consistent with specific setting-dependent probabilistic models which may not be deduced using non-contextual local realistic or stochastic hidden variables. In this paper, we will be discussing the results of these simulations. Since the data samples are generated in a locally causal way, these simulations provide additional strong arguments for closing the door on quantum nonlocality.

Entropy doi: 10.3390/e20110876

Authors: Stanisław Kukla Urszula Siedlecka

In this paper, an investigation of the maximum temperature propagation in a finite medium is presented. The heat conduction in the medium was modelled by using a single-phase-lag equation with fractional Caputo derivatives. The formulation and solution of the problem concern the heat conduction in a slab, a hollow cylinder, and a hollow sphere, which are subjected to a heat source represented by the Robotnov function and a harmonically varying ambient temperature. The problem with time-dependent Robin and homogenous Neumann boundary conditions has been solved by using an eigenfunction expansion method and the Laplace transform technique. The solution of the heat conduction problem was used for determination of the maximum temperature trajectories. The trajectories and propagation speeds of the temperature maxima in the medium depend on the order of fractional derivatives occurring in the heat conduction model. These dependencies for the heat conduction in the hollow cylinder have been numerically investigated.

Entropy doi: 10.3390/e20110875

Authors: Sebastian Deffner

Recent experimental breakthroughs produced the first nano heat engines that have the potential to harness quantum resources. An instrumental question is how their performance measures up against the efficiency of classical engines. For single-ion engines undergoing quantum Otto cycles it has been found that the efficiency at maximal power is given by the Curzon–Ahlborn efficiency. This is rather remarkable, as the Curzon–Ahlborn efficiency was originally derived for endoreversible Carnot cycles. Here, we analyze two examples of endoreversible Otto engines within the same conceptual framework as Curzon and Ahlborn's original treatment. We find that for endoreversible Otto cycles in classical harmonic oscillators the efficiency at maximal power is, indeed, given by the Curzon–Ahlborn efficiency. However, we also find that the efficiency of Otto engines made of quantum harmonic oscillators is significantly larger.
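The Curzon–Ahlborn efficiency has the closed form η_CA = 1 − √(T_c/T_h), in contrast to the Carnot bound η_C = 1 − T_c/T_h. A small numerical sketch, with illustrative bath temperatures:

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Maximum (reversible) efficiency of a heat engine between two baths."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Efficiency at maximal power of an endoreversible engine:
    eta_CA = 1 - sqrt(Tc / Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_cold, t_hot = 300.0, 600.0   # illustrative bath temperatures in kelvin
print(round(carnot_efficiency(t_cold, t_hot), 3))          # 0.5
print(round(curzon_ahlborn_efficiency(t_cold, t_hot), 3))  # 0.293
```

Since √(T_c/T_h) > T_c/T_h for 0 < T_c < T_h, the efficiency at maximal power always lies below the Carnot bound; the paper's point is that quantum-oscillator Otto engines can exceed the Curzon–Ahlborn value while still respecting Carnot.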

Entropy doi: 10.3390/e20110874

Authors: Miguel Melgarejo Nelson Obregon

Information production in both space and time has been highlighted as one of the elements that shapes the footprint of complexity in natural and socio-technical systems. However, information production in urban crime has barely been studied. This work addresses the problem by using multifractal analysis to characterize the spatial information scaling in urban crime reports, and nonlinear processing tools to study the temporal behavior of this scaling. Our results suggest that information scaling in urban crime exhibits dynamics that evolve in low-dimensional chaotic attractors, and this can be observed in several spatio-temporal scales, although some of them are more favorable than others. This evidence has practical implications in terms of defining the characteristic scales to approach urban crime from available data and supporting theoretical perspectives about the complexity of urban crime.

Entropy doi: 10.3390/e20110873

Authors: Zhe Wu Qiang Zhang Lixin Wang Lifeng Cheng Jingbo Zhou

It is a difficult task to analyze the coupling characteristics of rotating machinery fault signals under the influence of complex and nonlinear interference signals. This difficulty stems from the strong noise background of rotating machinery fault feature extraction and from weaknesses, such as mode mixing, in existing Ensemble Empirical Mode Decomposition (EEMD) time–frequency analysis methods. To quantitatively study the nonlinear synchronous coupling characteristics and information transfer characteristics of rotating machinery fault signals between different frequency scales under the influence of complex and nonlinear interference signals, a new nonlinear signal processing method, the harmonic-assisted multivariate empirical mode decomposition method (HA-MEMD), is proposed in this paper. By adding extra high-frequency harmonic-assisted channels and then removing them, the decomposition precision of the Intrinsic Mode Functions (IMFs) can be effectively improved, and mode aliasing can be mitigated. Analysis of simulated signals proves the effectiveness of this method. By combining HA-MEMD with the transfer entropy algorithm and introducing it to the signal processing of rotating machinery, a fault detection method based on high-frequency harmonic-assisted multivariate empirical mode decomposition–transfer entropy (HA-MEMD-TE) was established. The main features of the mechanical transmission system were extracted by the high-frequency harmonic-assisted multivariate empirical mode decomposition method, and the signal, after noise reduction, was used for the transfer entropy calculation. An evaluation index of the rotating machinery state based on HA-MEMD-TE was established to quantitatively describe the degree of nonlinear coupling between signals, so as to effectively evaluate and diagnose the operating state of the mechanical system.
By adding noise at different signal-to-noise ratios, the fault detection ability of the HA-MEMD-TE method against a strong noise background is investigated, demonstrating that the method is reliable and robust. In this paper, transfer entropy is applied to the field of rotating machinery fault diagnosis, providing a new and effective method for early fault diagnosis and for the recognition of performance degradation states of rotating machinery.
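Transfer entropy, the coupling measure used above, can be sketched with a plug-in histogram estimator. The binning, history length of 1, and the synthetic driver–response pair below are illustrative assumptions, not the paper's HA-MEMD pipeline.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Histogram estimate (in bits) of the transfer entropy from x to y with
    history length 1: T(X->Y) = H(y_{t+1} | y_t) - H(y_{t+1} | y_t, x_t)."""
    x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.1 * rng.normal(size=2000)  # y is a delayed copy of x
print(transfer_entropy(x, y) > transfer_entropy(y, x))
```

The asymmetry between T(X→Y) and T(Y→X) is what identifies the direction of information flow between frequency-scale components of the fault signals.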

Entropy doi: 10.3390/e20110872

Authors: Zhong Li Chenxu Wang Linye Yu Yong Gu Minxiang Pan Xiaohua Tan Hui Xu

The present work examines the effects of Sn addition on the magnetic properties and microstructure of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) high-entropy alloys (HEAs). The results show that all samples consist of a mixed structure of a face-centered-cubic (FCC) phase and a body-centered-cubic (BCC) phase. The addition of Sn promotes the formation of the BCC phase, and it also affects the shape of the Cu-rich nano-precipitates in the BCC matrix. The Curie temperature (Tc) of the FCC phase and the saturation magnetization (Ms) of the FeCoNi(CuAl)0.8Snx HEAs increase greatly, while the remanence (Br) decreases, after the addition of Sn to the FeCoNi(CuAl)0.8 HEA. The thermomagnetic curves indicate that the phases of the FeCoNi(CuAl)0.8Snx HEAs transform from FCC with low Tc to BCC with high Tc at temperatures of 600–700 K. This work suggests FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) HEAs as candidate soft magnets for use at high temperatures.

Entropy doi: 10.3390/e20110871

Authors: David Cuesta-Frau Daniel Novák Vacláv Burda Antonio Molina-Picó Borja Vargas Milos Mraz Petra Kavalkova Marek Benes Martin Haluzik

This paper analyses the performance of SampEn and one of its derivatives, Fuzzy Entropy (FuzzyEn), in the context of artifacted blood glucose time series classification. This is a difficult and practically unexplored framework, where the availability of more sensitive and reliable measures could be of great clinical impact. Although the advent of new blood glucose monitoring technologies may reduce the incidence of the problems stated above, incorrect device or sensor manipulation, patient adherence, sensor detachment, time constraints, adoption barriers or affordability can still result in relatively short and artifacted records, as the ones analyzed in this paper or in other similar works. This study is aimed at characterizing the changes induced by such artifacts, enabling the arrangement of countermeasures in advance when possible. Despite the presence of these disturbances, results demonstrate that SampEn and FuzzyEn are sufficiently robust to achieve a significant classification performance, using records obtained from patients with duodenal-jejunal exclusion. The classification results, in terms of an area under the ROC curve (AUC) of up to 0.9, with several tests also yielding AUC values greater than 0.8, and in terms of a leave-one-out average classification accuracy of 80%, confirm the potential of these measures in this context despite the presence of artifacts, with SampEn performing slightly better than FuzzyEn.
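SampEn itself is a standard quantity: SampEn(m, r) = −ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance, self-matches excluded) and A counts the same pairs extended to length m + 1. A minimal sketch follows; m = 2 and r = 0.2·SD are common defaults, and the test signals are synthetic, not glucose records.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B) with Chebyshev distance and r scaled by the
    signal's standard deviation; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)
    n = len(x)

    def count_matches(length):
        # Use n - m templates for both lengths so A and B are comparable.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        matches = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += np.sum(d <= r)
        return matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # highly predictable
noisy = rng.normal(size=400)                        # unpredictable
print(sample_entropy(regular) < sample_entropy(noisy))
```

Regular signals yield low SampEn and irregular signals high SampEn; FuzzyEn replaces the hard threshold d ≤ r with a smooth exponential membership function, which is what makes it less sensitive to the tolerance choice.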

Entropy doi: 10.3390/e20110870

Authors: Grace Villacrés Tobias Koch Aydin Sezgin Gonzalo Vazquez-Vilar

This paper studies a bursty interference channel, where the presence/absence of interference is modeled by a block-i.i.d. Bernoulli process that stays constant for a duration of T symbols (referred to as coherence block) and then changes independently to a new state. We consider both a quasi-static setup, where the interference state remains constant during the whole transmission of the codeword, and an ergodic setup, where a codeword spans several coherence blocks. For the quasi-static setup, we study the largest rate of a coding strategy that provides reliable communication at a basic rate and allows an increased (opportunistic) rate when there is no interference. For the ergodic setup, we study the largest achievable rate. We study how non-causal knowledge of the interference state, referred to as channel-state information (CSI), affects the achievable rates. We derive converse and achievability bounds for (i) local CSI at the receiver side only; (ii) local CSI at the transmitter and receiver side; and (iii) global CSI at all nodes. Our bounds allow us to identify when interference burstiness is beneficial and in which scenarios global CSI outperforms local CSI. The joint treatment of the quasi-static and ergodic setup further allows for a thorough comparison of these two setups.

Entropy doi: 10.3390/e20110868

Authors: Jie Liu Zhao Duan

In this study, a comparative analysis of the statistical index (SI), index of entropy (IOE) and weights of evidence (WOE) models was introduced to landslide susceptibility mapping, and the performance of the three models was validated and systematically compared. As one of the most landslide-prone areas in Shaanxi Province, China, Shangnan County was selected as the study area. Firstly, a series of reports, remote sensing images and geological maps were collected, and field surveys were carried out to prepare a landslide inventory map. A total of 348 landslides were identified in study area, and they were reclassified as a training dataset (70% = 244 landslides) and testing dataset (30% = 104 landslides) by random selection. Thirteen conditioning factors were then employed. Corresponding thematic data layers and landslide susceptibility maps were generated based on ArcGIS software. Finally, the area under the curve (AUC) values were calculated for the training dataset and the testing dataset in order to validate and compare the performance of the three models. For the training dataset, the AUC plots showed that the WOE model had the highest accuracy rate of 76.05%, followed by the SI model (74.67%) and the IOE model (71.12%). In the case of the testing dataset, the prediction accuracy rates for the SI, IOE and WOE models were 73.75%, 63.89%, and 75.10%, respectively. It can be concluded that the WOE model had the best prediction capacity for landslide susceptibility mapping in Shangnan County. The landslide susceptibility map produced by the WOE model had a profound geological and engineering significance in terms of landslide hazard prevention and control in the study area and other similar areas.

]]>Entropy doi: 10.3390/e20110869

Authors: Maurice A. de Gosson

We have shown in previous work that the equivalence of the Heisenberg and Schr&ouml;dinger pictures of quantum mechanics requires the use of the Born and Jordan quantization rules. In the present work we give further evidence that the Born&ndash;Jordan rule is the correct quantization scheme for quantum mechanics. For this purpose we use correct short-time approximations to the action functional, initially due to Makri and Miller, and show that these lead to the desired quantization of the classical Hamiltonian.
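For context, the Born&ndash;Jordan rule in question is the standard equal-weight ordering; as background (not taken from the abstract), for a classical monomial p^m q^n it reads:

```latex
\mathrm{Op}_{\mathrm{BJ}}\left(p^{m} q^{n}\right)
  = \frac{1}{m+1} \sum_{k=0}^{m} \hat{p}^{\,m-k}\, \hat{q}^{\,n}\, \hat{p}^{\,k}
```

in contrast to the Weyl rule, which weights the orderings binomially; the two rules coincide on monomials with at most one exponent greater than one, first differing at p^2 q^2.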

]]>Entropy doi: 10.3390/e20110867

Authors: Xingbin Liu Di Xiao Cong Liu

Quantum image encryption offers major advantages over its classical counterpart in terms of key space, computational complexity, and so on. A novel double quantum image encryption approach based on the quantum Arnold transform (QAT) and qubit random rotation is proposed in this paper, in which the QAT is used to scramble pixel positions and the gray-level information is changed by random qubit rotation. Specifically, the independent random qubit rotations are applied once each in the spatial and frequency domains, with the help of the quantum Fourier transform (QFT). The encryption process accomplishes pixel confusion and diffusion, and finally a noise-like cipher image is obtained. Numerical simulation and theoretical analysis verify that the method is valid and shows superior performance in security and computational complexity.
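The position-scrambling step can be illustrated by the classical Arnold (cat) map, of which the quantum Arnold transform is the quantum-circuit counterpart; this sketch is a classical stand-in, not the paper's quantum implementation:

```python
def arnold_scramble(img, iterations=1):
    """Scramble pixel positions of an N x N image with the Arnold map
    (x, y) -> (x + y, x + 2y) mod N, the classical counterpart of QAT."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_unscramble(img, iterations=1):
    """Invert the map with (x, y) -> (2x - y, y - x) mod N, as needed
    for decryption (the inverse of the matrix [[1, 1], [1, 2]])."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = img[x][y]
        img = out
    return img
```

The map is a bijection on pixel positions, so it confuses positions without touching gray values; the gray-level diffusion in the paper comes from the separate qubit-rotation step.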

]]>Entropy doi: 10.3390/e20110866

Authors: Richard Cant Ayodeji Remi-Omosowon Caroline Langensiepen Ahmad Lotfi

In this paper, a novel approach to the container loading problem is proposed, using a spatial entropy measure to bias a Monte Carlo Tree Search. The proposed algorithm generates layouts that both fit a constrained space and have the “consistency”, or neatness, that enables forklift truck drivers to apply them easily to real shipping containers loaded from one end. Three algorithms are analysed. The first is a basic Monte Carlo Tree Search, driven only by the principle of minimising the occupied length of the container. The second uses the proposed entropy measure to drive an otherwise random process. The third combines these two principles and produces results superior to either. These algorithms are then compared to a classical deterministic algorithm. It is shown that, where the classical algorithm fails, the entropy-driven algorithms still provide good results in a short computational time.
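The abstract does not define the spatial entropy measure itself; one plausible stand-in is the Shannon entropy of the loading-face height profile, sketched below (the function name and interpretation are assumptions, not the paper's definition):

```python
import math
from collections import Counter

def profile_entropy(heights):
    """Shannon entropy (in bits) of a container's column-height profile.

    A hypothetical proxy for the paper's spatial entropy measure: a
    neat, 'consistent' layout has few distinct column heights (low
    entropy), while a ragged layout has many (high entropy), so an
    entropy-biased search would prefer the former.
    """
    counts = Counter(heights)
    total = len(heights)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

neat = profile_entropy([4, 4, 4, 4, 4, 4])    # one level: 0 bits
ragged = profile_entropy([1, 4, 2, 5, 3, 6])  # all distinct: log2(6) bits
```

Using such a measure as a bias term in the tree-search rollout reward would steer the search toward layouts that are easy to reproduce from one end of the container.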

]]>