Entropy
http://www.mdpi.com/journal/entropy
Latest open access articles published in Entropy at http://www.mdpi.com/journal/entropy

<![CDATA[Entropy, Vol. 18, Pages 310: SU(2) Yang–Mills Theory: Waves, Particles, and Quantum Thermodynamics]]>
http://www.mdpi.com/1099-4300/18/9/310
We elucidate how Quantum Thermodynamics at temperature T emerges from pure and classical SU(2) Yang–Mills theory on a four-dimensional Euclidean spacetime slice S^1 × R^3. The concept of a (deconfining) thermal ground state, composed of certain solutions to the fundamental, classical Yang–Mills equation, allows for a unified treatment of both (classical) wave-like and (quantum) particle-like excitations thereof. More precisely, the thermal ground state represents the interplay between nonpropagating, periodic configurations which are electric-magnetically (anti)selfdual in a non-trivial way and possess topological charge of modulus unity. Their trivial-holonomy versions—Harrington–Shepard (HS) (anti)calorons—yield an accurate a priori estimate of the thermal ground state in terms of spatially coarse-grained centers, each containing one quantum of action ℏ localized at its innermost spacetime point, which induce an inert adjoint scalar field ϕ (with |ϕ| spatio-temporally constant). The field ϕ, in turn, implies an effective pure-gauge configuration, a_μ^gs, accurately describing HS (anti)caloron overlap. Spatial homogeneity of the thermal ground-state estimate ϕ, a_μ^gs demands that (anti)caloron centers are densely packed, thus representing a collective departure from (anti)selfduality. Effectively, such a "nervous" microscopic situation gives rise to two static phenomena: a finite ground-state energy density ρ_gs and pressure P_gs with ρ_gs = −P_gs, as well as the (adjoint) Higgs mechanism. The peripheries of HS (anti)calorons are static and resemble (anti)selfdual dipole fields whose apparent dipole moments are determined by |ϕ| and T, protecting them against deformation potentially caused by overlap. Such protection extends to the spatial density of HS (anti)caloron centers.
Thus the vacuum electric permittivity ϵ_0 and magnetic permeability μ_0, supporting the propagation of wave-like disturbances in the U(1) Cartan subalgebra of SU(2), can be reliably calculated for disturbances which do not probe HS (anti)caloron centers. Both ϵ_0 and μ_0 turn out to be temperature independent, in thermal equilibrium but also for an isolated, monochromatic U(1) wave. HS (anti)caloron centers, on the other hand, react to wave-like disturbances that would resolve their spatio-temporal structure by indeterministic emissions of quanta of energy and momentum. Seen thermodynamically, such events are Boltzmann weighted and occur independently at distinct locations in space and instants in (Minkowskian) time, entailing the Bose–Einstein distribution. Small correlative ramifications are associated with effective radiative corrections, e.g., in terms of polarization tensors. We comment on an SU(2) × SU(2) based gauge-theory model describing wave- and particle-like aspects of electromagnetic disturbances within the spectrum investigated experimentally/observationally so far.

Entropy 2016, 18(9), 310; Article; doi: 10.3390/e18090310; published 2016-08-23; authors: Ralf Hofmann

<![CDATA[Entropy, Vol. 18, Pages 312: Combinatorial Intricacies of Labeled Fano Planes]]>
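The claim in the Yang–Mills abstract that Boltzmann-weighted, statistically independent emission events entail the Bose–Einstein distribution is the textbook geometric-series computation, with β = 1/(k_B T):

```latex
\bar{n}(\omega)
  \;=\; \frac{\sum_{n=0}^{\infty} n\, e^{-n\beta\hbar\omega}}
             {\sum_{n=0}^{\infty} e^{-n\beta\hbar\omega}}
  \;=\; \frac{1}{e^{\beta\hbar\omega}-1}\,.
```

Both series are geometric (the numerator is the derivative of the denominator with respect to −βℏω), which is what makes the independence assumption equivalent to Bose–Einstein occupation.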
http://www.mdpi.com/1099-4300/18/9/312
Given a seven-element set X = {1, 2, 3, 4, 5, 6, 7}, there are 30 ways to define a Fano plane on it. Let us call a line of such a Fano plane—that is to say an unordered triple from X—ordinary or defective, according to whether the sum of the two smaller integers from the triple is or is not equal to the remaining one, respectively. A point of the labeled Fano plane is said to be of order s, 0 ≤ s ≤ 3, if there are s defective lines passing through it. With such structural refinement in mind, the 30 Fano planes are shown to fall into eight distinct types. Out of the total of 35 lines, the nine ordinary lines are of five different kinds, whereas the remaining 26 defective lines yield as many as ten distinct types. It is shown that no labeled Fano plane can have all points of zero-th order, or feature just one point of order two. A connection with prominent configurations in Steiner triple systems is also pointed out.

Entropy 2016, 18(9), 312; Letter; doi: 10.3390/e18090312; published 2016-08-23; authors: Metod Saniga

<![CDATA[Entropy, Vol. 18, Pages 272: Sleep Stage Classification Using EEG Signal Analysis: A Comprehensive Survey and New Investigation]]>
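The counts stated in the Fano-plane letter (30 labeled planes, 35 triples, nine ordinary lines) are easy to verify computationally. A minimal sketch in Python, where `BASE` is one standard Fano labeling and every other labeled Fano plane on {1,…,7} arises by permuting the point labels:

```python
from itertools import combinations, permutations

POINTS = (1, 2, 3, 4, 5, 6, 7)
# One reference Fano plane on {1,...,7}; every other labeled Fano plane
# is obtained by relabeling the points of this one.
BASE = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
        {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

def all_fano_planes():
    """Collect the distinct Fano planes on {1..7} as frozensets of 7 lines."""
    planes = set()
    for perm in permutations(POINTS):
        relabel = dict(zip(POINTS, perm))
        planes.add(frozenset(frozenset(relabel[p] for p in line)
                             for line in BASE))
    return planes

def is_ordinary(triple):
    """Ordinary: the two smaller integers sum to the largest one."""
    a, b, c = sorted(triple)
    return a + b == c

planes = all_fano_planes()
triples = list(combinations(POINTS, 3))
ordinary = [t for t in triples if is_ordinary(t)]
print(len(planes), len(triples), len(ordinary))  # 30 35 9
```

The 7!/|Aut| = 5040/168 = 30 count falls out of the deduplication; classifying the planes into the letter's eight types would additionally require counting defective lines through each point.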
http://www.mdpi.com/1099-4300/18/9/272
Sleep specialists often conduct manual sleep stage scoring by visually inspecting the patient's neurophysiological signals collected at sleep labs. This is, generally, a very difficult, tedious and time-consuming task. The limitations of manual sleep stage scoring have escalated the demand for developing Automatic Sleep Stage Classification (ASSC) systems. Sleep stage classification refers to identifying the various stages of sleep and is a critical step in an effort to assist physicians in the diagnosis and treatment of related sleep disorders. The aim of this paper is to survey the progress and challenges in various existing Electroencephalogram (EEG) signal-based methods used for sleep stage identification at each phase (pre-processing, feature extraction and classification), in an attempt to find the research gaps and possibly introduce a reasonable solution. Many of the prior and current related studies use multiple EEG channels, and are based on 30 s or 20 s epoch lengths, which affect the feasibility and speed of ASSC for real-time applications. Thus, in this paper, we also present a novel and efficient technique that can be implemented in an embedded hardware device to identify sleep stages using new statistical features applied to 10 s epochs of single-channel EEG signals. In this study, the PhysioNet Sleep European Data Format (EDF) Database was used. The proposed methodology achieves an average classification sensitivity, specificity and accuracy of 89.06%, 98.61% and 93.13%, respectively, when the decision tree classifier is applied. Finally, our new method is compared with those in recently published studies, which reiterates its high classification accuracy.

Entropy 2016, 18(9), 272; Review; doi: 10.3390/e18090272; published 2016-08-23; authors: Khald Aboalayon, Miad Faezipour, Wafaa Almuhammadi, Saeid Moslehpour

<![CDATA[Entropy, Vol. 18, Pages 307: Weighted-Permutation Entropy Analysis of Resting State EEG from Diabetics with Amnestic Mild Cognitive Impairment]]>
http://www.mdpi.com/1099-4300/18/8/307
Diabetes is a significant public health issue as it increases the risk for dementia and Alzheimer's disease (AD). In this study, we aim to investigate whether weighted-permutation entropy (WPE) and permutation entropy (PE) of resting-state EEG (rsEEG) could be applied as potential objective biomarkers to distinguish type 2 diabetes patients with amnestic mild cognitive impairment (aMCI) from those with normal cognitive function. rsEEG series were acquired from 28 patients with type 2 diabetes (16 aMCI patients and 12 controls), and neuropsychological assessments were performed. The rsEEG signals were analysed using WPE and PE methods. The correlations between the PE or WPE of the rsEEG and the neuropsychological assessments were analysed as well. The WPE in the right temporal (RT) region of the aMCI diabetics was lower than that of the controls, and the WPE was significantly positively correlated with the scores of the Auditory Verbal Learning Test (AVLT) (AVLT-Immediate recall, AVLT-Delayed recall, AVLT-Delayed recognition) and the Wechsler Adult Intelligence Scale Digit Span Test (WAIS-DST). These findings were not obtained with PE. We concluded that the WPE of rsEEG recordings could distinguish aMCI diabetics from diabetic controls with normal cognitive function among the current sample of diabetic patients. Thus, the WPE could be a potential index for assisting the diagnosis of aMCI in type 2 diabetes.

Entropy 2016, 18(8), 307; Article; doi: 10.3390/e18080307; published 2016-08-22; authors: Zhijie Bian, Gaoxiang Ouyang, Zheng Li, Qiuli Li, Lei Wang, Xiaoli Li

<![CDATA[Entropy, Vol. 18, Pages 401: Exploring the Key Risk Factors for Application of Cloud Computing in Auditing]]>
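A minimal pure-Python sketch of permutation entropy and its weighted variant as used in the rsEEG study above (embedding dimension and delay are illustrative defaults, not the paper's settings; WPE weights each ordinal pattern by the variance of its window, so patterns riding on large-amplitude activity count more):

```python
import math
from collections import defaultdict

def ordinal_pattern(window):
    """Rank-order pattern of a window, e.g. (0.1, 0.9, 0.4) -> (0, 2, 1)."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def weighted_permutation_entropy(x, m=3, weighted=True):
    """PE (weighted=False) or WPE of series x, embedding dimension m,
    delay 1, in nats. Assumes a non-constant series when weighted=True,
    since a zero-variance series has zero total weight."""
    acc = defaultdict(float)
    total = 0.0
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        mean = sum(w) / m
        weight = sum((v - mean) ** 2 for v in w) / m if weighted else 1.0
        acc[ordinal_pattern(w)] += weight
        total += weight
    return -sum((a / total) * math.log(a / total)
                for a in acc.values() if a > 0)
```

A strictly monotone series has a single pattern and therefore zero entropy; an alternating series splits evenly between two patterns, giving ln 2.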
http://www.mdpi.com/1099-4300/18/8/401
In the cloud computing information technology environment, cloud computing has advantages such as lower cost, immediate access to hardware resources, lower IT barriers to innovation and higher scalability. For financial audit information flow and processing in a cloud system, however, CPA (Certified Public Accountant) firms need to give special consideration to issues such as system problems and information security. Auditing cloud computing applications is the future trend for CPA firms; given that this issue is an important factor for them and very few studies have investigated it, this study seeks to explore the key risk factors for cloud computing and audit considerations. The dimensions/perspectives of cloud computing audit considerations are broad and cover many criteria/factors, and these risk factors are becoming increasingly complex and interdependent. If the dimensions could be established, the mutually influential relations of the dimensions and criteria determined, and the current execution performance established, then a prioritized improvement strategy could be constructed for use as a reference in CPA firm management decision making, as well as providing CPA firms with a reference for building auditing cloud computing systems. Empirical results show that the key risk factors to consider when using cloud computing in auditing are, in order of priority for improvement: Operations (D), Automating user provisioning (C), Technology Risk (B) and Protection system (A).

Entropy 2016, 18(8), 401; Article; doi: 10.3390/e18080401; published 2016-08-22; authors: Kuang-Hua Hu, Fu-Hsiang Chen, Wei-Jhou We

<![CDATA[Entropy, Vol. 18, Pages 403: Interplay between Lattice Distortions, Vibrations and Phase Stability in NbMoTaW High Entropy Alloys]]>
http://www.mdpi.com/1099-4300/18/8/403
Refractory high entropy alloys (HEA), such as BCC NbMoTaW, represent a promising materials class for next-generation high-temperature applications, due to their extraordinary mechanical properties. A characteristic feature of HEAs is the formation of single-phase solid solutions. For BCC NbMoTaW, recent computational studies revealed, however, a B2(Mo,W;Nb,Ta)-ordering at ambient temperature. This ordering could impact many materials properties, such as thermodynamic, mechanical, or diffusion properties, and hence be of relevance for practical applications. In this work, we theoretically address how the B2-ordering impacts thermodynamic properties of BCC NbMoTaW and how the predicted ordering temperature itself is affected by vibrations, electronic excitations, lattice distortions, and relaxation energies.

Entropy 2016, 18(8), 403; Article; doi: 10.3390/e18080403; published 2016-08-20; authors: Fritz Körmann, Marcel Sluiter

<![CDATA[Entropy, Vol. 18, Pages 402: Analytical Solutions of the Electrical RLC Circuit via Liouville–Caputo Operators with Local and Non-Local Kernels]]>
http://www.mdpi.com/1099-4300/18/8/402
In this work we obtain analytical solutions for the electrical RLC circuit model defined with the Liouville–Caputo and Caputo–Fabrizio derivatives and the new fractional derivative based on the Mittag-Leffler function. Numerical simulations of the alternative models are presented to evaluate the effectiveness of these representations. Different source terms are considered in the fractional differential equations. The classical behaviors are recovered when the fractional order α is equal to 1.

Entropy 2016, 18(8), 402; Article; doi: 10.3390/e18080402; published 2016-08-20; authors: José Gómez-Aguilar, Victor Morales-Delgado, Marco Taneco-Hernández, Dumitru Baleanu, Ricardo Escobar-Jiménez, Maysaa Al Qurashi

<![CDATA[Entropy, Vol. 18, Pages 400: Optimal Noise Benefit in Composite Hypothesis Testing under Different Criteria]]>
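For orientation, the Liouville–Caputo derivative referred to in the RLC abstract is the standard definition (0 < α < 1); the other two operators replace its power-law kernel with an exponential (Caputo–Fabrizio) or a Mittag-Leffler kernel:

```latex
{}^{C}D^{\alpha}_{t} f(t)
  \;=\; \frac{1}{\Gamma(1-\alpha)}
        \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau .
```

As α → 1 this reduces to the ordinary derivative f′(t), which is consistent with the classical RLC behavior being recovered at α = 1.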
http://www.mdpi.com/1099-4300/18/8/400
The detectability of a noise-enhanced composite hypothesis testing problem under different criteria is studied. In this work, the noise-enhanced detection problem is formulated as a noise-enhanced classical Neyman–Pearson (NP), Max–min, or restricted NP problem when the prior information is completely known, completely unknown, or partially known, respectively. Next, the detection performances are compared and the feasible range of the constraint on the minimum detection probability is discussed. Under certain conditions, the noise-enhanced restricted NP problem is equivalent to a noise-enhanced classical NP problem with a modified prior distribution. Furthermore, the corresponding theorems and algorithms are given to search for the optimal additive noise in the restricted NP framework. In addition, the relationship between the optimal noise-enhanced average detection probability and the constraint on the minimum detection probability is explored. Finally, numerical examples and simulations are provided to illustrate the theoretical results.

Entropy 2016, 18(8), 400; Article; doi: 10.3390/e18080400; published 2016-08-19; authors: Shujun Liu, Ting Yang, Mingchun Tang, Hongqing Liu, Kui Zhang, Xinzheng Zhang

<![CDATA[Entropy, Vol. 18, Pages 309: Potential of Entropic Force in Markov Systems with Nonequilibrium Steady State, Generalized Gibbs Function and Criticality]]>
http://www.mdpi.com/1099-4300/18/8/309
In this paper, we revisit the notion of the "minus logarithm of stationary probability" as a generalized potential in nonequilibrium systems and attempt to illustrate its central role in an axiomatic approach to stochastic nonequilibrium thermodynamics of complex systems. It is demonstrated that this quantity arises naturally through both monotonicity results of Markov processes and as the rate function when a stochastic process approaches a deterministic limit. We then undertake a more detailed mathematical analysis of the consequences of this quantity, culminating in a necessary and sufficient condition for the criticality of stochastic systems. This condition is then discussed in the context of recent results about criticality in biological systems.

Entropy 2016, 18(8), 309; Article; doi: 10.3390/e18080309; published 2016-08-18; authors: Lowell Thompson, Hong Qian

<![CDATA[Entropy, Vol. 18, Pages 308: Soft Magnetic Properties of High-Entropy Fe-Co-Ni-Cr-Al-Si Thin Films]]>
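The "minus logarithm of stationary probability" potential discussed above is easy to illustrate for a finite Markov chain; the 3-state row-stochastic chain below is purely illustrative:

```python
import math

# Row-stochastic transition matrix of an irreducible 3-state Markov chain.
P = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]

def stationary(P, iters=10_000):
    """Stationary distribution via power iteration: pi = pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
# Generalized potential of each state: phi_i = -ln pi_i.
phi = [-math.log(p) for p in pi]
```

The most probable state has the lowest potential, mirroring how −ln π plays the role of a generalized Gibbs function for the nonequilibrium steady state.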
http://www.mdpi.com/1099-4300/18/8/308
Soft magnetic properties of Fe-Co-Ni-Al-Cr-Si thin films were studied. As-deposited Fe-Co-Ni-Al-Cr-Si nano-grained thin films showing no magnetic anisotropy were subjected to field-annealing at different temperatures to induce magnetic anisotropy. The optimized magnetic and electrical properties of Fe-Co-Ni-Al-Cr-Si films annealed at 200 °C are a saturation magnetization of 9.13 × 10^5 A/m, coercivity of 79.6 A/m, out-of-plane uniaxial anisotropy field of 1.59 × 10^3 A/m, and electrical resistivity of 3.75 μΩ·m. Based on these excellent properties, we employed such films to fabricate a magnetic thin-film inductor. The performance of the high entropy alloy thin-film inductors is superior to that of an air-core inductor.

Entropy 2016, 18(8), 308; Article; doi: 10.3390/e18080308; published 2016-08-18; authors: Pei-Chung Lin, Chun-Yang Cheng, Jien-Wei Yeh, Tsung-Shune Chin

<![CDATA[Entropy, Vol. 18, Pages 306: Contact-Free Detection of Obstructive Sleep Apnea Based on Wavelet Information Entropy Spectrum Using Bio-Radar]]>
http://www.mdpi.com/1099-4300/18/8/306
Detection and early warning of obstructive sleep apnea (OSA) is meaningful for the diagnosis of sleep illness. This paper proposes a novel method based on the wavelet information entropy spectrum to make an apnea judgment, in the wavelet domain, of the OSA respiratory signal detected by bio-radar. It makes full use of the strong irregularity and disorder of the respiratory signal that result from brain stimulation by real, low airflow during apnea. The experimental results demonstrated that the proposed method is effective for detecting the occurrence of sleep apnea and is also able to detect some apnea cases that the energy spectrum method cannot. Ultimately, the comprehensive judgment accuracy over 10 groups of OSA data is 93.1%, which is promising for the non-contact aided diagnosis of OSA.

Entropy 2016, 18(8), 306; Article; doi: 10.3390/e18080306; published 2016-08-18; authors: Fugui Qi, Chuantao Li, Shuaijie Wang, Hua Zhang, Jianqi Wang, Guohua Lu

<![CDATA[Entropy, Vol. 18, Pages 305: An Efficient Method to Construct Parity-Check Matrices for Recursively Encoding Spatially Coupled LDPC Codes †]]>
http://www.mdpi.com/1099-4300/18/8/305
Spatially coupled low-density parity-check (LDPC) codes have attracted considerable attention due to their promising performance. Recursive encoding of the codes with low delay and low complexity has been proposed in the literature, but with constraints or restrictions. In this manuscript we propose an efficient method to construct parity-check matrices for recursively encoding spatially coupled LDPC codes with arbitrarily chosen node degrees. A general principle is proposed, which provides feasible and practical guidance for the construction of parity-check matrices. According to the specific structure of the matrix, each parity bit at a coupling position is jointly determined by the information bits at the current position and the encoded bits at former positions. Performance analysis in terms of design rate and density evolution is presented. It can be observed that, in addition to the feature of recursive encoding, selected code structures constructed by the newly proposed method may lead to better belief-propagation thresholds than the conventional structures. Finite-length simulation results are provided as well, which verify the theoretical analysis.

Entropy 2016, 18(8), 305; Article; doi: 10.3390/e18080305; published 2016-08-17; authors: Zhongwei Si, Sijie Wang, Junyang Ma

<![CDATA[Entropy, Vol. 18, Pages 304: Entropy as a Metric Generator of Dissipation in Complete Metriplectic Systems]]>
http://www.mdpi.com/1099-4300/18/8/304
This lecture is a short review of the role entropy plays in those classical dissipative systems whose equations of motion may be expressed via a Leibniz Bracket Algebra (LBA). This means that the time derivative of any physical observable f of the system is calculated by putting this f in a "bracket" together with a "special observable" F, referred to as a Leibniz generator of the dynamics. While conservative dynamics is given an LBA formulation in the Hamiltonian framework, so that F is the Hamiltonian H of the system that generates the motion via classical Poisson brackets or quantum commutation brackets, an LBA formulation can be given to classical dissipative dynamics through the Metriplectic Bracket Algebra (MBA): the conservative component of the dynamics is still generated via Poisson algebra by the total energy H, while S, the entropy of the degrees of freedom statistically encoded in friction, generates dissipation via a metric bracket. The motivation for expressing the dynamics through a bracket algebra and a motion-generating function F is to endow the theory of the system at hand with all the powerful machinery of Hamiltonian systems, in terms of symmetries that become evident and readable. A (necessarily partial) overview of the types of systems subject to MBA formulation is presented, and the physical meaning of the quantity S involved in each is discussed. The aim is to review the different MBAs for isolated systems in a synoptic way. At the end of this collection of examples, the fact that dissipative dynamics may be constructed also in the absence of friction with microscopic degrees of freedom is stressed. This reasoning is a hint toward introducing dissipation at a more fundamental level.

Entropy 2016, 18(8), 304; Review; doi: 10.3390/e18080304; published 2016-08-16; authors: Massimo Materassi

<![CDATA[Entropy, Vol. 18, Pages 303: A Geographically Temporal Weighted Regression Approach with Travel Distance for House Price Estimation]]>
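In symbols, the MBA structure described in the metriplectic review reads as follows (a generic sketch of the standard metriplectic form, not a formula quoted from the paper):

```latex
\frac{df}{dt} \;=\; \{f, H\} \;+\; (f, S),
\qquad \{f, S\} = 0,\quad (f, H) = 0 \quad \forall f,
```

where {·,·} is the Poisson bracket and (·,·) the symmetric, semidefinite metric bracket. The degeneracy conditions give dH/dt = 0 (energy conservation) while dS/dt = (S, S) ≥ 0, so S generates dissipation exactly as stated above.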
http://www.mdpi.com/1099-4300/18/8/303
Previous studies have demonstrated that non-Euclidean distance metrics can improve model fit in the geographically weighted regression (GWR) model. However, the GWR model only considers spatial nonstationarity and does not address variations in local temporal issues. Therefore, this paper explores a geographically temporal weighted regression (GTWR) approach that accounts for both spatial and temporal nonstationarity simultaneously to estimate house prices based on travel time distance metrics. Using house price data collected between 1980 and 2016, the house price response and explanatory variables are then modeled using both the GWR and the GTWR approaches. Comparing the GWR model with Euclidean and travel distance metrics, the GTWR model with travel distance obtains the highest value for the coefficient of determination (R^2) and the lowest values for the Akaike information criterion (AIC). The results show that the GTWR model provides a relatively high goodness of fit and sufficient space-time explanatory power with non-Euclidean distance metrics. The results of this study can be used to formulate more effective policies for real estate management.

Entropy 2016, 18(8), 303; Article; doi: 10.3390/e18080303; published 2016-08-16; authors: Jiping Liu, Yi Yang, Shenghua Xu, Yangyang Zhao, Yong Wang, Fuhao Zhang

<![CDATA[Entropy, Vol. 18, Pages 301: Thermal Analysis of Shell-and-Tube Thermoacoustic Heat Exchangers]]>
http://www.mdpi.com/1099-4300/18/8/301
Heat exchangers are of key importance in the overall performance and commercialization of thermoacoustic devices. The main goal in designing efficient thermoacoustic heat exchangers (TAHXs) is the achievement of the required heat transfer rate in conjunction with low acoustic energy dissipation. A numerical investigation is performed to examine the effects of geometry on both the viscous and thermal-relaxation losses of shell-and-tube TAHXs. Further, the impact of the drive ratio as well as the temperature difference between the oscillating gas and the TAHX tube wall on acoustic energy dissipation are explored. While viscous losses decrease with d_i/δ_κ, thermal-relaxation losses increase; however, thermal-relaxation effects mainly determine the acoustic power dissipated in TAHXs. The results indicate the existence of an optimal configuration for which the acoustic energy dissipation is minimized, depending on both the TAHX metal temperature and the drive ratio.

Entropy 2016, 18(8), 301; Article; doi: 10.3390/e18080301; published 2016-08-16; authors: Mohammad Gholamrezaei, Kaveh Ghorbanian

<![CDATA[Entropy, Vol. 18, Pages 299: Determining the Entropic Index q of Tsallis Entropy in Images through Redundancy]]>
http://www.mdpi.com/1099-4300/18/8/299
The Boltzmann–Gibbs and Tsallis entropies are essential concepts in statistical physics, which have found multiple applications in many engineering and science areas. In particular, we focus our interest on their applications to image processing through information theory. We present in this article a novel numeric method to calculate the Tsallis entropic index q characteristic of a given image, considering the image as a non-extensive system. The entropic index q is calculated through q-redundancy maximization, a methodology that comes from information theory. We find better results in grayscale image processing by using the Tsallis entropy with the thresholding index q instead of the Shannon entropy.

Entropy 2016, 18(8), 299; Article; doi: 10.3390/e18080299; published 2016-08-15; authors: Abdiel Ramírez-Reyes, Alejandro Hernández-Montoya, Gerardo Herrera-Corral, Ismael Domínguez-Jiménez

<![CDATA[Entropy, Vol. 18, Pages 300: Assessing the Exergy Costs of a 332-MW Pulverized Coal-Fired Boiler]]>
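The Tsallis entropy used in the image-processing article is itself a one-liner; a sketch with an illustrative grayscale histogram follows (the q-redundancy maximization used to select q is not reproduced here):

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    The limit q -> 1 recovers the Shannon entropy -sum p ln p."""
    if q == 1.0:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs if p > 0)) / (q - 1.0)

# Normalized grayscale histogram of a hypothetical 4-level image region.
hist = [0.5, 0.25, 0.125, 0.125]
```

For this histogram the Shannon value is 1.75 ln 2, and S_q varies smoothly through it as q crosses 1, which is what makes tuning q meaningful for thresholding.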
http://www.mdpi.com/1099-4300/18/8/300
In this paper, we analyze the exergy costs of a real large industrial boiler with the aim of improving efficiency. Specifically, the 350-MW front-fired, natural circulation, single reheat and balanced draft coal-fired boiler forms part of a 1050-MW conventional power plant located in Spain. We start with a diagram of the power plant, followed by a formulation of the exergy cost allocation problem to determine the exergy cost of the product of the boiler as a whole and the expenses of the individual components and energy streams. We also define a productive structure of the system. Furthermore, a proposal for including the exergy of radiation is provided in this study. Our results show that the unit exergy cost of the product of the boiler ranges from 2.352 to 2.5, and that the maximum values are located in the ancillary electrical devices, such as induced-draft fans and coil heaters. Finally, radiation does not have an effect on the electricity cost, but affects at least 30% of the unit exergy cost of the boiler's product.

Entropy 2016, 18(8), 300; Article; doi: 10.3390/e18080300; published 2016-08-15; authors: Victor Rangel-Hernandez, Cesar Damian-Ascencio, Juan Belman-Flores, Alejandro Zaleta-Aguilar

<![CDATA[Entropy, Vol. 18, Pages 302: Heat Transfer and Entropy Generation of Non-Newtonian Laminar Flow in Microchannels with Four Flow Control Structures]]>
http://www.mdpi.com/1099-4300/18/8/302
Flow characteristics and heat transfer performances of carboxymethyl cellulose (CMC) aqueous solutions in microchannels with flow control structures were investigated in this study. The investigations covered various flow rates and concentrations of the CMC aqueous solutions. The results reveal that the pin-finned microchannel has the most uniform temperature distribution on the structured walls, and the average temperature on the structured wall reaches its minimum value in the cylinder-ribbed microchannel at the same flow rate and CMC concentration. Moreover, the protruded microchannel obtains the minimum relative Fanning friction factor f/f0, while the maximum f/f0 is observed in the cylinder-ribbed microchannel. Furthermore, the minimum f/f0 is reached in the CMC2000 cases, and the relative Nusselt number Nu/Nu0 of the CMC2000 cases is larger than that of the other cases in the four structured microchannels. Therefore, 2000 ppm is the recommended concentration of CMC aqueous solutions in all the cases with different flow rates and flow control structures. Pin-finned microchannels are preferred in low flow rate cases, while V-grooved microchannels have the minimum relative entropy generation S'/S0' and the best thermal performance TP at CMC2000 in high flow rates.

Entropy 2016, 18(8), 302; Article; doi: 10.3390/e18080302; published 2016-08-12; authors: Ke Yang, Di Zhang, Yonghui Xie, Gongnan Xie

<![CDATA[Entropy, Vol. 18, Pages 298: Voice Activity Detection Using Fuzzy Entropy and Support Vector Machine]]>
http://www.mdpi.com/1099-4300/18/8/298
This paper proposes support vector machine (SVM) based voice activity detection using fuzzy entropy to improve detection performance under noisy conditions. The proposed voice activity detection (VAD) uses fuzzy entropy (FuzzyEn) as a feature extracted from noise-reduced speech signals to train an SVM model for speech/non-speech classification. The proposed VAD method was tested in various experiments by adding real background noises at different signal-to-noise ratios (SNR), ranging from −10 dB to 10 dB, to actual speech signals collected from the TIMIT database. The analysis shows that the FuzzyEn feature gives better results in discriminating between noise and noise-corrupted speech. The efficacy of the SVM classifier was validated using 10-fold cross validation. Furthermore, the results obtained by the proposed method were compared with those of previous standardized VAD algorithms as well as recently developed methods. The performance comparison suggests that the proposed method is more efficient in detecting speech under various noisy environments, with an accuracy of 93.29%, and that the FuzzyEn feature detects speech efficiently even at low SNR levels.

Entropy 2016, 18(8), 298; Article; doi: 10.3390/e18080298; published 2016-08-12; authors: R. Johny Elton, P. Vasuki, J. Mohanalin

<![CDATA[Entropy, Vol. 18, Pages 295: Information Theoretical Measures for Achieving Robust Learning Machines]]>
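A pure-Python sketch of the FuzzyEn feature used in the VAD paper, following one common definition with an exponential membership function; the parameters m, r and n are illustrative defaults, not necessarily the paper's settings:

```python
import math

def _phi(x, m, r, n):
    """Average fuzzy similarity over all pairs of length-m templates of x."""
    templates = []
    for i in range(len(x) - m):
        w = x[i:i + m]
        mu = sum(w) / m
        templates.append([v - mu for v in w])  # remove local baseline
    sims, count = 0.0, 0
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            # Chebyshev distance between templates, mapped through a fuzzy
            # membership function instead of a hard tolerance cutoff.
            d = max(abs(a - b) for a, b in zip(templates[i], templates[j]))
            sims += math.exp(-(d ** n) / r)
            count += 1
    return sims / count

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """FuzzyEn(m, r, n) = ln(phi_m) - ln(phi_{m+1})."""
    return math.log(_phi(x, m, r, n)) - math.log(_phi(x, m + 1, r, n))
```

A perfectly regular signal scores zero, while an irregular one scores higher, which is what makes the feature useful for separating speech from stationary noise.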
http://www.mdpi.com/1099-4300/18/8/295
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process toward a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples showing that the procedure is indeed successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. That this balance emerges as an analytical relation is also surprising, given the purely numerical operations of the learning machine.

Entropy 2016, 18(8), 295; Article; doi: 10.3390/e18080295; published 2016-08-12; authors: Pablo Zegers, B. Frieden, Carlos Alarcón, Alexis Fuentes

<![CDATA[Entropy, Vol. 18, Pages 296: Temporal Predictability of Online Behavior in Foursquare]]>
http://www.mdpi.com/1099-4300/18/8/296
With the widespread use of Internet technologies, online behaviors play a more and more important role in humans' daily lives. Knowing the times when humans perform their next online activities can be quite valuable for developing better online services, which prompts us to wonder whether the times of users' next online activities are predictable. In this paper, we investigate the temporal predictability of human online activities through exploiting a dataset from the social network Foursquare. By discretizing the inter-event times of users' Foursquare activities into symbols, we map each user's inter-event time sequence to a sequence of inter-event time symbols. By applying the information-theoretic method to the sequences of inter-event time symbols, we show that for a user's Foursquare activities, knowing the time interval between the current activity and the previous activity decreases the entropy of the time interval between the next activity and the current activity; i.e., the time of the user's next Foursquare activity is predictable. Much of the predictability is explained by the equal-interval repeat; that is, users perform consecutive Foursquare activities with approximately equal time intervals. On the other hand, the unequal-interval preference, i.e., the preference for performing Foursquare activities with a fixed time interval after another given time interval, is also an origin of predictability. Furthermore, our results reveal that the Foursquare activities on weekdays have a higher temporal predictability than those on weekends, and that a user's Foursquare activity is more temporally predictable if his/her previous activity was performed in a location that he/she visits more frequently.

Entropy 2016, 18(8), 296; Article; doi: 10.3390/e18080296; published 2016-08-12; authors: Wang Chen, Qiang Gao, Huagang Xiong

<![CDATA[Entropy, Vol. 18, Pages 293: Characterization of Seepage Velocity beneath a Complex Rock Mass Dam Based on Entropy Theory]]>
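The information-theoretic core of the Foursquare predictability argument, namely that knowing the current inter-event time lowers the entropy of the next one, can be sketched with empirical conditional entropy; the symbol alphabet and sequence below are illustrative:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a Counter of symbol frequencies."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conditional_entropy(symbols):
    """Empirical H(X_{t+1} | X_t) for a symbol sequence, in bits."""
    pair_counts = Counter(zip(symbols, symbols[1:]))
    prev_counts = Counter(symbols[:-1])
    total = len(symbols) - 1
    h = 0.0
    for (a, b), c in pair_counts.items():
        h -= (c / total) * math.log2(c / prev_counts[a])
    return h

# Inter-event times discretized into symbols: S(hort), M(edium), L(ong).
seq = list("SSMSSMSSMSSMLSSMSSM")
```

Since conditioning never increases entropy, `conditional_entropy(seq)` is bounded above by the marginal entropy of the next symbol; for near-periodic sequences like the one above the gap is large, which is exactly the paper's "equal-interval repeat" effect.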
http://www.mdpi.com/1099-4300/18/8/293
Owing to the randomness in the fracture flow system, the seepage system beneath a complex rock mass dam is inherently complex and highly uncertain; investigating dam leakage by estimating the spatial distribution of the seepage field with conventional methods is therefore quite difficult. In this paper, entropy theory, as a relation between definiteness and probability, is used to probabilistically analyze the characteristics of the seepage system in a complex rock mass dam. Based on the principle of maximum entropy, an equation for the vertical distribution of the seepage velocity in a dam borehole is derived. The derived distribution is tested against actual field data, and the results show good agreement. According to the entropy of the flow velocity in boreholes, the degree of rupture of the dam bedrock has been successfully estimated. Moreover, a new sampling scheme is presented: the sampling frequency has a negative correlation with the distance to the site of the minimum velocity, which is preferable to the traditional scheme. This paper demonstrates the significant advantage of applying entropy theory to seepage velocity analysis in a complex rock mass dam.

Entropy 2016, 18(8), 293; Article; doi: 10.3390/e18080293; published 2016-08-11; authors: Xixi Chen, Jiansheng Chen, Tao Wang, Huaidong Zhou, Linghua Liu

<![CDATA[Entropy, Vol. 18, Pages 294: Understanding Gating Operations in Recurrent Neural Networks through Opinion Expression Extraction]]>
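The maximum-entropy step the seepage abstract refers to is, in outline, the standard Lagrange-multiplier calculation: maximizing the Shannon entropy of the velocity distribution subject to normalization and a mean-velocity constraint yields an exponential family form. The notation here is generic, not the paper's:

```latex
\max_{f}\; H[f] = -\int_{0}^{u_{\max}} f(u)\,\ln f(u)\, du
\quad\text{s.t.}\quad \int f\,du = 1,\;\; \int u\,f\,du = \bar{u}
\;\;\Longrightarrow\;\; f(u) = e^{-\lambda_{0}-\lambda_{1} u}.
```

The multipliers λ₀, λ₁ are fixed by the two constraints; inverting the cumulative distribution then gives a closed-form vertical velocity profile of the kind the paper tests against borehole data.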
http://www.mdpi.com/1099-4300/18/8/294
Extracting opinion expressions from text is an essential task in sentiment analysis, usually treated as a word-level sequence labeling problem. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. In this paper, we therefore adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro-level perspective for analyzing the run-time processes and gain new insights into how the LSTM selects sources of information through its flexible connections and multiplicative gating operations. Entropy 2016, 18(8), 294; ISSN 1099-4300; published 2016-08-11; doi: 10.3390/e18080294. Authors: Xin Wang, Yuanchao Liu, Ming Liu, Chengjie Sun, Xiaolong Wang.
<![CDATA[Entropy, Vol. 18, Pages 292: On Multi-Scale Entropy Analysis of Order-Tracking Measurement for Bearing Fault Diagnosis under Variable Speed]]>
http://www.mdpi.com/1099-4300/18/8/292
The objective of this paper is to investigate the feasibility and effectiveness of combining envelope extraction with multi-scale entropy (MSE) analysis for identifying different roller bearing faults. The features were extracted from angle-domain vibration signals measured through a hardware-implemented order-tracking technique, so that the characteristics of bearing defects are not affected by the rotating speed. Envelope analysis was applied to the vibration measurements as well as to the selected intrinsic mode function (IMF) separated by the empirical mode decomposition (EMD) method. Using the coarse-graining procedure, the entropy of the envelope signals at different scales was calculated to form the MSE distributions that represent the complexity of the signals. A decision tree was used to distinguish the entropy-related features that reveal the different classes of bearing faults. Entropy 2016, 18(8), 292; ISSN 1099-4300; published 2016-08-10; doi: 10.3390/e18080292. Authors: Tian-Yau Wu, Chang-Ling Yu, Da-Chun Liu.
<![CDATA[Entropy, Vol. 18, Pages 291: Indicators of Evidence for Bioequivalence]]>
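The coarse-graining step of the multi-scale entropy analysis described in the bearing-fault abstract above (Entropy 18, 292) can be sketched as follows. Sample entropy is used here as the per-scale measure because it is the common choice in MSE work; the abstract does not specify which entropy definition the authors use, and the fixed tolerance below is a simplification.

```python
from math import log

def coarse_grain(x, scale):
    """MSE coarse-graining: average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy -ln(A/B): B counts pairs of matching m-length templates,
    A counts pairs of matching (m+1)-length templates, within tolerance r."""
    def matches(length):
        tpl = [x[i:i + length] for i in range(len(x) - length + 1)]
        return sum(
            1
            for i in range(len(tpl))
            for j in range(i + 1, len(tpl))
            if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -log(a / b)
```

A strictly periodic envelope yields a sample entropy near zero at every scale, while an irregular one does not; plotting the entropy against the scale factor gives the MSE distribution the paper feeds to the decision tree.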
http://www.mdpi.com/1099-4300/18/8/291
Some equivalence tests are based on two one-sided tests, where in many applications the test statistics are approximately normal. We define and find evidence for equivalence in Z-tests and then in one- and two-sample binomial tests as well as in t-tests. Multivariate equivalence tests are typically based on statistics with non-central chi-squared or non-central F distributions, in which the non-centrality parameter λ is a measure of heterogeneity of several groups. Classical tests of the null λ ≥ λ0 versus the equivalence alternative λ < λ0 are available, but simple formulae for power functions are not. In these tests, the equivalence limit λ0 is typically chosen by context. We provide extensions of classical variance-stabilizing transformations for the non-central chi-squared and F distributions that are easy to implement and which lead to indicators of evidence for equivalence. Approximate power functions are also obtained via simple expressions for the expected evidence in these equivalence tests. Entropy 2016, 18(8), 291; ISSN 1099-4300; published 2016-08-09; doi: 10.3390/e18080291. Authors: Stephan Morgenthaler, Robert Staudte.
<![CDATA[Entropy, Vol. 18, Pages 290: Control of Self-Organized Criticality through Adaptive Behavior of Nano-Structured Thin Film Coatings]]>
http://www.mdpi.com/1099-4300/18/8/290
In this paper, we develop a strategy for controlling the self-organized critical process, using the example of extreme tribological conditions caused by intensive built-up edge (BUE) formation during machining of hard-to-cut austenitic superduplex stainless steel SDSS UNS32750. From a tribological viewpoint, machining of this material involves intensive seizure and built-up edge formation at the tool/chip interface, which can result in catastrophic tool failure. The built-up edge is considered a very damaging process in the system: the periodic breakage of the build-ups may eventually result in tool tip breakage and thereby lead to a catastrophe (complete loss of workability) in the system. The dynamic process of built-up edge formation is similar to an avalanche. It is governed by the stick-slip phenomenon during friction and is associated with the self-organized critical process. Investigation of wear patterns on the frictional surfaces of cutting tools using a Scanning Electron Microscope (SEM), combined with chip undersurface characterization and frictional (cutting) force analyses, confirms this hypothesis. The control of self-organized criticality is accomplished through application of a nano-multilayer TiAl60CrSiYN/TiAlCrN thin film Physical Vapor Deposition (PVD) coating with elevated aluminum content on a cemented carbide tool. The suggested coating enhances the formation of protective nano-scale tribo-films on the friction surface during operation. Moreover, machining process optimization contributed to further enhancement of this beneficial process, as evidenced by X-ray Photoelectron Spectroscopy (XPS) studies of the tribo-films. This resulted in a reduction of the scale of the build-ups, leading to overall wear performance improvement. A new thermodynamic analysis is proposed concerning entropy production during friction in machining with built-up edge formation.
This model is able to predict various phenomena and shows good agreement with experimental results. In the presented research, we demonstrated a novel experimental approach for controlling self-organized criticality, using the example of machining with built-up edge formation, which is similar to avalanches. This was done through enhanced adaptive performance of the surface-engineered tribo-system, with the aim of reducing the scale and frequency of the avalanches. Entropy 2016, 18(8), 290; ISSN 1099-4300; published 2016-08-09; doi: 10.3390/e18080290. Authors: German Fox-Rabinovich, Jose Paiva, Iosif Gershman, Maryam Aramesh, Danielle Cavelli, Kenji Yamamoto, Goulnara Dosbaeva, Stephen Veldhuis.
<![CDATA[Entropy, Vol. 18, Pages 289: Correction to Yao, H.; Qiao, J.-W.; Gao, M.C.; Hawk, J.A.; Ma, S.-G.; Zhou, H. MoNbTaV Medium-Entropy Alloy. Entropy 2016, 18, 189]]>
http://www.mdpi.com/1099-4300/18/8/289
The authors wish to make the following correction to their paper [1]. [...] Entropy 2016, 18(8), 289; ISSN 1099-4300; published 2016-08-09; doi: 10.3390/e18080289. Authors: Hongwei Yao, Jun-Wei Qiao, Michael Gao, Jeffrey Hawk, Sheng-Guo Ma, Hefeng Zhou.
<![CDATA[Entropy, Vol. 18, Pages 287: Hawking-Like Radiation from the Trapping Horizon of Both Homogeneous and Inhomogeneous Spherically Symmetric Spacetime Model of the Universe]]>
http://www.mdpi.com/1099-4300/18/8/287
The present work uses the semi-classical tunnelling approach and the Hamilton–Jacobi method to study Hawking radiation from the dynamical horizon of both the homogeneous Friedmann–Robertson–Walker (FRW) model and the inhomogeneous Lemaitre–Tolman–Bondi (LTB) model of the Universe. In the tunnelling prescription, radial null geodesics are used to visualize particles emerging from behind the trapping horizon, and the Hawking-like temperature has been calculated. In the Hamilton–Jacobi formulation, on the other hand, quantum corrections have been incorporated by solving the Klein–Gordon wave equation. The temperatures obtained by the two approaches agree at the semiclassical level. Entropy 2016, 18(8), 287; ISSN 1099-4300; published 2016-08-08; doi: 10.3390/e18080287. Authors: Subenoy Chakraborty, Subhajit Saha, Christian Corda.
<![CDATA[Entropy, Vol. 18, Pages 288: Microstructures of Al7.5Cr22.5Fe35Mn20Ni15 High-Entropy Alloy and Its Polarization Behaviors in Sulfuric Acid, Nitric Acid and Hydrochloric Acid Solutions]]>
http://www.mdpi.com/1099-4300/18/8/288
This paper investigates the microstructures and polarization behaviors of Al7.5Cr22.5Fe35Mn20Ni15 high-entropy alloy in 1 M (1 mol/L) deaerated sulfuric acid (H2SO4), nitric acid (HNO3), and hydrochloric acid (HCl) solutions at temperatures of 30–60 °C. The three phases of the Al7.5Cr22.5Fe35Mn20Ni15 high-entropy alloy are body-centered cubic (BCC) dendrites, face-centered cubic (FCC) interdendrites, and ordered BCC precipitates uniformly dispersed in the BCC dendrites; the different phases were corroded differently in the different acidic solutions. The passivation region of the Al7.5Cr22.5Fe35Mn20Ni15 alloy is divided into three sub-regions in H2SO4 solution and two sub-regions in HNO3 solution at 30–60 °C, and into two sub-regions in 1 M deaerated HCl solution at 30 °C. The Al7.5Cr22.5Fe35Mn20Ni15 alloy has almost equal corrosion resistance compared with 304 stainless steel (304SS) in both the 1 M H2SO4 and 1 M HCl solutions. The polarization behaviors indicate that the Al7.5Cr22.5Fe35Mn20Ni15 alloy possesses much better corrosion resistance than 304SS in 1 M HNO3 solution. However, in 1 M NaCl solution, the corrosion resistance of the Al7.5Cr22.5Fe35Mn20Ni15 alloy is lower than that of 304SS. Entropy 2016, 18(8), 288; ISSN 1099-4300; published 2016-08-08; doi: 10.3390/e18080288. Authors: Chun-Huei Tsau, Po-Yen Lee.
<![CDATA[Entropy, Vol. 18, Pages 284: A Five Species Cyclically Dominant Evolutionary Game with Fixed Direction: A New Way to Produce Self-Organized Spatial Patterns]]>
http://www.mdpi.com/1099-4300/18/8/284
Cyclically dominant systems are an active research topic, and they play an important role in explaining biodiversity in nature. In this paper, we construct a five-strategy cyclically dominant system in which each individual changes its strategy along a fixed direction: the dominant strategy can promote a change in the dominated strategy, and the dominated strategy can block a change in the dominant strategy. We use mean-field theory and cellular automaton simulation to discuss the evolving characteristics of the system. In the cellular automaton simulation, we find the emergence of spiral waves on spatial patterns without a migration rate, which suggests a new way to produce self-organized spatial patterns. Entropy 2016, 18(8), 284; ISSN 1099-4300; published 2016-08-08; doi: 10.3390/e18080284. Authors: Yibin Kang, Qiuhui Pan, Xueting Wang, Mingfeng He.
<![CDATA[Entropy, Vol. 18, Pages 286: Parametric Analysis of the Exergoeconomic Operation Costs, Environmental and Human Toxicity Indexes of the MF501F3 Gas Turbine]]>
http://www.mdpi.com/1099-4300/18/8/286
This work presents an energetic, exergoeconomic, environmental, and toxicity analysis of the simple M501F3 gas turbine, based on a parametric study of energetic indexes (thermal efficiency, fuel and air flow rates, and specific work output), exergoeconomic indexes (exergetic efficiency and exergoeconomic operation costs), environmental indexes (global warming, smog formation, and acid rain), and the human toxicity index, taking the compressor pressure ratio and the turbine inlet temperature as the operating parameters. The aim of this paper is to provide an integral, systematic, and powerful diagnostic tool for establishing possible operation and maintenance actions to improve the gas turbine’s exergoeconomic, environmental, and human toxicity indexes. Despite the continuous changes in the price of natural gas, the compressor, combustion chamber, and turbine always contribute 18.96%, 53.02%, and 28%, respectively, to the gas turbine’s exergoeconomic operation costs. The methodology can be extended to other simple gas turbines, using the pressure drops and isentropic efficiencies, among others, as degradation parameters, as well as to other energetic systems, without loss of generality. Entropy 2016, 18(8), 286; ISSN 1099-4300; published 2016-08-06; doi: 10.3390/e18080286. Authors: Edgar Torres-González, Raul Lugo-Leyte, Helen Lugo-Méndez, Martin Salazar-Pereyra, Alejandro Torres-Aldaco.
<![CDATA[Entropy, Vol. 18, Pages 283: A Critical Reassessment of the Hess–Murray Law]]>
http://www.mdpi.com/1099-4300/18/8/283
The Hess–Murray law is a correlation between the radii of successive branchings in bi/trifurcated vessels in biological tissues. First proposed by the Swiss physiologist and Nobel laureate Walter Rudolf Hess in his 1914 doctoral thesis and published in 1917, the law was “rediscovered” by the American physiologist Cecil Dunmore Murray in 1926. The law is based on the assumption that blood or lymph circulation in living organisms is governed by a “work minimization” principle that, under a certain set of specified conditions, leads to an “optimal branching ratio” of r_{i+1}/r_i = 1/2^{1/3} ≈ 0.7937. This “cubic root of 2” correlation underwent extensive theoretical and experimental reassessment in the second half of the 20th century, and the results indicate that, under a well-defined series of conditions, the law is sufficiently accurate for the smallest vessels (r of the order of fractions of a millimeter) but fails for the larger ones; moreover, it cannot be successfully extended to turbulent flows. Recent comparisons with numerical investigations of branched flows led to similar conclusions. More recently, the Hess–Murray law came back into the limelight when it was taken as a founding paradigm of the Constructal Law, a theory that employs physical intuition and mathematical reasoning to derive “optimal paths” for the transport of matter and energy between a source and a sink, regardless of the mode of transportation (continuous, as in convection and conduction, or discrete, as in the transportation of goods and people). This paper examines the foundation of the law and argues that, both for natural flows and for engineering designs, a minimization of the irreversibility under physically sound boundary conditions leads to somewhat different results.
It is also shown that, in the light of an exergy-based resource analysis, an amended version of the Hess–Murray law may still hold an important position in engineering and biological sciences. Entropy 2016, 18(8), 283; ISSN 1099-4300; published 2016-08-05; doi: 10.3390/e18080283. Author: Enrico Sciubba.
<![CDATA[Entropy, Vol. 18, Pages 285: ECG Classification Using Wavelet Packet Entropy and Random Forests]]>
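The “cubic root of 2” ratio quoted in the Hess–Murray abstract above (Entropy 18, 283) follows from Murray's cube-law conservation, r_parent³ = Σ r_child³, applied to a symmetric n-way branching. A minimal numeric check (the helper name is ours):

```python
def child_radius(r_parent, n_children=2):
    """Symmetric branching under Murray's cube law: r_parent**3 = n * r_child**3,
    so each child radius is r_parent * n**(-1/3)."""
    return r_parent * n_children ** (-1.0 / 3.0)

ratio = child_radius(1.0)  # the Hess–Murray optimal ratio r_{i+1}/r_i
```

For a bifurcation this gives r_{i+1}/r_i = 2^{-1/3} ≈ 0.7937, the value in the abstract; the same law gives 3^{-1/3} ≈ 0.6934 for a trifurcation.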
http://www.mdpi.com/1099-4300/18/8/285
The electrocardiogram (ECG) is one of the most important techniques for heart disease diagnosis. Many traditional methodologies of feature extraction and classification have been widely applied to ECG analysis. However, the effectiveness and efficiency of such methodologies remain to be improved, and much existing research did not consider separating training and testing samples from the same set of patients (the so-called inter-patient scheme). To cope with these issues, in this paper we propose a method to classify ECG signals using wavelet packet entropy (WPE) and random forests (RF), following the Association for the Advancement of Medical Instrumentation (AAMI) recommendations and the inter-patient scheme. Specifically, we first decompose the ECG signals by wavelet packet decomposition (WPD), then calculate entropy from the decomposed coefficients as representative features, and finally use RF to build an ECG classification model. To the best of our knowledge, this is the first time that WPE and RF have been used to classify ECG following the AAMI recommendations and the inter-patient scheme. Extensive experiments are conducted on the publicly available MIT–BIH Arrhythmia database, and the influence on performance of the mother wavelet and level of decomposition for WPD, the type of entropy, and the number of base learners in RF is also discussed. The experimental results are superior to those of several state-of-the-art competing methods, showing that WPE and RF are promising for ECG classification. Entropy 2016, 18(8), 285; ISSN 1099-4300; published 2016-08-05; doi: 10.3390/e18080285. Authors: Taiyong Li, Min Zhou.
<![CDATA[Entropy, Vol. 18, Pages 269: Traceability Analyses between Features and Assets in Software Product Lines]]>
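The wavelet packet entropy feature described in the ECG abstract above (Entropy 18, 285) can be illustrated with a toy Haar-based decomposition: split the signal into 2^level packet nodes, then take the Shannon entropy of the normalized node energies. The paper explores several mother wavelets and entropy types; the Haar filters and function names here are a simplified stand-in, not the authors' implementation.

```python
from math import log2

def haar_split(x):
    """One Haar analysis step: (averages, differences) of adjacent sample pairs."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return a, d

def wavelet_packet_nodes(x, level):
    """Full wavelet packet decomposition: split every node at every level."""
    nodes = [list(x)]
    for _ in range(level):
        nodes = [part for node in nodes for part in haar_split(node)]
    return nodes

def wavelet_packet_entropy(x, level):
    """Shannon entropy (bits) of the energy distribution over packet nodes."""
    energies = [sum(c * c for c in node) for node in wavelet_packet_nodes(x, level)]
    total = sum(energies)
    return -sum(e / total * log2(e / total) for e in energies if e > 0)
```

A constant beat concentrates all energy in one node (entropy 0), whereas a morphologically complex beat spreads energy across nodes; that spread is the scalar feature fed to the random forest.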
http://www.mdpi.com/1099-4300/18/8/269
In a Software Product Line (SPL), the central notion of implementability provides the requisite connection between specifications and their implementations, leading to the definition of products. While it appears to be a simple extension of the traceability relation between components and features, it involves several subtle issues that were overlooked in the existing literature. In this paper, we introduce a precise and formal definition of implementability over a fairly expressive traceability relation. The consequent definition of products in the given SPL naturally entails a set of useful analysis problems that are either refinements of known problems or are completely novel. We also propose a new approach to solving these analysis problems by encoding them as Quantified Boolean Formulae (QBF) and solving them through Quantified Satisfiability (QSAT) solvers. QBF can represent more complex analysis operations that cannot be represented using propositional formulae. The methodology scales much better than the SAT-based solutions hinted at in the literature and was demonstrated through a tool called SPLAnE (SPL Analysis Engine) on a large set of SPL models. Entropy 2016, 18(8), 269; ISSN 1099-4300; published 2016-08-03; doi: 10.3390/e18080269. Authors: Ganesh Narwane, José Galindo, Shankara Krishna, David Benavides, Jean-Vivien Millo, S. Ramesh.
<![CDATA[Entropy, Vol. 18, Pages 276: A Novel Image Encryption Scheme Using the Composite Discrete Chaotic System]]>
http://www.mdpi.com/1099-4300/18/8/276
The composite discrete chaotic system (CDCS) is a complex chaotic system that combines two or more discrete chaotic systems. It retains the chaotic characteristics of the different component systems in a random way and exhibits more complex chaotic behaviors. In this paper, we aim to provide a novel image encryption algorithm based on a new two-dimensional (2D) CDCS. The proposed scheme consists of two parts: first, we propose a new 2D CDCS and analyze its chaotic behaviors; then, we introduce a bit-level permutation and pixel-level diffusion encryption architecture with the new CDCS to form the full proposed algorithm. Random values and the total information of the plain image are added into the diffusion procedure to enhance the security of the proposed algorithm. Both theoretical analysis and simulations confirm the security of the proposed algorithm. Entropy 2016, 18(8), 276; ISSN 1099-4300; published 2016-08-01; doi: 10.3390/e18080276. Authors: Hegui Zhu, Xiangde Zhang, Hai Yu, Cheng Zhao, Zhiliang Zhu.
<![CDATA[Entropy, Vol. 18, Pages 282: How Is a Data-Driven Approach Better than Random Choice in Label Space Division for Multi-Label Classification?]]>
http://www.mdpi.com/1099-4300/18/8/282
We propose using five data-driven community detection approaches from social networks to partition the label space in the task of multi-label classification, as an alternative to random partitioning into equal subsets as performed by RAkELd. We evaluate modularity maximization using fast greedy and leading eigenvector approximations, infomap, walktrap, and label propagation algorithms. For this purpose, we propose to construct a label co-occurrence graph (both weighted and unweighted versions) based on training data and perform community detection to partition the label set. Each partition then constitutes a label space for a separate multi-label classification sub-problem. As a result, we obtain an ensemble of multi-label classifiers that jointly covers the whole label space. Based on the binary relevance and label powerset classification methods, we compare community detection methods for label space division against random baselines on 12 benchmark datasets over five evaluation measures. We discover that data-driven approaches are more efficient and more likely to outperform RAkELd than binary relevance or label powerset is, in every evaluated measure. For all measures apart from Hamming loss, data-driven approaches are significantly better than RAkELd (α = 0.05), and at least one data-driven approach is more likely to outperform RAkELd than a priori methods in the case of RAkELd’s best performance. This is the largest RAkELd evaluation published to date, with 250 samplings per value for 10 values of the RAkELd parameter k on 12 datasets. Entropy 2016, 18(8), 282; ISSN 1099-4300; published 2016-07-30; doi: 10.3390/e18080282. Authors: Piotr Szymański, Tomasz Kajdanowicz, Kristian Kersting.
<![CDATA[Entropy, Vol. 18, Pages 281: Acoustic Detection of Coronary Occlusions before and after Stent Placement Using an Electronic Stethoscope]]>
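The label co-occurrence graph described in the multi-label abstract above (Entropy 18, 282) can be built directly: every pair of labels appearing together in a training example contributes an edge, with the count as the weight in the weighted variant. Community detection (fast greedy, infomap, walktrap, etc.) would then run on this graph; the helper name below is ours, and this is a sketch of the construction only.

```python
from collections import Counter
from itertools import combinations

def label_cooccurrence_graph(label_sets):
    """Weighted edge list: (label_a, label_b) -> number of training examples
    in which both labels occur. The unweighted variant keeps only the keys."""
    edges = Counter()
    for labels in label_sets:
        for a, b in combinations(sorted(labels), 2):
            edges[(a, b)] += 1
    return edges
```

Each detected community of labels then defines one multi-label sub-problem, and the sub-problem classifiers together form the ensemble covering the full label space.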
http://www.mdpi.com/1099-4300/18/8/281
More than 370,000 Americans die every year from coronary artery disease (CAD). Early detection and treatment are crucial to reducing this number. Current diagnostic and disease-monitoring methods are invasive, costly, and time-consuming. Using an electronic stethoscope together with spectral and nonlinear dynamics analysis of the recorded heart sound, we investigated the acoustic signature of CAD in subjects with only a single coronary occlusion, before and after stent placement, as well as in subjects with clinically normal coronary arteries. The CAD signature was evaluated by estimating power ratios of the total power above 150 Hz over the total power below 150 Hz in the FFT of the acoustic signal. Additionally, approximate entropy values were estimated to assess the differences induced by the stent placement procedure in the time-domain acoustic signature of the signals. The groups were identified with 82% sensitivity and 64% specificity (using the power ratio method) and 82% sensitivity and 55% specificity (using approximate entropy). Power ratios and approximate entropy values after stent placement are not statistically different from those estimated from subjects with no coronary occlusions. Our approach demonstrates that the effect of stent placement on coronary occlusions can be monitored using an electronic stethoscope. Entropy 2016, 18(8), 281; ISSN 1099-4300; published 2016-07-29; doi: 10.3390/e18080281. Authors: Andrei Dragomir, Allison Post, Yasemin Akay, Hani Jneid, David Paniagua, Ali Denktas, Biykem Bozkurt, Metin Akay.
<![CDATA[Entropy, Vol. 18, Pages 280: Acoustic Entropy of the Materials in the Course of Degradation]]>
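The spectral feature in the stethoscope abstract above (Entropy 18, 281) is a ratio of band powers split at 150 Hz. A minimal sketch using a naive DFT (adequate for short excerpts; a real implementation would use an FFT, and the function name and cutoff handling are our simplification):

```python
import cmath
from math import pi, sin

def band_power_ratio(signal, fs, cutoff=150.0):
    """Power above `cutoff` (Hz) divided by power below it, via a naive DFT."""
    n = len(signal)
    low = high = 0.0
    for k in range(1, n // 2):           # skip DC, stop before Nyquist
        bin_hz = k * fs / n
        X = sum(signal[t] * cmath.exp(-2j * pi * k * t / n) for t in range(n))
        if bin_hz < cutoff:
            low += abs(X) ** 2
        else:
            high += abs(X) ** 2
    return high / low if low > 0 else float("inf")

fs = 1000.0
low_tone = [sin(2 * pi * 50 * t / fs) for t in range(100)]    # 50 Hz: below cutoff
high_tone = [sin(2 * pi * 300 * t / fs) for t in range(100)]  # 300 Hz: above cutoff
```

A murmur-like high-frequency component drives the ratio up, which is the direction the paper associates with occluded arteries.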
http://www.mdpi.com/1099-4300/18/8/280
We report experimental observations on the evolution of acoustic entropy during cyclic loading as degradation occurs due to fatigue. The measured entropy results from the microstructural changes in the materials that occur as they degrade under cyclic mechanical loading. The experimental results demonstrate that the maximum acoustic entropy emanating from the materials over the course of degradation remains similar. Experiments are shown for two different types of materials: Aluminum 6061 (a metallic alloy) and glass/epoxy (a composite laminate). The evolution of the acoustic entropy exhibits a persistent trend over the course of degradation. Entropy 2016, 18(8), 280; ISSN 1099-4300; published 2016-07-28; doi: 10.3390/e18080280. Authors: Ali Kahirdeh, M. Khonsari.
<![CDATA[Entropy, Vol. 18, Pages 279: Entropy Generation through Non-Equilibrium Ordered Structures in Corner Flows with Sidewall Mass Injection]]>
http://www.mdpi.com/1099-4300/18/8/279
Additional entropy generation rates through non-equilibrium ordered structures are predicted for corner flows with sidewall mass injection. Well-defined non-equilibrium ordered structures are predicted at a normalized vertical station of approximately eighteen percent of the boundary-layer thickness. These structures are in addition to the ordered structures previously reported at approximately thirty-eight percent of the boundary-layer thickness. The computational procedure is used to determine the entropy generation rate for each spectral velocity component at each of several streamwise stations and for each of several injection velocity values. Application of the procedure to possible thermal system processes is discussed. These results indicate that cooling sidewall mass injection into a horizontal laminar boundary layer may actually increase the heat transfer to the horizontal surface. Entropy 2016, 18(8), 279; ISSN 1099-4300; published 2016-07-28; doi: 10.3390/e18080279. Author: LaVar Isaacson.
<![CDATA[Entropy, Vol. 18, Pages 278: Expected Logarithm of Central Quadratic Form and Its Use in KL-Divergence of Some Distributions]]>
http://www.mdpi.com/1099-4300/18/8/278
In this paper, we develop three different methods for computing the expected logarithm of central quadratic forms: a series method, an integral method, and a fast (but inexact) set of methods. The approach used for deriving the integral method is novel and can be used for computing the expected logarithm of other random variables. Furthermore, we derive expressions for the Kullback–Leibler (KL) divergence of elliptical gamma distributions and angular central Gaussian distributions, which turn out to be functions of the expected logarithm of a central quadratic form. Through several experimental studies, we compare the performance of these methods. Entropy 2016, 18(8), 278; ISSN 1099-4300; published 2016-07-28; doi: 10.3390/e18080278. Authors: Pourya Habib Zadeh, Reshad Hosseini.
<![CDATA[Entropy, Vol. 18, Pages 277: A Proximal Point Algorithm for Minimum Divergence Estimators with Application to Mixture Models]]>
http://www.mdpi.com/1099-4300/18/8/277
Estimators derived from a divergence criterion, such as φ-divergences, are generally more robust than maximum likelihood estimators. We are interested in particular in the so-called minimum dual φ-divergence estimator (MDφDE), an estimator built using a dual representation of φ-divergences. We present in this paper an iterative proximal point algorithm that permits the calculation of such an estimator. The algorithm contains by construction the well-known Expectation Maximization (EM) algorithm. Our work is based on Tseng’s paper on the likelihood function. We provide some convergence properties by adapting Tseng’s ideas, and we improve his results by relaxing the identifiability condition on the proximal term, a condition which is not satisfied for most mixture models and is hard to verify for “non-mixture” ones. Convergence of the EM algorithm for a two-component Gaussian mixture is discussed in the spirit of our approach. Several experimental results on mixture models are provided to confirm the validity of the approach. Entropy 2016, 18(8), 277; ISSN 1099-4300; published 2016-07-27; doi: 10.3390/e18080277. Authors: Diaa Al Mohamad, Michel Broniatowski.
<![CDATA[Entropy, Vol. 18, Pages 275: Symmetric Fractional Diffusion and Entropy Production]]>
http://www.mdpi.com/1099-4300/18/7/275
The discovery of the entropy production paradox (Hoffmann et al., 1998) raised basic questions about the nature of irreversibility in the regime between diffusion and waves. First studied in the form of spatial movements of moments of H functions, pseudo propagation is the pre-limit, propagation-like movement of skewed probability density functions (PDFs) in the domain between the wave and diffusion equations that goes over to classical partial differential equation propagation of characteristics in the wave limit. Many of the strange properties that occur in this extraordinary regime were thought to be connected in some manner to this form of proto-movement. This paper eliminates pseudo propagation by employing a similar evolution equation that imposes spatial unimodal symmetry on evolving PDFs. Contrary to initial expectations, familiar peculiarities emerge despite the imposed symmetry, but they have a distinct character. Entropy 2016, 18(7), 275; ISSN 1099-4300; published 2016-07-23; doi: 10.3390/e18070275. Authors: Janett Prehl, Frank Boldt, Karl Hoffmann, Christopher Essex.
<![CDATA[Entropy, Vol. 18, Pages 273: Efficiency Bound of Local Z-Estimators on Discrete Sample Spaces]]>
http://www.mdpi.com/1099-4300/18/7/273
Many statistical models over a discrete sample space face the computational difficulty of evaluating the normalization constant, which makes the maximum likelihood estimator impractical. In order to circumvent this difficulty, alternative estimators such as pseudo-likelihood and composite likelihood, which require only a local computation over the sample space, have been proposed. In this paper, we present a theoretical analysis of such localized estimators. The asymptotic variance of localized estimators depends on the neighborhood system on the sample space. We investigate the relation between the neighborhood system and the estimation accuracy of localized estimators, and we derive the efficiency bound. The theoretical results are applied to investigate the statistical properties of existing estimators and some extended ones. Entropy 2016, 18(7), 273; ISSN 1099-4300; published 2016-07-23; doi: 10.3390/e18070273. Author: Takafumi Kanamori.
<![CDATA[Entropy, Vol. 18, Pages 271: Thermal Characteristic Analysis and Experimental Study of a Spindle-Bearing System]]>
http://www.mdpi.com/1099-4300/18/7/271
In this paper, a thermo-mechanical coupling analysis model of the spindle-bearing system is developed based on Hertz’s contact theory and a point-contact non-Newtonian thermal elastohydrodynamic lubrication (EHL) theory. The model considers the effects of preload, centrifugal force, the gyroscopic moment, and the lubrication state of the spindle-bearing system. Based on heat transfer theory, a mathematical model for the temperature field of the spindle system is developed, and the effect of the spindle cooling system on the spindle temperature distribution is analyzed. The theoretical simulations and the experimental results indicate that the bearing preload has a great effect on frictional heat generation and that the cooling fluid has a great effect on the heat balance of the spindle system. If a steady-state balance between frictional heat generation and the cooling system cannot be reached, the thermally-induced preload will lead to a further increase of frictional heat generation and then cause thermal failure of the spindle. Entropy 2016, 18(7), 271; ISSN 1099-4300; published 2016-07-22; doi: 10.3390/e18070271. Authors: Li Wu, Qingchang Tan.
<![CDATA[Entropy, Vol. 18, Pages 274: An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments]]>
http://www.mdpi.com/1099-4300/18/7/274
With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via large numbers of sensor nodes to meet the requirements of cloud services. Generally, those sensor nodes collect data and send it to a sink node, where end-users can query all the information and realize cloud applications. Currently, one of the main disadvantages of sensor nodes is their limited physical resources, namely little storage memory and a limited power supply. Therefore, to work around this limitation, it is necessary to develop an efficient data prediction method for WSNs. To serve this purpose, by reducing the redundant data transmission between sensor nodes and the sink node while keeping errors within an acceptable bound, this article proposes an entropy-based learning scheme for data prediction based on the kernel least mean square (KLMS) algorithm. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronized on both sides. Specifically, the kernel-based method adjusts the coefficients adaptively in accordance with every input, which achieves better performance with smaller prediction errors, while information entropy is employed to remove those data which may cause relatively large errors. E-KLMS can effectively solve the tradeoff between prediction accuracy and computational effort while greatly simplifying the training structure compared with some other data prediction approaches. Moreover, the kernel-based method and the entropy technique together ensure the prediction quality by both improving the accuracy and reducing errors.
Experiments with some real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the experimental results show the advantages of our method in prediction accuracy and computational time.
Entropy 2016, 18(7), 274 (Article); doi: 10.3390/e18070274. Published 2016-07-22. Authors: Xiong Luo, Ji Liu, Dandan Zhang, Weiping Wang, Yueqin Zhu.
<![CDATA[Entropy, Vol. 18, Pages 270: Toward Improved Understanding of the Physical Meaning of Entropy in Classical Thermodynamics]]>
http://www.mdpi.com/1099-4300/18/7/270
The year 2015 marked the 150th anniversary of “entropy” as a concept in classical thermodynamics. Despite its central role in the mathematical formulation of the Second Law and most of classical thermodynamics, its physical meaning continues to be elusive and confusing. This is especially true when we seek a reconstruction of the classical thermodynamics of a system from the statistical behavior of its constituent microscopic particles, or vice versa. This paper sketches the classical definition by Clausius and offers a modified mathematical definition that is intended to improve its conceptual meaning. In the modified version, the differential of specific entropy appears as a non-dimensional energy term that captures the invigoration or reduction of microscopic motion upon addition or withdrawal of heat from the system. It is also argued that heat transfer is a better model process to illustrate entropy; the canonical heat engines and refrigerators often used to illustrate this concept are not very relevant to new areas of thermodynamics (e.g., the thermodynamics of biological systems). It is emphasized that entropy changes, as invoked in the Second Law, are necessarily related to the non-equilibrium interactions of two or more systems that might initially have been in internal thermal equilibrium but at different temperatures. The overall direction of entropy increase indicates the direction of naturally occurring heat transfer processes in an isolated system that consists of internally interacting (non-isolated) subsystems. We discuss the implications of the proposed modification for statements of the Second Law, the interpretation of entropy in statistical thermodynamics, and the Third Law.
Entropy 2016, 18(7), 270 (Article); doi: 10.3390/e18070270. Published 2016-07-22. Author: Ben Akih-Kumgeh.
<![CDATA[Entropy, Vol. 18, Pages 268: Mechanothermodynamic Entropy and Analysis of Damage State of Complex Systems]]>
http://www.mdpi.com/1099-4300/18/7/268
Mechanics and thermodynamics each consider, from their own side, the evolution of complex systems, including the Universe. The classical thermodynamic theory of evolution has one important drawback: it predicts an inevitable heat death of the Universe, which is unlikely to take place according to modern perceptions. Attempts to create a generalized theory of evolution within mechanics were unsuccessful, since mechanical equations do not discriminate between future and past. It is natural that the union of mechanics and thermodynamics has been difficult to realize, since they are based on different methodologies. We make an attempt to propose a generalized theory of evolution based on the concept of tribo-fatigue entropy. The essence of the proposed approach is that tribo-fatigue entropy is determined by the processes of damageability conditioned by thermodynamic and mechanical effects that change the states of any system. The law of entropy increase is formulated analytically in general form. A mechanothermodynamical function is constructed for the specific case of fatigue damage of materials under temperatures varying from 3 K to 0.8 of the melting temperature, based on the analysis of 136 experimental results.
Entropy 2016, 18(7), 268 (Article); doi: 10.3390/e18070268. Published 2016-07-20. Authors: Leonid Sosnovskiy, Sergei Sherbakov.
<![CDATA[Entropy, Vol. 18, Pages 266: Complex Dynamics of a Continuous Bertrand Duopoly Game Model with Two-Stage Delay]]>
http://www.mdpi.com/1099-4300/18/7/266
This paper studies a continuous Bertrand duopoly game model with two-stage delay. Our aim is to investigate the influence of delay and weight on the complex dynamic characteristics of the system. We calculate the bifurcation point of the system with respect to the delay parameter. In addition, the dynamic properties of the system are simulated by the power spectrum, attractor, bifurcation diagram, the largest Lyapunov exponent, 3D surface chart, 4D cubic chart, 2D parameter bifurcation diagram, and 3D parameter bifurcation diagram. The results show that the stability of the system depends on the delay and weight; in order to maintain price stability and ensure the firms’ profits, the firms must keep the parameters within a reasonable region. Otherwise, the system will lose stability and even fall into chaos, causing fluctuations in prices that leave the firms unprofitable. Finally, chaos control of the system is carried out by a control strategy of state-variable feedback and parameter variation, which effectively avoids the damage of chaos to the economic system. Therefore, the results of this study have important practical significance for decision-making with multi-stage delay in oligopoly firms.
Entropy 2016, 18(7), 266 (Article); doi: 10.3390/e18070266. Published 2016-07-20. Authors: Junhai Ma, Fengshan Si.
<![CDATA[Entropy, Vol. 18, Pages 267: Novel Criteria for Deterministic Remote State Preparation via the Entangled Six-Qubit State]]>
http://www.mdpi.com/1099-4300/18/7/267
In this paper, our concern is to design criteria for deterministic remote state preparation of an arbitrary three-particle state via a genuinely entangled six-qubit state. First, we put forward two schemes, in real and complex Hilbert space, respectively. Using an appropriate set of eight-qubit measurement bases, the remote three-qubit preparation is completed with unit success probability. Departing from previous research, our protocol has the salient feature that the serviceable measurement bases contain only the initial coefficients and their conjugate values. By utilizing the permutation group, it is convenient to state the permutation relationship between coefficients. Second, our ideas and methods can also be generalized to preparing an arbitrary N-particle state in the complex case by taking advantage of Bell states as quantum resources. More importantly, the criteria to be satisfied for preparation with 100% success probability in complex Hilbert space are summarized. Third, the classical communication costs of our scheme are calculated to determine the classical resources required. It is also worth mentioning that our protocol has higher efficiency and lower resource costs compared with other protocols.
Entropy 2016, 18(7), 267 (Article); doi: 10.3390/e18070267. Published 2016-07-20. Authors: Gang Xu, Xiu-Bo Chen, Zhao Dou, Jing Li, Xin Liu, Zongpeng Li.
<![CDATA[Entropy, Vol. 18, Pages 264: The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas]]>
http://www.mdpi.com/1099-4300/18/7/264
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula.
Entropy 2016, 18(7), 264 (Article); doi: 10.3390/e18070264. Published 2016-07-19. Authors: Jesús García, Verónica González-López, Roger Nelsen.
<![CDATA[Entropy, Vol. 18, Pages 265: Noise Suppression in 94 GHz Radar-Detected Speech Based on Perceptual Wavelet Packet]]>
http://www.mdpi.com/1099-4300/18/7/265
A millimeter wave (MMW) radar sensor is employed in our laboratory to detect human speech because it provides a new non-contact speech acquisition method that is suitable for various applications. However, the speech detected by the radar sensor is often degraded by combined noise. This paper proposes a new perceptual wavelet packet method that is able to enhance the speech acquired using a 94 GHz MMW radar system by suppressing the noise. The process is as follows. First, the radar speech signal is decomposed using a perceptual wavelet packet. Then, an adaptive wavelet threshold and a new modified thresholding function are employed to remove the noise from the detected speech. The results obtained from the speech spectrograms, listening tests and objective evaluation show that the new method significantly improves the performance of the detected speech.
Entropy 2016, 18(7), 265 (Article); doi: 10.3390/e18070265. Published 2016-07-19. Authors: Fuming Chen, Chuantao Li, Qiang An, Fulai Liang, Fugui Qi, Sheng Li, Jianqi Wang.
<![CDATA[Entropy, Vol. 18, Pages 263: Positive Sofic Entropy Implies Finite Stabilizer]]>
http://www.mdpi.com/1099-4300/18/7/263
We prove that, for a measure-preserving action of a sofic group with positive sofic entropy, the stabilizer is finite on a set of positive measure. This extends the results of Weiss and Seward for amenable groups and free groups, respectively. It follows that the action of a sofic group on its subgroups by inner automorphisms has zero topological sofic entropy, and that a faithful action that has completely positive sofic entropy must be free.
Entropy 2016, 18(7), 263 (Article); doi: 10.3390/e18070263. Published 2016-07-18. Author: Tom Meyerovitch.
<![CDATA[Entropy, Vol. 18, Pages 262: Greedy Algorithms for Optimal Distribution Approximation]]>
http://www.mdpi.com/1099-4300/18/7/262
The approximation of a discrete probability distribution t by an M-type distribution p is considered. The approximation error is measured by the informational divergence D ( t ∥ p ) , which is an appropriate measure, e.g., in the context of data compression. Properties of the optimal approximation are derived and bounds on the approximation error are presented, which are asymptotically tight. A greedy algorithm is proposed that solves this M-type approximation problem optimally. Finally, it is shown that different instantiations of this algorithm minimize the informational divergence D ( p ∥ t ) or the variational distance ∥ p − t ∥ 1 .
Entropy 2016, 18(7), 262 (Article); doi: 10.3390/e18070262. Published 2016-07-18. Authors: Bernhard Geiger, Georg Böcherer.
<![CDATA[Entropy, Vol. 18, Pages 252: Three Strategies for the Design of Advanced High-Entropy Alloys]]>
http://www.mdpi.com/1099-4300/18/7/252
High-entropy alloys (HEAs) have recently become a vibrant field of study in the metallic materials area. In the early years, the design of HEAs was more of an exploratory nature. The selection of compositions was somewhat arbitrary, and there was typically no specific goal to be achieved in the design. Very recently, however, the development of HEAs has gradually entered a different stage. Unlike the early alloys, HEAs developed nowadays are usually designed to meet clear goals, and have carefully chosen components, deliberately introduced multiple phases, and tailored microstructures. These alloys are referred to as advanced HEAs. In this paper, the progress in advanced HEAs is briefly reviewed. The design strategies for these materials are examined and are classified into three categories. Representative works in each category are presented. Finally, important issues and future directions in the development of advanced HEAs are pointed out and discussed.
Entropy 2016, 18(7), 252 (Review); doi: 10.3390/e18070252. Published 2016-07-15. Author: Ming-Hung Tsai.
<![CDATA[Entropy, Vol. 18, Pages 255: Coupled Thermoelectric Devices: Theory and Experiment]]>
http://www.mdpi.com/1099-4300/18/7/255
In this paper, we address theoretically and experimentally the optimization problem of the heat transfer occurring in two coupled thermoelectric devices. A simple experimental set-up is used. The optimization parameters are the applied electric currents. When one thermoelectric device is analysed, the temperature difference Δ T between the thermoelectric boundaries shows a parabolic profile with respect to the applied electric current. This behaviour agrees qualitatively with the corresponding experimental measurement. The global entropy generation shows a monotonous increase with the electric current. In the case of two coupled thermoelectric devices, elliptic isocontours for Δ T are obtained when an electric current is applied through each of the thermoelectrics. The isocontours also fit well with measurements. An optimal figure of merit is found for a specific set of values of the applied electric currents. The relationship between entropy generation and the thermal figure of merit is studied. It is shown that, given a value of the thermal figure of merit, the device can be operated in a state of minimum entropy production.
Entropy 2016, 18(7), 255 (Article); doi: 10.3390/e18070255. Published 2016-07-14. Authors: Jaziel Rojas, Iván Rivera, Aldo Figueroa, Federico Vázquez.
<![CDATA[Entropy, Vol. 18, Pages 260: Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices]]>
http://www.mdpi.com/1099-4300/18/7/260
Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to the recycling and motion of electric field pulses, the differences between the numerical solutions produced by the two types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research avenues are discussed.
Entropy 2016, 18(7), 260 (Article); doi: 10.3390/e18070260. Published 2016-07-14. Authors: Luis Bonilla, Manuel Carretero.
<![CDATA[Entropy, Vol. 18, Pages 257: Using Wearable Accelerometers in a Community Service Context to Categorize Falling Behavior]]>
http://www.mdpi.com/1099-4300/18/7/257
In this paper, the multiscale entropy (MSE) analysis of acceleration data collected from a wearable inertial sensor was compared with other features reported in the literature for observing falling behavior from acceleration data, and with traditional clinical scales for evaluating falling behavior. We use a fall risk assessment over a four-month period to examine >65-year-old participants in a community service context using simple clinical tests, including the Short Form Berg Balance Scale (SFBBS), the Timed Up and Go test (TUG), and the Short Portable Mental Status Questionnaire (SPMSQ), with wearable accelerometers for the TUG test. We classified participants into fallers and non-fallers to (1) compare the features extracted from the accelerometers and (2) categorize fall risk using statistics from the TUG test results. Combined TUG and SFBBS results revealed that the defining features were test time, slope(A) and slope(B) in sit(A)-to-stand(B), and range(A) and slope(B) in stand(B)-to-sit(A). For (1) SPMSQ; (2) TUG and SPMSQ; and (3) BBS and SPMSQ results, only range(A) in stand(B)-to-sit(A) was a defining feature. From the MSE indicators, we found that, whether in the X, Y or Z direction, TUG, BBS, and the combined TUG and SFBBS are all distinguishable, showing that MSE can effectively classify participants in these clinical tests using behavioral actions. This study highlights the advantages of body-worn sensors as ordinary and low-cost tools available outside the laboratory. The results indicate that MSE analysis of acceleration data can be used as an effective metric to categorize the falling behavior of community-dwelling elderly.
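For readers unfamiliar with the metric, multiscale entropy is sample entropy computed on successively coarse-grained copies of a signal. The following is a minimal sketch under illustrative parameter choices (m = 2, r = 0.15 × SD), not the configuration used in the paper:

```python
import math
import random

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1 points."""
    n = len(x)
    def matches(length):
        count = 0
        for i in range(n - m):               # same template count for m and m+1
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# White noise loses entropy under coarse-graining, the classic MSE signature.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(1000)]
r = 0.15 * (sum(v * v for v in noise) / len(noise)) ** 0.5
mse = [sample_entropy(coarse_grain(noise, s), m=2, r=r) for s in (1, 2, 5)]
print(mse)
```

Plotting such a curve over many scales is what lets MSE separate signal classes that single-scale entropy cannot.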
In addition to its clinical applications, (1) our approach requires no expert physical therapist, nurse, or doctor for evaluations and (2) fallers can be categorized irrespective of the critical value from clinical tests.
Entropy 2016, 18(7), 257 (Article); doi: 10.3390/e18070257. Published 2016-07-13. Authors: Chia-Hsuan Lee, Tien-Lung Sun, Bernard Jiang, Victor Choi.
<![CDATA[Entropy, Vol. 18, Pages 258: Structures in Sound: Analysis of Classical Music Using the Information Length]]>
http://www.mdpi.com/1099-4300/18/7/258
We show that music is represented by fluctuations away from the minimum path through statistical space. Our key idea is to envision music as the evolution of a non-equilibrium system and to construct probability distribution functions (PDFs) from musical instrument digital interface (MIDI) files of classical compositions. Classical music is then viewed through the lens of generalized position and velocity, based on the Fisher metric. Through these statistical tools we discuss a way to quantitatively discriminate between music and noise.
Entropy 2016, 18(7), 258 (Article); doi: 10.3390/e18070258. Published 2016-07-13. Authors: Schuyler Nicholson, Eun-jin Kim.
<![CDATA[Entropy, Vol. 18, Pages 256: The Logical Consistency of Simultaneous Agnostic Hypothesis Tests]]>
http://www.mdpi.com/1099-4300/18/7/256
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized in terms of a Bayesian decision theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region estimator-based tests. Examples of such tests are provided.
Entropy 2016, 18(7), 256 (Article); doi: 10.3390/e18070256. Published 2016-07-13. Authors: Luís Esteves, Rafael Izbicki, Julio Stern, Rafael Stern.
<![CDATA[Entropy, Vol. 18, Pages 259: Ensemble Equivalence for Distinguishable Particles]]>
http://www.mdpi.com/1099-4300/18/7/259
Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann’s correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive.
Entropy 2016, 18(7), 259 (Article); doi: 10.3390/e18070259. Published 2016-07-13. Authors: Antonio Fernández-Peralta, Raúl Toral.
<![CDATA[Entropy, Vol. 18, Pages 254: Link between Lie Group Statistical Mechanics and Thermodynamics of Continua]]>
http://www.mdpi.com/1099-4300/18/7/254
In this work, we consider the value of the momentum map of the symplectic mechanics as an affine tensor called momentum tensor. From this point of view, we analyze the underlying geometric structure of the theories of Lie group statistical mechanics and relativistic thermodynamics of continua, formulated by Souriau independently of each other. We bridge the gap between them in the classical Galilean context. These geometric structures of the thermodynamics are rich and we think they might be a source of inspiration for the geometric theory of information based on the concept of entropy.
Entropy 2016, 18(7), 254 (Article); doi: 10.3390/e18070254. Published 2016-07-12. Author: Géry de Saxcé.
<![CDATA[Entropy, Vol. 18, Pages 253: The Use of Denoising and Analysis of the Acoustic Signal Entropy in Diagnosing Engine Valve Clearance]]>
http://www.mdpi.com/1099-4300/18/7/253
The paper presents a method for processing acoustic signals which allows the extraction, from a very noisy signal, of components which contain diagnostically useful information on the increased valve clearance of a combustion engine. This method used two-stage denoising of the acoustic signal performed by means of a discrete wavelet transform. Afterwards, based on the signal cleaned-up in this manner, its entropy was calculated as a quantitative measure of qualitative changes caused by the excessive clearance. The testing and processing of the actual acoustic signal of a combustion engine enabled clear extraction of components which contain information on the valve clearance being diagnosed.
Entropy 2016, 18(7), 253 (Article); doi: 10.3390/e18070253. Published 2016-07-12. Authors: Tomasz Figlus, Jozef Gnap, Tomáš Skrúcaný, Branislav Šarkan, Jozef Stoklosa.
<![CDATA[Entropy, Vol. 18, Pages 246: Effect of a Percutaneous Coronary Intervention Procedure on Heart Rate Variability and Pulse Transit Time Variability: A Comparison Study Based on Fuzzy Measure Entropy]]>
http://www.mdpi.com/1099-4300/18/7/246
Percutaneous coronary intervention (PCI) is a common treatment for patients with coronary artery disease (CAD), but its effect on synchronously measured heart rate variability (HRV) and pulse transit time variability (PTTV) has not been well established. This study aimed to verify whether PCI for CAD patients affects both HRV and PTTV parameters. Sixteen CAD patients were enrolled. Two five-minute ECG and finger photoplethysmography (PPG) signals were recorded, one within 24 h before PCI and another within 24 h after PCI. The changes in RR and pulse transit time (PTT) intervals due to the PCI procedure were first compared. Then, HRV and PTTV were evaluated by a standard short-term time-domain variability index, the standard deviation of the time series (SDTS), and our previously developed entropy-based index, fuzzy measure entropy (FuzzyMEn). To test the effect of time series length on the HRV and PTTV results, we segmented the RR and PTT time series using four time windows of 200, 100, 50 and 25 beats, respectively. The PCI-induced changes in HRV and PTTV, as well as in RR and PTT intervals, are different. The PCI procedure significantly decreased RR intervals (before PCI 973 ± 85 vs. after PCI 907 ± 100 ms, p < 0.05) while significantly increasing PTT intervals (207 ± 18 vs. 214 ± 19 ms, p < 0.01). For HRV, SDTS output significantly lower values after PCI only for the 100- and 25-beat time windows, with no significant decreases for the other two time windows. By contrast, FuzzyMEn gave significantly lower values after PCI for all four time windows (all p < 0.05). For PTTV, SDTS hardly changed after PCI at any time window (all p > 0.90), whereas FuzzyMEn still reported significantly lower values (p < 0.05 for the 25-beat time window and p < 0.01 for the other three time windows). For both HRV and PTTV, as the time window increased, SDTS decreased while FuzzyMEn increased.
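A minimal sketch of the local component of a fuzzy entropy measure may help here: the hard tolerance of sample entropy is replaced by an exponential membership function and each template has its baseline removed. This is an illustration only, not the authors’ full FuzzyMEn (which combines local and global similarity), and all parameters are assumptions.

```python
import math
import random

def fuzzy_entropy(x, m=2, r=0.2, p=2):
    """Local fuzzy entropy: like sample entropy, but each template is
    mean-removed and similarity is the fuzzy weight exp(-d**p / r)."""
    n = len(x)
    def phi(length):
        templates = []
        for i in range(n - m):                 # equal counts for m and m + 1
            t = x[i:i + length]
            mu = sum(t) / length
            templates.append([v - mu for v in t])
        total = pairs = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                d = max(abs(a - b) for a, b in zip(templates[i], templates[j]))
                total += math.exp(-(d ** p) / r)  # graded, not 0/1, similarity
                pairs += 1
        return total / pairs
    return -math.log(phi(m + 1) / phi(m))

# A regular (periodic) series scores lower than an irregular one.
random.seed(7)
regular = [math.sin(0.5 * i) for i in range(300)]
irregular = [random.gauss(0, 1) for _ in range(300)]
print(fuzzy_entropy(regular), fuzzy_entropy(irregular))
```

The graded membership function is what makes such indices less brittle than SDTS-style statistics on short windows, which is the contrast the study exploits.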
This pilot study demonstrated that the RR interval decreased whereas the PTT interval increased after the PCI procedure, and that there were significant reductions in both HRV and PTTV immediately after PCI according to the FuzzyMEn method, indicating changes in the underlying mechanisms of the cardiovascular system.
Entropy 2016, 18(7), 246 (Article); doi: 10.3390/e18070246. Published 2016-07-09. Authors: Guang Zhang, Chengyu Liu, Lizhen Ji, Jing Yang, Changchun Liu.
<![CDATA[Entropy, Vol. 18, Pages 251: Maximum Entropy Learning with Deep Belief Networks]]>
http://www.mdpi.com/1099-4300/18/7/251
Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of the parameters in the DBN allows more effective generalization, less bias towards data distributions, and greater robustness to over-fitting compared to ML learning. Results on text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data are limited.
Entropy 2016, 18(7), 251 (Article); doi: 10.3390/e18070251. Published 2016-07-08. Authors: Payton Lin, Szu-Wei Fu, Syu-Siang Wang, Ying-Hui Lai, Yu Tsao.
<![CDATA[Entropy, Vol. 18, Pages 249: Modeling Fluid’s Dynamics with Master Equations in Ultrametric Spaces Representing the Treelike Structure of Capillary Networks]]>
http://www.mdpi.com/1099-4300/18/7/249
We present a new conceptual approach for the modeling of fluid flows in random porous media, based on explicit exploration of the treelike geometry of complex capillary networks. Such patterns can be represented mathematically as ultrametric spaces, and the dynamics of fluids by ultrametric diffusion. The images of p-adic fields, extracted from real multiscale rock samples and from some reference images, are depicted. In this model the porous background is treated as the environment contributing to the coefficients of the evolutionary equations. For the simplest trees, these equations are essentially less complicated than those with fractional differential operators which are commonly applied in geological studies looking for fractional analogs to conventional Euclidean space but with anomalous scaling and diffusion properties. It is possible to solve the former equations analytically and, in particular, to find stationary solutions. The main aim of this paper is to attract the attention of researchers working on the modeling of geological processes to the novel ultrametric approach, and to show some examples from petroleum reservoir static and dynamic characterization able to integrate the p-adic approach with multifractals, thermodynamics and scaling. We also present a non-mathematician-friendly review of trees, ultrametric spaces and pseudo-differential operators on such spaces.
Entropy 2016, 18(7), 249 (Article); doi: 10.3390/e18070249. Published 2016-07-07. Authors: Andrei Khrennikov, Klaudia Oleschko, María Correa López.
<![CDATA[Entropy, Vol. 18, Pages 250: Thermoeconomic Coherence: A Methodology for the Analysis and Optimisation of Thermal Systems]]>
http://www.mdpi.com/1099-4300/18/7/250
In the field of thermal systems, different approaches and methodologies have been proposed to merge thermodynamics and economics. They are usually referred to as thermoeconomic methodologies, and their objective is to find the optimum design of a thermal system given a specific objective function. Some thermoeconomic analyses go beyond that objective and attempt to determine whether every component of the system is correctly designed, or to quantify the inefficiencies of the components in economic terms. This paper takes another step in that direction and presents a new methodology to measure the thermoeconomic coherence of thermal systems, as well as the contribution of each parameter of the system to that coherence. It is based on the equality of marginal costs at the optimum. The methodology establishes a criterion for designing the system coherently. Additionally, it may be used to evaluate how far a specific design is from the optimum and which components are undersized or oversized, and to measure the strength of the restrictions of the system. Finally, it may be extended to the analysis of uncertainties in the design process, providing a coherent design and sizing of the components with high uncertainties.
Entropy 2016, 18(7), 250 (Article); doi: 10.3390/e18070250. Published 2016-07-05. Authors: Antonio Rovira, José Martínez-Val, Manuel Valdés.
<![CDATA[Entropy, Vol. 18, Pages 248: Cumulative Paired φ-Entropy]]>
http://www.mdpi.com/1099-4300/18/7/248
A new kind of entropy will be introduced which generalizes both the differential entropy and the cumulative (residual) entropy. The generalization is twofold. First, we simultaneously define the entropy for cumulative distribution functions (cdfs) and survivor functions (sfs), instead of defining it separately for densities, cdfs, or sfs. Secondly, we consider a general “entropy generating function” φ, the same way Burbea et al. (IEEE Trans. Inf. Theory 1982, 28, 489–495) and Liese et al. (Convex Statistical Distances; Teubner-Verlag, 1987) did in the context of φ-divergences. Combining the ideas of φ-entropy and cumulative entropy leads to the new “cumulative paired φ-entropy” ( C P E φ ). This new entropy has already been discussed in at least four scientific disciplines, be it with certain modifications or simplifications. In fuzzy set theory, for example, cumulative paired φ-entropies were defined for membership functions, whereas in uncertainty and reliability theories some variations of C P E φ were recently considered as measures of information. With a single exception, the discussions in these scientific disciplines appear to have been held independently of each other. We consider C P E φ for continuous cdfs and show that C P E φ is a measure of dispersion rather than a measure of information. In the first place, this is demonstrated by deriving an upper bound determined by the standard deviation and by solving the maximum entropy problem under the restriction of a fixed variance. Next, this paper shows that C P E φ satisfies the axioms of a dispersion measure. The corresponding dispersion functional can easily be estimated by an L-estimator with all of its known asymptotic properties. C P E φ is the basis for several related concepts like mutual φ-information, φ-correlation, and φ-regression, which generalize Gini correlation and Gini regression.
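To make the construction concrete, here is a small sketch of an empirical C P E φ for the Shannon-type generator φ(u) = −u ln u, integrating φ(F) + φ(1 − F) over the empirical cdf. This is illustrative code under our own discretization, not the authors’ L-estimator.

```python
import math

def cpe(sample, phi=lambda u: -u * math.log(u) if 0.0 < u < 1.0 else 0.0):
    """Empirical cumulative paired phi-entropy:
    the integral of phi(F(x)) + phi(1 - F(x)) dx, with F the empirical cdf."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(1, n):
        F = i / n                        # ecdf value on [xs[i-1], xs[i])
        total += (xs[i] - xs[i - 1]) * (phi(F) + phi(1 - F))
    return total

data = [0.3, 1.2, -0.7, 2.5, 0.0, 1.1, -1.4, 0.8]
print(cpe(data))
# A dispersion measure is scale-equivariant: stretching the data stretches CPE.
print(cpe([2 * x for x in data]))  # exactly twice cpe(data)
```

The scale equivariance shown in the example is the behaviour of a dispersion functional, not of an information measure, which is precisely the paper’s point.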
In addition, linear rank tests for scale that are based on the new entropy have been developed. We show that almost all known linear rank tests are special cases, and we introduce certain new tests. Moreover, formulas for different distributions and entropy calculations are presented for C P E φ when the cdf is available in closed form.
Entropy 2016, 18(7), 248 (Article); doi: 10.3390/e18070248. Published 2016-07-01. Authors: Ingo Klein, Benedikt Mangold, Monika Doll.
<![CDATA[Entropy, Vol. 18, Pages 247: Entropy? Honest!]]>
http://www.mdpi.com/1099-4300/18/7/247
Here we deconstruct, and then in a reasoned way reconstruct, the concept of “entropy of a system”, paying particular attention to where the randomness may be coming from. We start with the core concept of entropy as a count associated with a description; this count (traditionally expressed in logarithmic form for a number of good reasons) is in essence the number of possibilities—specific instances or “scenarios”—that match that description. Very natural (and virtually inescapable) generalizations of the idea of description are the probability distribution and its quantum mechanical counterpart, the density operator. We track the process of dynamically updating entropy as a system evolves. Three factors may cause entropy to change: (1) the system’s internal dynamics; (2) unsolicited external influences on it; and (3) the approximations one has to make when one tries to predict the system’s future state. The latter task is usually hampered by hard-to-quantify aspects of the original description, limited data storage and processing resources, and possibly algorithmic inadequacy. Factors 2 and 3 introduce randomness—often huge amounts of it—into one’s predictions and accordingly degrade them. When forecasting, as long as the entropy bookkeeping is conducted in an honest fashion, this degradation will always lead to an entropy increase. To clarify the above point we introduce the notion of honest entropy, which coalesces much of what is of course already done, often tacitly, in responsible entropy-bookkeeping practice. This notion—we believe—will help to fill an expressivity gap in scientific discourse. With its help, we shall prove that any dynamical system—not just our physical universe—strictly obeys Clausius’s original formulation of the second law of thermodynamics if and only if it is invertible.
Thus this law is a tautological property of invertible systems! Entropy 2016, 18(7), 247; doi: 10.3390/e18070247 (Review; Tommaso Toffoli)<![CDATA[Entropy, Vol. 18, Pages 244: Multiatom Quantum Coherences in Micromasers as Fuel for Thermal and Nonthermal Machines]]>
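The count-based notion of entropy at the heart of “Entropy? Honest!” above, entropy as the logarithm of the number of scenarios matching a description, can be illustrated with a toy sketch (not from the paper; the four-bit "system" and the `matches` predicate are hypothetical stand-ins for a description):

```python
import math
from itertools import product

# A toy "system" of 4 two-state components; a microstate is a 4-tuple of bits.
MICROSTATES = list(product([0, 1], repeat=4))

def entropy_of_description(matches):
    """Entropy = log2 of the number of microstates matching a description."""
    count = sum(1 for s in MICROSTATES if matches(s))
    return math.log2(count)

# The description "exactly two components are on" matches C(4,2) = 6 states.
h = entropy_of_description(lambda s: sum(s) == 2)
print(h)  # log2(6) ≈ 2.585
```

The vacuous description (every state matches) yields the maximum entropy of 4 bits, while a description matched by a single state has zero entropy, mirroring the abstract's picture of a description pinning down more or fewer possibilities.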
http://www.mdpi.com/1099-4300/18/7/244
In this paper, we address the question: To what extent is the quantum state preparation of multiatom clusters (before they are injected into the microwave cavity) instrumental for determining not only the kind of machine we may operate, but also the quantitative bounds of its performance? Figuratively speaking, if the multiatom cluster is the “crude oil”, the question is: Which preparation of the cluster is the refining process that can deliver a “gasoline” with a “specific octane”? We classify coherences or quantum correlations among the atoms according to their ability to serve as: (i) fuel for nonthermal machines, corresponding to atomic states whose coherences displace or squeeze the cavity field, as well as cause its heating; and (ii) fuel that is purely “combustible”, i.e., corresponds to atomic states that only allow for heat and entropy exchange with the field and can energize a proper heat engine. We identify highly promising multiatom states for each kind of fuel and propose viable experimental schemes for their implementation. Entropy 2016, 18(7), 244; doi: 10.3390/e18070244 (Article; Ceren Dağ, Wolfgang Niedenzu, Özgür Müstecaplıoğlu and Gershon Kurizki)<![CDATA[Entropy, Vol. 18, Pages 245: Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map]]>
http://www.mdpi.com/1099-4300/18/7/245
Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Combined with existing multiview images, the depth map allows images to be synthesized efficiently for any virtual viewpoint, displaying more realistic 3D scenes. Unlike a conventional 2D texture image, the depth map contains a great deal of spatially redundant information that is unnecessary for view synthesis and may waste compressed bits, especially when MD coding is used for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the number of high-frequency coefficients in the DCT domain to be removed. For low computational complexity, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. Experimental results show that, compared to the corresponding conventional scheme, the proposed method achieves better central and side rate-distortion performance. Entropy 2016, 18(7), 245; doi: 10.3390/e18070245 (Article; Sen Han, Huihui Bai and Mengmeng Zhang)<![CDATA[Entropy, Vol. 18, Pages 243: Nonlinear Thermodynamic Analysis and Optimization of a Carnot Engine Cycle]]>
http://www.mdpi.com/1099-4300/18/7/243
As part of the efforts to unify the various branches of Irreversible Thermodynamics, this work reconsiders the Carnot engine, taking into account finite physical dimensions (heat transfer conductances) and the finite speed of the piston. The models introduce the irreversibility of the engine by two methods involving different constraints. The first introduces irreversibility through a so-called irreversibility ratio in the entropy balance applied to the cycle, while the second expresses it through the entropy generation rate. Various forms of heat transfer laws are analyzed, but most results are given for the linear law. Individual cases are also studied and reported in order to provide the results in simple analytical form. The engine model developed allows a formal optimization using the calculus of variations. Entropy 2016, 18(7), 243; doi: 10.3390/e18070243 (Article; Michel Feidt, Monica Costea, Stoian Petrescu and Camelia Stanciu)<![CDATA[Entropy, Vol. 18, Pages 242: Fast EEMD Based AM-Correntropy Matrix and Its Application on Roller Bearing Fault Diagnosis]]>
http://www.mdpi.com/1099-4300/18/7/242
Roller bearings play a significant role in industrial sectors. To improve roller bearing fault diagnosis under varying rotating conditions, this paper proposes a novel fault characteristic: the Amplitude Modulation (AM) based correntropy extracted from the Intrinsic Mode Functions (IMFs) obtained by Fast Ensemble Empirical Mode Decomposition (FEEMD), with a Least Squares Support Vector Machine (LSSVM) employed for intelligent fault identification. Firstly, the roller bearing vibration acceleration signal is decomposed by FEEMD to extract the IMFs. Secondly, the IMF correntropy matrix (IMFCM), used as the fault feature matrix, is calculated from the AM-correntropy model of the original vibration signal and the IMFs. Finally, LSSVM produces the fault identification results for the roller bearing. Bearing identification experiments under stationary rotating conditions verified that IMFCM yields more stable and higher diagnosis accuracy than conventional fault features such as energy moment, fuzzy entropy, and spectral kurtosis. Additionally, IMFCM shows more diagnostic robustness than conventional fault features under cross-mixed roller bearing operating conditions, with diagnosis accuracy above 84%, much higher than that of the traditional features. In conclusion, FEEMD-IMFCM-LSSVM is a reliable technique for roller bearing fault diagnosis under constant or varying operating conditions, and holds promise for broad application. Entropy 2016, 18(7), 242; doi: 10.3390/e18070242 (Article; Yunxiao Fu, Limin Jia, Yong Qin, Jie Yang and Ding Fu)<![CDATA[Entropy, Vol. 18, Pages 241: A Simulation-Based Study on Bayesian Estimators for the Skew Brownian Motion]]>
http://www.mdpi.com/1099-4300/18/7/241
In analyzing a temporal data set from a continuous variable, diffusion processes can be suitable under certain conditions, depending on the distribution of increments. We are interested in processes where a semi-permeable barrier splits the state space, producing a skewed diffusion that can have different rates on each side. In this work, the asymptotic behavior of some Bayesian inferences for this class of processes is discussed and validated through simulations. As an application, we model the location of South American sea lions (Otaria flavescens) on the coast of Calbuco, southern Chile, which can be used to understand how the foraging behavior of apex predators varies temporally and spatially. Entropy 2016, 18(7), 241; doi: 10.3390/e18070241 (Article; Manuel Barahona, Laura Rifo, Maritza Sepúlveda and Soledad Torres)<![CDATA[Entropy, Vol. 18, Pages 238: Strong Secrecy Capacity of a Class of Wiretap Networks]]>
http://www.mdpi.com/1099-4300/18/7/238
This paper considers a special class of wiretap networks with a single source node and K sink nodes. The source message is encoded into a binary sequence of length N, divided into K subsequences, and sent to the K sink nodes through noiseless channels. The legitimate receivers are able to obtain subsequences from arbitrary μ1 = Kα1 sink nodes, while eavesdroppers are able to observe subsequences from arbitrary μ2 = Kα2 sink nodes, where 0 ≤ α2 < α1 ≤ 1. The goal is to enable the receivers to recover the source message with vanishing decoding error probability while keeping the eavesdroppers ignorant of the source message. This communication model is an extension of wiretap channel II. The secrecy capacity with respect to the strong secrecy criterion is established. In the proof of the direct part, a codebook is generated by a randomized scheme and partitioned by Csiszár’s almost independent coloring scheme. Unlike linear network coding schemes, our coding scheme operates over the binary field and is hence independent of the scale of the network. Entropy 2016, 18(7), 238; doi: 10.3390/e18070238 (Article; Dan He and Wangmei Guo)<![CDATA[Entropy, Vol. 18, Pages 239: Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation]]>
http://www.mdpi.com/1099-4300/18/7/239
The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the recursively estimated input power, reducing computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum mean squared error (MSE) and faster convergence than the original MEE algorithm. At the same convergence speed, its steady-state MSE improvement exceeds 3 dB. Entropy 2016, 18(7), 239; doi: 10.3390/e18070239 (Article; Namyong Kim and Kihyeon Kwon)<![CDATA[Entropy, Vol. 18, Pages 240: When Is an Area Law Not an Area Law?]]>
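The normalization idea in the normalized-MEE abstract above, dividing the step size by a recursively estimated input power, can be sketched generically (a hedged illustration of recursive power estimation for an adaptive step size, not the authors' exact MEE recursion; the forgetting factor beta = 0.9 is an assumed value):

```python
def recursive_power(samples, beta=0.9):
    """Recursive power estimate: P(n) = beta*P(n-1) + (1 - beta)*x(n)^2."""
    p, out = 0.0, []
    for x in samples:
        p = beta * p + (1.0 - beta) * x * x
        out.append(p)
    return out

def normalized_step_sizes(samples, mu=0.1, eps=1e-8, beta=0.9):
    """Step size mu divided by the running power estimate (eps avoids 0/0)."""
    return [mu / (p + eps) for p in recursive_power(samples, beta)]

# For a unit-power input the estimate climbs toward 1: 0.1, 0.19, 0.271, ...
powers = recursive_power([1.0, 1.0, 1.0])
```

As the power estimate grows, the effective step size shrinks, which is what keeps the update stable when the input power is large.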
http://www.mdpi.com/1099-4300/18/7/240
Entanglement entropy is typically proportional to area, but sometimes it acquires an additional logarithmic pre-factor. We offer some intuitive explanations for these facts. Entropy 2016, 18(7), 240; doi: 10.3390/e18070240 (Article; Anushya Chandran, Chris Laumann and Rafael Sorkin)<![CDATA[Entropy, Vol. 18, Pages 237: Thermodynamic Analysis of Resources Used in Thermal Spray Processes: Energy and Exergy Methods]]>
http://www.mdpi.com/1099-4300/18/7/237
In manufacturing, thermal spray technology encompasses a group of coating processes that provide functional surfaces to improve the performance of components and protect them from corrosion, wear, heat and other failure modes. Many types and forms of feedstock can be thermally sprayed, and each requires different process conditions and life cycle preparations. The required thermal energy is generated by a chemical (combustion) or electrical (plasma or arc) energy source. Because of the high inefficiencies associated with energy and material consumption in these processes, a comprehensive analysis of the resources used is a promising route to sustainable improvement. This study aims to identify and compare the influence of different forms of feedstock (powder, suspension) and energy sources (combustion, plasma) on the efficiency and effectiveness of energy conversion and resource consumption for different thermal spray processes, based on energy and exergy analysis. The exergy destruction ratio and effectiveness efficiency are used to evaluate energy conversion efficiency. The degree of perfection and the degree of energy ratio are applied to account for the intensity of resource consumption (energy or material) in thermal spray processes. High velocity suspension flame spray is shown to have the lowest effectiveness efficiency and the highest exergy destruction of the thermal spray processes compared. For resource accounting purposes, suspension thermal spray generally showed a lower degree of perfection, and accordingly less efficient resource use, than powder thermal spray. Entropy 2016, 18(7), 237; doi: 10.3390/e18070237 (Article; Kamran Taheri, Mohamed Elhoriny, Martin Plachetta and Rainer Gadow)<![CDATA[Entropy, Vol. 18, Pages 236: Generalisations of Fisher Matrices]]>
http://www.mdpi.com/1099-4300/18/6/236
Fisher matrices play an important role in experimental design and in data analysis. Their primary role is to make predictions for the inference of model parameters—both their errors and covariances. In this short review, I outline a number of extensions to the simple Fisher matrix formalism, covering a number of recent developments in the field. These are: (a) situations where the data (in the form of (x, y) pairs) have errors in both x and y; (b) modifications to parameter inference in the presence of systematic errors, or through fixing the values of some model parameters; (c) Derivative Approximation for LIkelihoods (DALI): higher-order expansions of the likelihood surface, going beyond the Gaussian shape approximation; and (d) extensions of the Fisher-like formalism to treat model selection problems with Bayesian evidence. Entropy 2016, 18(6), 236; doi: 10.3390/e18060236 (Review; Alan Heavens)<![CDATA[Entropy, Vol. 18, Pages 235: A PUT-Based Approach to Automatically Extracting Quantities and Generating Final Answers for Numerical Attributes]]>
http://www.mdpi.com/1099-4300/18/6/235
Automatically extracting quantities and generating final answers for numerical attributes is useful on many occasions, including question answering, image processing, and human-computer interaction. A common approach is to learn linguistic templates or wrappers and employ some algorithm or model to generate a final answer. However, building linguistic templates or wrappers is a tough task, and they are domain-dependent. To spare the builder from constructing linguistic templates or wrappers, we propose a new approach to final answer generation based on the Predicates-Units Table (PUT), a mini domain-independent knowledge base. It is worth pointing out that quantities are often not well represented: they may lack units, they may be wrong for a given question, and even when well represented, their units may be inconsistent. These cases strongly affect final answer solving. One thousand nine hundred twenty-six real queries were employed to test the proposed method, and the experimental results show that the average correctness ratio of our approach is 87.1%. Entropy 2016, 18(6), 235; doi: 10.3390/e18060235 (Article; Yaqing Liu, Lidong Wang, Rong Chen, Yingjie Song and Yalin Cai)<![CDATA[Entropy, Vol. 18, Pages 234: Constant Slope Maps and the Vere-Jones Classification]]>
http://www.mdpi.com/1099-4300/18/6/234
We study continuous countably-piecewise monotone interval maps and formulate conditions under which these are conjugate to maps of constant slope, particularly when this slope is given by the topological entropy of the map. We confine our investigation to the Markov case and phrase our conditions in the terminology of the Vere-Jones classification of infinite matrices. Entropy 2016, 18(6), 234; doi: 10.3390/e18060234 (Article; Jozef Bobok and Henk Bruin)<![CDATA[Entropy, Vol. 18, Pages 232: 3D Buoyancy-Induced Flow and Entropy Generation of Nanofluid-Filled Open Cavities Having Adiabatic Diamond Shaped Obstacles]]>
http://www.mdpi.com/1099-4300/18/6/232
A three-dimensional computational solution has been obtained to investigate the natural convection and entropy generation of nanofluid-filled open cavities with an adiabatic diamond-shaped obstacle. In the model, the finite volume technique was used to solve the governing equations. In the configuration studied, the cavity is heated from the left vertical wall and the diamond-shaped obstacle is adiabatic. The effects of the nanoparticle volume fraction, Rayleigh number (10^3 ≤ Ra ≤ 10^6) and width of the diamond-shaped obstacle were studied as governing parameters. It was found that the geometry of the partition is a control parameter for heat and fluid flow inside the open enclosure. Entropy 2016, 18(6), 232; doi: 10.3390/e18060232 (Article; Lioua Kolsi, Omid Mahian, Hakan Öztop, Walid Aich, Mohamed Borjini, Nidal Abu-Hamdeh and Habib Aissia)<![CDATA[Entropy, Vol. 18, Pages 231: Product Design Time Forecasting by Kernel-Based Regression with Gaussian Distribution Weights]]>
http://www.mdpi.com/1099-4300/18/6/231
Design time forecasting suffers from small samples and heteroscedastic noise. To address these problems, a kernel-based regression with Gaussian distribution weights (GDW-KR) is proposed here. GDW-KR maintains a Gaussian distribution over weight vectors for the regression and seeks the least informative distribution among those that keep the target value within the confidence interval of the forecast value. GDW-KR inherits the benefits of Gaussian margin machines. By assuming a Gaussian distribution over weight vectors, it simultaneously offers a point forecast and its confidence interval, thus providing more information about product design time. Experiments with real examples verify the effectiveness and flexibility of GDW-KR. Entropy 2016, 18(6), 231; doi: 10.3390/e18060231 (Article; Zhi-Gen Shang and Hong-Sen Yan)<![CDATA[Entropy, Vol. 18, Pages 233: Entropic Measure of Time, and Gas Expansion in Vacuum]]>
http://www.mdpi.com/1099-4300/18/6/233
This study considers the advantages of a measure of time based on the entropy change under irreversible processes (entropy production). Such a measure is introduced using the example of the non-equilibrium expansion of an ideal gas in vacuum. It is shown that, in the general case, this measure of time is nonlinearly related to the reference measure, which is assumed uniform by convention. The connection between this result and the results of other authors investigating the measure of time in biological and cosmological problems is noted. Entropy 2016, 18(6), 233; doi: 10.3390/e18060233 (Article; Leonid Martyushev and Evgenii Shaiapin)<![CDATA[Entropy, Vol. 18, Pages 229: Investigating Aging-Related Changes in the Coordination of Agonist and Antagonist Muscles Using Fuzzy Entropy and Mutual Information]]>
http://www.mdpi.com/1099-4300/18/6/229
Aging alters muscular coordination patterns. This study aimed to investigate aging-related changes in the coordination of agonist and antagonist muscles from two aspects: the activities of individual muscles and the inter-muscular coupling. Eighteen young subjects and ten elderly subjects were recruited to modulate the agonist muscle activity to track a target during voluntary isometric elbow flexion and extension. Normalized muscle activation and fuzzy entropy (FuzzyEn) were applied to depict the activities of the biceps and triceps, and mutual information (MI) was utilized to measure the inter-muscular coupling between them. With aging, the agonist activation decreased and the antagonist activation increased significantly during elbow flexion and extension. FuzzyEn values of the agonist electromyogram (EMG) were similar between the two age groups, while FuzzyEn values of the antagonist EMG increased significantly with aging during elbow extension, and MI decreased significantly with aging during elbow extension. These results indicate increased antagonist co-activation and decreased inter-muscular coupling with aging during elbow extension, which might result from reduced reciprocal inhibition and the recruitment of additional cortical-spinal pathways connected to the biceps. Based on FuzzyEn and MI, this study provides a comprehensive understanding of the mechanisms underlying aging-related changes in the coordination of agonist and antagonist muscles. Entropy 2016, 18(6), 229; doi: 10.3390/e18060229 (Article; Wenbo Sun, Jingtao Liang, Yuan Yang, Yuanyu Wu, Tiebin Yan and Rong Song)<![CDATA[Entropy, Vol. 18, Pages 213: Optimal Noise Enhanced Signal Detection in a Unified Framework]]>
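The fuzzy entropy (FuzzyEn) measure used in the muscle-coordination study above can be sketched in pure Python, following the common exponential-membership formulation; the defaults m = 2, r = 0.2 and gradient n = 2 are conventional choices, not necessarily those of the paper:

```python
import math

def _phi(x, m, r, n=2):
    """Average fuzzy similarity of all pairs of length-m templates."""
    N = len(x)
    templates = []
    for i in range(N - m + 1):
        seg = x[i:i + m]
        mu = sum(seg) / m
        templates.append([v - mu for v in seg])  # remove each template's baseline
    total, count = 0.0, 0
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            # Chebyshev distance between baseline-removed templates
            d = max(abs(a - b) for a, b in zip(templates[i], templates[j]))
            total += math.exp(-(d ** n) / r)     # fuzzy membership degree
            count += 1
    return total / count

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """FuzzyEn = ln(phi_m) - ln(phi_{m+1})."""
    return math.log(_phi(x, m, r, n)) - math.log(_phi(x, m + 1, r, n))

# A strictly periodic signal should score lower than an irregular one.
```

Higher FuzzyEn indicates a more irregular (less predictable) signal, which is why an aging-related increase in antagonist-EMG FuzzyEn is read as a change in that muscle's activation pattern.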
http://www.mdpi.com/1099-4300/18/6/213
In this paper, a new framework for variable detectors is formulated to solve different noise-enhanced signal detection optimization problems, in which six disjoint sets of detector and discrete vector pairs are defined according to two inequality constraints on the detection and false-alarm probabilities. Theorems and algorithms built on the new framework are then presented to search for the optimal noise-enhanced solutions that maximize the relative improvements of the detection and false-alarm probabilities, respectively. Further, the optimal noise-enhanced solution for the maximum overall improvement is obtained within the new framework, and the relationship among the three maxima is presented. In addition, sufficient conditions for improvability or non-improvability under the two constraints are given. Finally, numerous examples illustrate the theoretical results, and the proofs of the main theorems are given in the Appendix. Entropy 2016, 18(6), 213; doi: 10.3390/e18060213 (Article; Ting Yang, Shujun Liu, Mingchun Tang, Kui Zhang and Xinzheng Zhang)<![CDATA[Entropy, Vol. 18, Pages 230: On Extensions over Semigroups and Applications]]>
http://www.mdpi.com/1099-4300/18/6/230
Applying a theorem of Rhemtulla and Formanek, we partially solve an open problem raised by Hochman with an affirmative answer. Namely, we show that if G is a countable torsion-free locally nilpotent group that acts by homeomorphisms on X, and S ⊂ G is a subsemigroup not containing the unit of G such that f ∈ 〈1, sf : s ∈ S〉 for every f ∈ C(X), then (X, G) has zero topological entropy. Entropy 2016, 18(6), 230; doi: 10.3390/e18060230 (Article; Wen Huang, Lei Jin and Xiangdong Ye)<![CDATA[Entropy, Vol. 18, Pages 197: Information and Selforganization: A Unifying Approach and Applications]]>
http://www.mdpi.com/1099-4300/18/6/197
Selforganization is a process by which the interaction between the parts of a complex system gives rise to the spontaneous emergence of patterns, structures or functions. In this interaction the system elements exchange matter, energy and information. We focus our attention on the relations between selforganization and information in general, and on the way they are linked to cognitive processes in particular. We do so from the analytical and mathematical perspective of the “second foundation of synergetics” and its “synergetic computer”, and with reference to several forms of information: Shannon’s information, which deals with the quantity of a message irrespective of its meaning; semantic and pragmatic forms of information, which deal with the meaning conveyed by messages; and information adaptation, which refers to the interplay between Shannon’s information and semantic or pragmatic information. We first elucidate the relations between selforganization and information theoretically and mathematically, and then by means of specific case studies. Entropy 2016, 18(6), 197; doi: 10.3390/e18060197 (Article; Hermann Haken and Juval Portugali)<![CDATA[Entropy, Vol. 18, Pages 228: Discrete Time Dirac Quantum Walk in 3+1 Dimensions]]>
http://www.mdpi.com/1099-4300/18/6/228
In this paper we consider quantum walks whose evolution converges to the Dirac equation in the limit of small wave-vectors. We present exact Fast Fourier implementations of the Dirac quantum walks in one, two, and three space dimensions. The behaviour of particle states—defined as states smoothly peaked in some wave-vector eigenstate of the walk—is described by an approximate dispersive differential equation that for small wave-vectors gives the usual Dirac particle and antiparticle kinematics. The accuracy of the approximation is provided in terms of a lower bound on the fidelity between the exactly evolved state and the approximated one. The jittering of the position operator expectation value for states having both a particle and an antiparticle component is analytically derived and observed in the numerical implementations. Entropy 2016, 18(6), 228; doi: 10.3390/e18060228 (Article; Giacomo D’Ariano, Nicola Mosco, Paolo Perinotti and Alessandro Tosini)<![CDATA[Entropy, Vol. 18, Pages 227: Fractional-Order Grey Prediction Method for Non-Equidistant Sequences]]>
http://www.mdpi.com/1099-4300/18/6/227
There are many non-equidistant sequences in practical applications, due to random sampling, imperfect sensors, event-triggered phenomena, and so on. A new grey prediction method for non-equidistant sequences (r-NGM(1,1)) is proposed based on the basic grey model and the developed fractional-order non-equidistant accumulated generating operation (r-NAGO), with the accumulated order extended from positive to negative values. The r-NAGO removes the randomness of the original sequences through weighted accumulation and improves the exponential law of the accumulated sequences. Furthermore, the Levenberg–Marquardt algorithm is used to optimize the fractional order. The optimal r-NGM(1,1) enhances prediction performance on non-equidistant sequences. Results for three practical cases in engineering applications demonstrate that the proposed r-NGM(1,1) provides significantly better prediction performance than the traditional grey model. Entropy 2016, 18(6), 227; doi: 10.3390/e18060227 (Article; Yue Shen, Bo He and Ping Qin)<![CDATA[Entropy, Vol. 18, Pages 171: Information-Theoretic-Entropy Based Weight Aggregation Method in Multiple-Attribute Group Decision-Making]]>
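The fractional-order accumulation underlying the r-NAGO in the grey-prediction abstract above can be sketched for the simpler equidistant case (the paper's non-equidistant weighting is omitted; this is the standard gamma-function form of the fractional accumulated generating operation):

```python
import math

def fractional_ago(x, r):
    """Fractional-order accumulated generating operation (equidistant case).

    x_r(k) = sum_{i=1}^{k} Gamma(r + k - i) / (Gamma(k - i + 1) * Gamma(r)) * x(i)
    """
    out = []
    for k in range(1, len(x) + 1):
        s = 0.0
        for i in range(1, k + 1):
            w = math.gamma(r + k - i) / (math.gamma(k - i + 1) * math.gamma(r))
            s += w * x[i - 1]
        out.append(s)
    return out

# Order r = 1 recovers the classical first-order AGO (cumulative sum).
print(fractional_ago([1.0, 2.0, 3.0], 1.0))  # [1.0, 3.0, 6.0]
```

A non-integer order (e.g. r = 0.5) down-weights older samples more gently than full accumulation, which is the tuning knob the Levenberg–Marquardt step optimizes in the paper.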
http://www.mdpi.com/1099-4300/18/6/171
Weight aggregation is the key process in solving a multiple-attribute group decision-making (MAGDM) problem. This paper proposes an approach to objectivize subjective information and to aggregate information from the attribute values themselves and from decision-makers’ judgment. An MAGDM problem without information about the decision-makers’ and attributes’ weights is considered. To capture the decision-makers’ subjective preferences, their utility functions are introduced. The attribute value matrix is converted into a subjective attribute value matrix based on their subjective judgment of the attribute values. Using the entropy weighting technique, each decision-maker’s subjective weight on attributes and the objective weight on attributes are determined individually from the subjective attribute value matrix and the attribute value matrix. Based on the principle of minimum cross-entropy, all decision-makers’ subjective weights are integrated into a single weight vector that is closest to all decision-makers’ judgments without any extra information added. Then, by applying the principle of minimum cross-entropy again, a weight aggregation method is proposed to combine the subjective and objective weights of attributes. Finally, an MAGDM example of project selection illustrates the procedure of the proposed method. Entropy 2016, 18(6), 171; doi: 10.3390/e18060171 (Article; Dayi He, Jiaqiang Xu and Xiaoling Chen)<![CDATA[Entropy, Vol. 18, Pages 226: Nano-Crystallization of High-Entropy Amorphous NbTiAlSiWxNy Films Prepared by Magnetron Sputtering]]>
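The entropy weighting technique used in the MAGDM abstract above assigns larger objective weights to attributes whose values differ more across alternatives; a minimal sketch of the standard method follows (the example matrix is invented for illustration):

```python
import math

def entropy_weights(matrix):
    """Objective attribute weights via the entropy weighting technique.

    matrix[i][j] is the (non-negative) value of alternative i on attribute j.
    """
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Shannon entropy of the column, normalized to [0, 1] by ln(m).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1.0 - e)  # low entropy -> informative attribute
    s = sum(divergence)
    return [d / s for d in divergence]

# The first attribute varies strongly across alternatives, the second barely;
# the first therefore receives the larger weight, and the weights sum to 1.
w = entropy_weights([[4.0, 10.0], [2.0, 10.1], [6.0, 9.9]])
```

The same routine can be run on the subjective attribute value matrix to obtain each decision-maker's subjective weights, as the abstract describes.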
http://www.mdpi.com/1099-4300/18/6/226
High-entropy amorphous NbTiAlSiWxNy films (x = 0 or 1, i.e., NbTiAlSiNy and NbTiAlSiWNy) were prepared by magnetron sputtering in a mixed N2 + Ar atmosphere (N2 + Ar = 24 standard cubic centimeters per minute (sccm), with N2 = 0, 4, and 8 sccm). All the as-deposited films present amorphous structures, which remain stable at 700 °C for over 24 h. After heat treatment at 1000 °C the films begin to crystallize: the NbTiAlSiNy films (N2 = 4, 8 sccm) exhibit a face-centered cubic (FCC) structure, while the NbTiAlSiW metallic films show a body-centered cubic (BCC) structure and then transition into an FCC structure composed of nanoscale particles with increasing nitrogen flow rate. The hardness and modulus of the as-deposited NbTiAlSiNy films reach maximum values of 20.5 GPa and 206.8 GPa, respectively. For the as-deposited NbTiAlSiWNy films, the hardness and modulus increase to maximum values of 13.6 GPa and 154.4 GPa, respectively, and then decrease as the N2 flow rate is increased. Both films could be potential candidates for protective coatings at high temperature. Entropy 2016, 18(6), 226; doi: 10.3390/e18060226 (Article; Wenjie Sheng, Xiao Yang, Cong Wang and Yong Zhang)<![CDATA[Entropy, Vol. 18, Pages 224: Entropy Generation on MHD Eyring–Powell Nanofluid through a Permeable Stretching Surface]]>
http://www.mdpi.com/1099-4300/18/6/224
In this article, entropy generation of an Eyring–Powell nanofluid through a permeable stretching surface has been investigated. The impact of magnetohydrodynamics (MHD) and nonlinear thermal radiation is also taken into account. The governing flow problem is modeled with the help of similarity transformation variables. The resulting nonlinear ordinary differential equations are solved numerically with a combination of the successive linearization method and the Chebyshev spectral collocation method. The impact of all the emerging parameters, such as the Hartmann number, Prandtl number, radiation parameter, Lewis number, thermophoresis parameter, Brownian motion parameter, Reynolds number, fluid parameter, and Brinkmann number, is discussed with the help of graphs and tables. It is observed that the influence of the magnetic field opposes the flow. Moreover, the entropy generation profile behaves as an increasing function of all the physical parameters. Entropy 2016, 18(6), 224; doi: 10.3390/e18060224 (Article; Muhammad Bhatti, Tehseen Abbas, Mohammad Rashidi, Mohamed Ali and Zhigang Yang)<![CDATA[Entropy, Vol. 18, Pages 225: Extreme Learning Machine for Multi-Label Classification]]>
http://www.mdpi.com/1099-4300/18/6/225
Extreme learning machine (ELM) techniques have received considerable attention in the computational intelligence and machine learning communities because of the significantly low computational time required for training new classifiers. ELM provides solutions for regression, clustering, binary classification, multiclass classification and so on, but not for multi-label learning. Multi-label learning deals with objects having multiple labels simultaneously, which widely exist in real-world applications. Therefore, a thresholding-method-based ELM is proposed in this paper to adapt ELM to multi-label classification, called extreme learning machine for multi-label classification (ELM-ML). ELM-ML outperforms other multi-label classification methods on several standard data sets in most cases, especially for applications with only a small labeled data set. Entropy 2016, 18(6), 225; doi: 10.3390/e18060225 (Article; Xia Sun, Jingting Xu, Changmeng Jiang, Jun Feng, Su-Shing Chen and Feijuan He)<![CDATA[Entropy, Vol. 18, Pages 223: Entropy Generation on Nanofluid Flow through a Horizontal Riga Plate]]>
http://www.mdpi.com/1099-4300/18/6/223
In this article, entropy generation on viscous nanofluid flow through a horizontal Riga plate has been examined. The present flow problem consists of the continuity, linear momentum, thermal energy, and nanoparticle concentration equations, which are simplified with the help of the Oberbeck–Boussinesq approximation. The resulting highly nonlinear coupled partial differential equations are solved numerically by means of the shooting method (SM). Expressions for the local Nusselt number and local Sherwood number are also taken into account and discussed with the help of tables. The physical influence of all the emerging parameters, such as the Brownian motion parameter, thermophoresis parameter, Brinkmann number, Richardson number, nanoparticle flux parameter, Lewis number and suction parameter, is demonstrated graphically. In particular, we discuss their influence on the velocity profile, temperature profile, nanoparticle concentration profile and entropy profile. Entropy 2016, 18(6), 223; doi: 10.3390/e18060223 (Article; Tehseen Abbas, Muhammad Ayub, Muhammad Bhatti, Mohammad Rashidi and Mohamed Ali)<![CDATA[Entropy, Vol. 18, Pages 222: Stimuli-Magnitude-Adaptive Sample Selection for Data-Driven Haptic Modeling]]>
http://www.mdpi.com/1099-4300/18/6/222
Data-driven haptic modeling is an emerging technique in which contact dynamics are simulated and interpolated based on a generic input-output matching model identified from data sensed during interaction with target physical objects. In data-driven modeling, selecting representative samples from a large dataset such that they efficiently and accurately describe the whole dataset has been a long-standing problem. This paper presents a new sample-selection algorithm in which the variance of the output is observed, so that representative input-output samples are selected in a way that ensures the quality of output prediction. The main idea is that representative input-output pairs are chosen so that the ratio of the standard deviation to the mean of the corresponding output group does not exceed an application-dependent threshold. This output- and standard-deviation-based sample selection is very effective in applications where the variance or relative error of the output must be kept within a certain threshold. The threshold is used to partition the input space using Binary Space Partitioning tree (BSP-tree) and k-means algorithms. We apply the new approach to a data-driven haptic modeling scenario in which the relative error of the output prediction must be less than a perceptual threshold. For evaluation, the proposed algorithm is compared to two state-of-the-art sample-selection algorithms for regression tasks on four kinds of haptics-related behavior–force datasets. The results show that the proposed algorithm outperforms the others in terms of output-approximation quality and computational complexity.
Entropy 2016, 18(6), 222; doi: 10.3390/e18060222 (Article, published 2016-06-07). Authors: Arsen Abdulali, Waseem Hassan, Seokhee Jeon.
<![CDATA[Entropy, Vol. 18, Pages 220: Zero Entropy Is Generic]]>
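The selection rule in the haptic-modeling abstract above (split a group until the ratio of the output standard deviation to the mean falls below a threshold, then keep one representative pair per group) can be sketched on 1-D data. The paper's actual BSP-tree/k-means partitioning is more elaborate; all names and the quadratic toy data here are hypothetical:

```python
import numpy as np

def select_representatives(x, y, cv_max=0.1):
    """Recursively halve the (sorted) input range until each group's
    coefficient of variation (std/mean) of outputs is below cv_max,
    then keep one representative input-output pair per group."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    reps = []

    def split(lo, hi):
        seg = y[lo:hi]
        mean = np.mean(seg)
        cv = np.std(seg) / abs(mean) if mean != 0 else np.inf
        if cv <= cv_max or hi - lo <= 2:
            mid = (lo + hi) // 2          # central pair represents the group
            reps.append((x[mid], y[mid]))
        else:
            m = (lo + hi) // 2            # BSP-style binary split
            split(lo, m)
            split(m, hi)

    split(0, len(x))
    return reps

# Toy behavior–force data: force grows quadratically with displacement
x = np.linspace(0.1, 1.0, 256)
y = 5.0 * x**2
reps = select_representatives(x, y, cv_max=0.05)
```

Flat regions of the response collapse into few representatives, while steep regions keep more, which is the intended effect of the std/mean stopping rule.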
http://www.mdpi.com/1099-4300/18/6/220
Dan Rudolph showed that for an amenable group Γ, the generic measure-preserving action of Γ on a Lebesgue space has zero entropy. Here, this is extended to nonamenable groups. In fact, the proof shows that every action is a factor of a zero-entropy action! This uses the strange phenomenon that, in the presence of nonamenability, entropy can increase under a factor map. The proof uses Seward’s recent generalization of Sinai’s Factor Theorem, the Gaboriau–Lyons result and my theorem that for every nonabelian free group, all Bernoulli shifts factor onto each other.
Entropy 2016, 18(6), 220; doi: 10.3390/e18060220 (Article, published 2016-06-04). Author: Lewis Bowen.
<![CDATA[Entropy, Vol. 18, Pages 221: Application of Entropy-Based Metrics to Identify Emotional Distress from Electroencephalographic Recordings]]>
http://www.mdpi.com/1099-4300/18/6/221
Recognition of emotions is still an unresolved challenge that could help improve current human-machine interfaces. Recently, nonlinear analysis of some physiological signals has been shown to play a more relevant role in this context than their traditional linear exploration. Thus, the present work introduces for the first time the application of three recent entropy-based metrics: sample entropy (SE), quadratic SE (QSE) and distribution entropy (DE), to discern between emotional states of calm and negative stress (also called distress). In recent years, distress has received growing attention because it is a common negative factor in the modern lifestyle of people in developed countries and, moreover, it may lead to serious mental and physical health problems. Specifically, 279 segments of 32-channel electroencephalographic (EEG) recordings from 32 subjects, in whom states of calm or negative stress were elicited, have been analyzed. The results show that QSE is the first single metric presented to date with the ability to identify negative stress. Indeed, this metric reported a discriminant ability of around 70%, which is only slightly lower than that obtained in some previous works. Nonetheless, those previous discriminant models were built from dozens or even hundreds of features and advanced classifiers to yield diagnostic accuracies of about 80%. Moreover, in agreement with previous neuroanatomical findings, QSE also revealed notable differences across all brain regions in the neural activation triggered by the two considered emotions.
Consequently, given these results, as well as the easy interpretation of QSE, this work opens a new avenue for the detection of emotional distress, which may yield new insights into the brain’s behavior under this negative emotion.
Entropy 2016, 18(6), 221; doi: 10.3390/e18060221 (Article, published 2016-06-03). Authors: Beatriz García-Martínez, Arturo Martínez-Rodrigo, Roberto Zangróniz Cantabrana, Jose Pastor García, Raúl Alcaraz.
<![CDATA[Entropy, Vol. 18, Pages 218: Single Neuron Stochastic Predictive PID Control Algorithm for Nonlinear and Non-Gaussian Systems Using the Survival Information Potential Criterion]]>
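Of the three metrics in the EEG abstract above, sample entropy has a compact definition: the negative log of the ratio of length-(m+1) to length-m template matches within a tolerance r. A minimal sketch (the relation QSE = SE + ln(2r) follows Lake's definition of quadratic sample entropy; the paper's EEG preprocessing and channel handling are omitted, and the sine test signal is illustrative):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A of length m+1, within tolerance r (Chebyshev distance)."""
    n = len(x)

    def count(m):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                    c += 1
        return c

    b = count(m)
    a = count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

signal = [math.sin(0.1 * i) for i in range(100)]
se = sample_entropy(signal, m=2, r=0.2)
qse = se + math.log(2 * 0.2)   # quadratic SE per Lake's definition (assumption noted above)
```

A regular signal like this sine yields a low value; noisier signals yield higher values, which is the property the stress/calm discrimination exploits.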
http://www.mdpi.com/1099-4300/18/6/218
This paper presents a novel stochastic predictive tracking control strategy for nonlinear and non-Gaussian stochastic systems based on a single-neuron controller structure in the framework of information theory. First, in order to characterize the randomness of the control system, the survival information potential (SIP), instead of entropy, is adopted to formulate the performance index; unlike entropy, SIP is not shift-invariant, i.e., its value varies with the location of the distribution. Then, the optimal weights of the single-neuron controller are obtained by minimizing the presented SIP-based predictive control criterion. Furthermore, the mean-square convergence of the proposed control algorithm is analyzed from an energy-conservation perspective. Finally, a numerical example is given to show the effectiveness of the proposed method.
Entropy 2016, 18(6), 218; doi: 10.3390/e18060218 (Article, published 2016-06-03). Authors: Mifeng Ren, Ting Cheng, Junghui Chen, Xinying Xu, Lan Cheng.
<![CDATA[Entropy, Vol. 18, Pages 217: Empirical Laws and Foreseeing the Future of Technological Progress]]>
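The single-neuron controller structure underlying the SIP abstract above combines the three incremental PID error terms through one weighted neuron. A minimal sketch with fixed weights follows; the paper instead adapts the weights online by minimizing the SIP criterion (not reproduced here), and the first-order plant, gains and weights are hypothetical:

```python
class SingleNeuronPID:
    """Incremental single-neuron PID: u(k) = u(k-1) + K * sum(w_i * x_i),
    with x1 = e - e1 (P increment), x2 = e (I), x3 = e - 2*e1 + e2 (D)."""

    def __init__(self, K=0.5, w=(0.4, 0.35, 0.25)):
        self.K, self.w = K, list(w)
        self.e1 = self.e2 = 0.0        # previous two tracking errors
        self.u = 0.0

    def step(self, e):
        x = (e - self.e1, e, e - 2 * self.e1 + self.e2)
        s = sum(abs(wi) for wi in self.w)
        wn = [wi / s for wi in self.w]  # normalized weights
        self.u += self.K * sum(wi * xi for wi, xi in zip(wn, x))
        self.e2, self.e1 = self.e1, e
        return self.u

# Track a unit setpoint on a toy first-order plant y(k+1) = 0.8*y(k) + 0.2*u(k)
ctrl, y, sp = SingleNeuronPID(), 0.0, 1.0
for _ in range(200):
    u = ctrl.step(sp - y)
    y = 0.8 * y + 0.2 * u
```

The integral term (x2) drives the steady-state error to zero; the paper's contribution is replacing hand-tuned weights with weights learned from the SIP performance index.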
http://www.mdpi.com/1099-4300/18/6/217
Moore’s law (ML) is one of many empirical expressions used to characterize natural and artificial phenomena. ML addresses technological progress and is expected to predict future trends. Yet, the “art” of predicting is often confused with the accurate fitting of trendlines to past events. Presently, data series from multiple sources are available for scientific and computational processing. The data can be described by means of mathematical expressions that, in some cases, follow simple expressions and empirical laws. However, extrapolation toward the future is viewed with skepticism by the scientific community, particularly in the case of phenomena involving complex behavior. This paper addresses these issues in the light of entropy and pseudo-state space. The statistical and dynamical techniques lead to a more assertive perspective on the adoption of a given candidate law.
Entropy 2016, 18(6), 217; doi: 10.3390/e18060217 (Article, published 2016-06-02). Authors: António Lopes, José Tenreiro Machado, Alexandra Galhano.
<![CDATA[Entropy, Vol. 18, Pages 219: Correction: Wolpert, D.H. The Free Energy Requirements of Biological Organisms; Implications for Evolution. Entropy 2016, 18, 138]]>
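The "accurate fitting of trendlines to past events" that the abstract above cautions against is, for Moore's law, just an ordinary least-squares fit of log2(transistor count) against year. A minimal sketch with approximate, illustrative counts (order-of-magnitude figures, not a curated dataset):

```python
import math

# Approximate, illustrative (year, transistor count) pairs
data = [(1971, 2.3e3), (1978, 2.9e4), (1985, 2.8e5), (1993, 3.1e6),
        (2000, 4.2e7), (2008, 7.3e8), (2016, 7.2e9)]

# Fit log2(count) = a*year + b by ordinary least squares
xs = [year for year, _ in data]
ys = [math.log2(count) for _, count in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx
doubling_time = 1.0 / a   # years per doubling implied by the trendline
```

The fit recovers a doubling time of roughly two years, which illustrates the paper's point: the trendline summarizes the past well, but extrapolating it forward is a separate, and far more questionable, step.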
http://www.mdpi.com/1099-4300/18/6/219
The following corrections should be made to the published paper [1]: [...]
Entropy 2016, 18(6), 219; doi: 10.3390/e18060219 (Correction, published 2016-06-02). Author: David Wolpert.