Editorial
p. 726–728
Received: 2 January 2014; Accepted: 15 January 2014 / Published: 27 January 2014
Abstract: In 2013, Entropy instituted the “Best Paper” award to recognize outstanding papers in the area of entropy and information studies published in Entropy [1]. We are pleased to announce the “Entropy Best Paper Award” for 2014. Nominations were selected by the Editor-in-Chief and designated Editorial Board Members from all the papers published in 2010. [...]
p. 1169–1177
Received: 24 February 2014; Accepted: 24 February 2014 / Published: 24 February 2014
Abstract: The editors of Entropy would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2013 [...]
Research
p. 627–644
Received: 5 December 2013; in revised form: 14 January 2014 / Accepted: 14 January 2014 / Published: 23 January 2014
Abstract: A numerical algorithm to compute the topological entropy of multimodal maps is proposed. This algorithm results from a closed formula containing the so-called min-max symbols, which are closely related to the kneading symbols. Furthermore, it simplifies a previous algorithm, also based on min-max symbols, which was originally proposed for twice differentiable multimodal maps. The new algorithm has been benchmarked against the old one with a number of multimodal maps, the results being reported in the paper. The comparison is favorable to the new algorithm, except in the unimodal case.
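The min-max-symbol algorithm itself is not given in the abstract above; as a rough, hypothetical illustration of the quantity being computed, the topological entropy of a piecewise-monotone interval map can be estimated by the classical lap-counting characterization, h(f) = lim (1/n) log lap(f^n). The grid size and the logistic-map example below are illustrative choices, not taken from the paper:

```python
import math

def lap_count(f, n, grid=100_000):
    """Count the monotone laps of the n-th iterate of f on [0, 1],
    sampled on a uniform grid."""
    ys = [i / grid for i in range(grid + 1)]
    for _ in range(n):
        ys = [f(y) for y in ys]
    laps = 1
    prev_sign = 0
    for a, b in zip(ys, ys[1:]):
        sign = (a < b) - (a > b)
        if sign != 0:
            if prev_sign != 0 and sign != prev_sign:
                laps += 1  # a turning point starts a new lap
            prev_sign = sign
    return laps

def topological_entropy(f, n=10):
    """Lap-counting estimate: h(f) ~ log(lap(f^n)) / n."""
    return math.log(lap_count(f, n)) / n

# Full logistic map: known topological entropy log 2.
logistic = lambda x: 4.0 * x * (1.0 - x)
h = topological_entropy(logistic, n=10)
```

For the full logistic map the n-th iterate has 2^n laps, so the estimate converges to log 2; the finite grid only has to be fine enough to resolve every lap.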
p. 645–674
Received: 2 August 2013; in revised form: 11 January 2014 / Accepted: 14 January 2014 / Published: 23 January 2014
Cited by 1
Abstract: The game of football demands new computational approaches to measure individual and collective performance. Understanding the phenomena involved in the game may foster the identification of strengths and weaknesses, not only of each player, but also of the whole team. The development of assertive quantitative methodologies constitutes a key element in sports training. In football, the predictability and stability inherent in the motion of a given player may be seen as one of the most important concepts to fully characterise the variability of the whole team. This paper characterises the predictability and stability levels of players during an official football match. A Fractional Calculus (FC) approach is used to define a player’s trajectory. By applying FC, one can benefit from newly considered modeling perspectives, such as the fractional coefficient, to estimate a player’s predictability and stability. This paper also formulates the concept of attraction domain, related to the tactical region of each player, inspired by stability theory principles. To compare the variability inherent in the player’s process variables (e.g., distance covered) and to assess his predictability and stability, entropy measures are considered. Experimental results suggest that the most predictable player is the goalkeeper while, conversely, the most unpredictable players are the midfielders. We also conclude that, despite his predictability, the goalkeeper is the most unstable player, while lateral defenders are the most stable during the match.
(This article belongs to the Special Issue Dynamical Systems)
Print Edition available
p. 675–698
Received: 17 October 2013; in revised form: 17 December 2013 / Accepted: 14 January 2014 / Published: 23 January 2014
Abstract: The paper presents a new approach to restoring the characteristics of randomized models under small amounts of input and output data. This approach proceeds by involving randomized static and dynamic models and estimating the probabilistic characteristics of their parameters. We consider static and dynamic models described by Volterra polynomials. The procedures of robust parametric and nonparametric estimation are constructed by exploiting the entropy concept based on the generalized informational Boltzmann and Fermi entropies.
p. 699–725
Received: 23 October 2013; in revised form: 18 November 2013 / Accepted: 17 December 2013 / Published: 24 January 2014
Abstract: I explore the reduction of thermodynamics to statistical mechanics by treating the former as a control theory: A theory of which transitions between states can be induced on a system (assumed to obey some known underlying dynamics) by means of operations from a fixed list. I recover the results of standard thermodynamics in this framework on the assumption that the available operations do not include measurements which affect subsequent choices of operations. I then relax this assumption and use the framework to consider the vexed questions of Maxwell’s demon and Landauer’s principle. Throughout, I assume rather than prove the basic irreversibility features of statistical mechanics, taking care to distinguish them from the conceptually distinct assumptions of thermodynamics proper.
p. 729–746
Received: 10 December 2013; in revised form: 23 January 2014 / Accepted: 26 January 2014 / Published: 7 February 2014
Abstract: In this paper, the problem of stabilizing a class of fractional-order chaotic systems with sector and dead-zone nonlinear inputs is investigated. The effects of model uncertainties and external disturbances are fully taken into account. Moreover, the bounds of both model uncertainties and external disturbances are assumed to be unknown in advance. To deal with the system’s nonlinear items and unknown bounded uncertainties, an adaptive fractional-order sliding mode (AFSM) controller is designed. Then, Lyapunov’s stability theory is used to prove the stability of the designed control scheme. Finally, two simulation examples are given to verify the effectiveness and robustness of the proposed control approach.
(This article belongs to the Special Issue Dynamical Systems)
Print Edition available
p. 747–769
Received: 9 December 2013; in revised form: 10 January 2014 / Accepted: 28 January 2014 / Published: 10 February 2014
Cited by 1
Abstract: We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
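As a sketch of one ingredient of the model above — fitting a two-parameter gamma distribution to marginal monthly totals and drawing simulated values — the following uses method-of-moments estimates and only Python's standard library. The rainfall figures are invented for illustration, and the maximum-entropy checkerboard copula that couples the months in the paper is not reproduced here:

```python
import random
import statistics

def fit_gamma_moments(data):
    """Method-of-moments fit of a two-parameter gamma distribution:
    shape k = mean^2 / variance, scale theta = variance / mean."""
    m = statistics.mean(data)
    v = statistics.variance(data)
    return m * m / v, v / m

# Hypothetical January rainfall totals (mm) over ten years:
jan = [120.0, 85.0, 210.0, 95.0, 160.0, 75.0, 140.0, 180.0, 60.0, 130.0]
k, theta = fit_gamma_moments(jan)

# Draw simulated monthly totals from the fitted marginal.
random.seed(1)
simulated = [random.gammavariate(k, theta) for _ in range(5)]
```

By construction the fitted distribution reproduces the sample mean (k·theta) and variance (k·theta²), which mirrors the abstract's point that the marginals match the observed monthly statistics.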
p. 770–788
Received: 22 November 2013; in revised form: 3 January 2014 / Accepted: 27 January 2014 / Published: 10 February 2014
Abstract: Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat-map-based spatial bit-level permutation strategy is proposed. Compared with those recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from a hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important tests such as key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which demonstrates the satisfactory security of the proposed scheme.
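The paper's 3D cat map and hyperchaotic key stream are not detailed in the abstract; as a minimal sketch of the general idea of a cat-map permutation stage, here is the classical 2-D Arnold cat map acting on pixel positions of an N×N image (the real scheme permutes bits across bit-planes, which this toy version does not attempt):

```python
def arnold_cat_map(image, iterations=1):
    """Permute an N×N image with the 2-D Arnold cat map:
    (x, y) -> (x + y mod N, x + 2y mod N), a measure-preserving bijection."""
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image

# A 4×4 "image" with distinct pixel values 0..15:
img = [[r * 4 + c for c in range(4)] for r in range(4)]
scrambled = arnold_cat_map(img, iterations=1)
```

Because the map matrix has determinant 1, every iteration is a permutation of the pixels; for N = 4 the map has period 3, so three iterations restore the original image — which also illustrates why cat-map-only scrambling must be combined with a diffusion stage to be cryptographically useful.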
p. 789–813
Received: 16 December 2013; in revised form: 28 January 2014 / Accepted: 29 January 2014 / Published: 10 February 2014
Abstract: The paper presents a framework for autonomous search for a diffusive emitting source of a tracer (e.g., aerosol, gas) in an environment with an unknown map of randomly placed and shaped obstacles. The measurements of the tracer concentration are sporadic, noisy and without directional information. The search domain is discretised and modelled by a finite two-dimensional lattice. The links in the lattice represent the traversable paths for emitted particles and for the searcher. A missing link in the lattice indicates a blocked path due to an obstacle. The searcher must simultaneously estimate the source parameters, the map of the search domain and its own location within the map. The solution is formulated in the sequential Bayesian framework and implemented as a Rao-Blackwellised particle filter with entropy-reduction motion control. The numerical results demonstrate the concept and its performance.
p. 814–824
Received: 5 November 2013; in revised form: 7 January 2014 / Accepted: 26 January 2014 / Published: 10 February 2014
Abstract: The minimum error entropy (MEE) estimation is concerned with the estimation of a certain random variable (the unknown variable) based on another random variable (the observation), so that the entropy of the estimation error is minimized. This estimation method may outperform the well-known minimum mean square error (MMSE) estimation, especially in non-Gaussian situations. There is an important performance bound on MEE estimation, namely the WS lower bound, which is computed as the conditional entropy of the unknown variable given the observation. Though it has been known in the literature for a considerable time, there has so far been little study of this performance bound. In this paper, we re-examine the WS lower bound. Some basic properties of the WS lower bound are presented, and the characterization of the Gaussian distribution using the WS lower bound is investigated.
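A small sketch of the bound discussed above: for discrete variables, the conditional entropy H(X|Y) can be computed directly from a joint distribution. The toy joint pmf below is invented purely for illustration:

```python
import math

def conditional_entropy(joint):
    """H(X|Y) in bits, from a joint pmf given as {(x, y): p}."""
    # Marginal of Y.
    py = {}
    for (x, y), p in joint.items():
        py[y] = py.get(y, 0.0) + p
    # H(X|Y) = -sum p(x, y) * log2 p(x|y).
    h = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / py[y])
    return h

# Toy joint distribution of an unknown X and an observation Y:
joint = {(0, 0): 0.4, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.4}
h_bound = conditional_entropy(joint)
```

Here H(X|Y) ≈ 0.722 bits, strictly below H(X) = 1 bit: observing Y reduces, but does not remove, the uncertainty about X, and no estimator can drive the error entropy below this bound.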
p. 825–853
Received: 20 November 2013; in revised form: 17 January 2014 / Accepted: 28 January 2014 / Published: 12 February 2014
Abstract: A generalized maximum entropy estimator is developed for the linear simultaneous equations model. Monte Carlo sampling experiments are used to evaluate the estimator’s performance in small and medium-sized samples, suggesting contexts in which the current generalized maximum entropy estimator is superior in mean square error to two- and three-stage least squares. Analytical results are provided relating to asymptotic properties of the estimator and associated hypothesis testing statistics. Monte Carlo experiments are also used to provide evidence on the power and size of test statistics. An empirical application is included to demonstrate the practical implementation of the estimator.
p. 854–869
Received: 13 October 2013; in revised form: 10 January 2014 / Accepted: 28 January 2014 / Published: 13 February 2014
Abstract: Feature or variable selection remains an unsolved problem, because evaluating the entire solution space is infeasible. Several algorithms based on heuristics have been proposed so far with successful results. However, these algorithms were not designed for very large datasets, whose memory and time demands make their execution impossible. This paper presents an implementation of a genetic algorithm (GA) that has been parallelized using the classical island approach, which also uses graphics processing units to speed up the computation of the fitness function. Special attention has been paid to the population evaluation, as well as to the migration operator of the parallel GA, which is not usually considered very significant; as the experiments will show, however, it is crucial for obtaining robust results.
(This article belongs to the Special Issue Big Data)
p. 870–884
Received: 28 October 2013; in revised form: 5 February 2014 / Accepted: 6 February 2014 / Published: 13 February 2014
Abstract: A series of high-entropy alloys (HEAs), Al_{x}NbTiMoV, was produced by a vacuum arc-melting method. Their microstructures and compressive mechanical behavior at room temperature were investigated. It has been found that a single solid-solution phase with a body-centered cubic (BCC) crystal structure forms in these alloys. Among these alloys, Al_{0.5}NbTiMoV reaches the highest yield strength (1,625 MPa), which should be attributed to the considerable solid-solution strengthening behavior. Furthermore, serration and crackling noises near the yielding point were observed in the NbTiMoV alloy, the first such phenomenon reported at room temperature in HEAs.
p. 885–894
Received: 25 November 2013; in revised form: 18 December 2013 / Accepted: 23 January 2014 / Published: 13 February 2014
Abstract: A numerical experiment on the ideal stochastic motion of a particle subject to conservative forces and Gaussian noise reveals that the path probability depends exponentially on the action. This distribution implies a fundamental principle generalizing the least action principle of Hamiltonian/Lagrangian mechanics and yields an extended formalism of mechanics for random dynamics. Within this theory, Liouville’s theorem of conservation of the phase density distribution must be modified to allow time evolution of the phase density, and consequently so must the Boltzmann H theorem. We argue that the gap between regular Newtonian dynamics and random dynamics was not considered in the criticisms of the H theorem.
p. 912–920
Received: 4 December 2013; in revised form: 22 January 2014 / Accepted: 7 February 2014 / Published: 14 February 2014
Abstract: The possibilities of different phase transitions to c-BN with Li_{3}N as a catalyst at high temperature and high pressure (1600–2200 K, 4.8–6.0 GPa) are analyzed in the framework of the second law of thermodynamics. The Gibbs free energy (∆G) of the three reactions that may occur in the Li_{3}N–BN system: h-BN + Li_{3}N→Li_{3}BN_{2}, h-BN→c-BN, and Li_{3}BN_{2}→c-BN + Li_{3}N, is calculated, with the influence of high temperature and high pressure on volume included. We show that the ∆G values of h-BN + Li_{3}N→Li_{3}BN_{2} and h-BN→c-BN are −35~−10 kJ·mol^{−1} and −25~−19 kJ·mol^{−1}, respectively. However, the ∆G of Li_{3}BN_{2}→c-BN + Li_{3}N can be positive or negative. The area formed by the positive data is a V-shaped area, which covers most of the c-BN growth V-shaped area. This confirms that Li_{3}BN_{2} is stable in the P–T area of c-BN synthesis, and that c-BN is probably transformed directly from h-BN. The analysis suggests that Li_{3}BN_{2} promotes the transition from h-BN to c-BN.
p. 921–942
Received: 21 October 2013; in revised form: 27 January 2014 / Accepted: 7 February 2014 / Published: 17 February 2014
Abstract: Estimating a discrepancy between two probability distributions from samples is an important task in statistics and machine learning. There are mainly two classes of discrepancy measures: distance measures based on the density difference, such as the L_{p} distances, and divergence measures based on the density ratio, such as the Φ-divergences. The intersection of these two classes is the L_{1} distance measure, and thus it can be estimated either from the density difference or from the density ratio. In this paper, we first show that the Bregman scores, which are widely employed for the estimation of probability densities in statistical data analysis, allow us to estimate the density difference and the density ratio directly, without separately estimating each probability distribution. We then theoretically elucidate the robustness of these estimators and present numerical experiments.
p. 943–952
Received: 30 September 2013; in revised form: 10 February 2014 / Accepted: 10 February 2014 / Published: 17 February 2014
Abstract: In most applications of optical computed tomography (OpCT), limited-view problems are often encountered, which can be solved to a certain extent with typical OpCT reconstruction algorithms. The concept of entropy, which first emerged in information theory, has been introduced into OpCT algorithms, such as maximum entropy (ME) algorithms and cross entropy (CE) algorithms; these have demonstrated their superiority over traditional OpCT algorithms, yet have their own limitations. A fused entropy (FE) algorithm, which follows an optimized criterion combining ME self-adaptively with CE, is proposed and investigated through comparisons with ME, CE and some traditional OpCT algorithms. Reconstructed results of several physical models show that this FE algorithm has good convergence and can achieve better precision than the other algorithms, which verifies the feasibility of FE as an approach to optimizing computation, not only for OpCT, but also for other image processing applications.
p. 953–967
Received: 2 January 2014; in revised form: 10 February 2014 / Accepted: 10 February 2014 / Published: 17 February 2014
Abstract: Entropy is the most used and often abused concept in science, but also in philosophy and society. Further confusion is produced by attempts to generalize entropy with similar, but not the same, concepts in other disciplines. The physical meaning of phenomenological, thermodynamic entropy is reasoned and elaborated by generalizing the Clausius definition to include generated heat, since it is irrelevant whether entropy is changed due to reversible heat transfer or irreversible heat generation. Irreversible, caloric heat transfer is introduced as complementing reversible heat transfer. It is also reasoned, and thus proven, why entropy cannot be destroyed but is always generated (and thus overall increased) locally and globally, at every space and time scale, without any exception. It is concluded that entropy is a thermal displacement (dynamic thermal-volume) of thermal energy due to absolute temperature as a thermal potential (dQ = TdS), and is thus associated with thermal heat and absolute temperature, i.e., with the distribution of thermal energy among thermal micro-particles in space. Entropy is an integral measure of (random) thermal energy redistribution (due to heat transfer and/or irreversible heat generation) within a material system structure in space, per absolute temperature level: dS = dQ_{Sys}/T = mC_{Sys}dT/T, thus a logarithmic integral function, with J/K unit. It may also be expressed as a measure of “thermal disorder”, being related to the logarithm of the number of all thermal, dynamic microstates W (their positions and momenta), S = k_{B}lnW, or to the sum of their logarithmic probabilities, S = −k_{B}∑p_{i}lnp_{i}, that correspond to, or are consistent with, the given thermodynamic macrostate. The number of thermal microstates W is correlated with the macro-properties temperature T and volume V for ideal gases.
A system’s form and/or functional order or disorder is not (thermal) energy order/disorder, and the former is not related to thermodynamic entropy. Expanding entropy to any type of disorder or information is a source of many misconceptions. Granted, there are certain benefits of simplified statistical descriptions for better comprehending the randomness of thermal motion and related physical quantities, but the limitations should be stated so that the generalizations are not overstretched and the real physics overlooked, or, worse, discredited.
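A minimal numeric illustration of the relation dS = dQ_Sys/T = mC_Sys dT/T quoted above, for an incompressible body with constant specific heat; the water example and its figures are illustrative, not taken from the paper:

```python
import math

def entropy_change(m, c, t1, t2):
    """ΔS = ∫ m·c dT/T = m·c·ln(T2/T1) in J/K,
    for mass m (kg), constant specific heat c (J/(kg·K)),
    and absolute temperatures t1, t2 (K)."""
    return m * c * math.log(t2 / t1)

# 1 kg of water (c ≈ 4186 J/(kg·K)) heated from 293.15 K to 353.15 K:
dS = entropy_change(1.0, 4186.0, 293.15, 353.15)
```

The logarithmic form follows directly from integrating dS = mC dT/T, matching the "logarithmic integral function, with J/K unit" description in the abstract.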
p. 968–989
Received: 3 June 2013; Accepted: 18 June 2013 / Published: 17 February 2014
Abstract: This paper synthesizes a recent line of work on automated predictive model making inspired by Rate-Distortion theory, in particular by the Information Bottleneck method. Predictive inference is interpreted as a strategy for efficient communication. The relationship to thermodynamic efficiency is discussed. The overall aim of this paper is to explain how this information theoretic approach provides an intuitive, overarching framework for predictive inference.
p. 990–1001
Received: 27 October 2013; in revised form: 19 December 2013 / Accepted: 26 January 2014 / Published: 17 February 2014
Abstract: To meet the requirements of quality-based image coding, an approach to predicting the quality of image coding based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and then the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, where the fitting constraint is to minimize the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict the image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than the image activity does. Therefore, it can be concluded that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems thanks to its simplicity.
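Since the fitted relationship in the abstract above hinges on PSNR as the objective quality indicator, here is a minimal PSNR computation; the pixel values are invented, and the paper's entropy-based fit itself is not reproduced:

```python
import math

def psnr(original, coded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values: 10·log10(peak² / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, coded)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak * peak / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
coded = [50, 56, 60, 68, 69, 62, 65, 72]
quality = psnr(orig, coded)
```

Small coding errors give a high PSNR (here about 45.7 dB); a prediction scheme like the one described maps an easily computed image statistic to this value without running the codec.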
p. 1002–1036
Received: 4 December 2013; Accepted: 30 January 2014 / Published: 18 February 2014
Abstract: Markov random field models are powerful tools for the study of complex systems. However, little is known about how the interactions between the elements of such systems are encoded, especially from an information-theoretic perspective. In this paper, our goal is to illuminate the connection between Fisher information, Shannon entropy, information geometry and the behavior of complex systems modeled by isotropic pairwise Gaussian Markov random fields. We propose analytical expressions to compute local and global versions of these measures using Besag’s pseudo-likelihood function, characterizing the system’s behavior through its Fisher curve, a parametric trajectory across the information space that provides a geometric representation for the study of complex systems in which temperature deviates from infinity. Computational experiments show how the proposed tools can be useful in extracting relevant information from complex patterns. The results quantify and support our main conclusion: in terms of information, moving towards higher entropy states (A → B) is different from moving towards lower entropy states (B → A), since the Fisher curves are not the same, given a natural orientation (the direction of time).
p. 1037–1046
Received: 25 November 2013; in revised form: 6 January 2014 / Accepted: 10 February 2014 / Published: 19 February 2014
Abstract: The asymmetric simple exclusion process (ASEP) has become a paradigmatic toy model of a non-equilibrium system, and much effort has been made in the past decades to compute its statistics exactly for given dynamical rules. Here, a different approach is developed; analogously to the equilibrium situation, we consider that the dynamical rules are not exactly known. Allowing the transition rates to vary, we show that the dynamical rules that maximize the entropy production and those that maximize the rate of variation of the dynamical entropy, known as the Kolmogorov-Sinai entropy, coincide with good accuracy. We study the dependence of this agreement on the size of the system and the couplings with the reservoirs, for the original ASEP and a variant with Langmuir kinetics.
p. 1047–1069
Received: 12 September 2013; in revised form: 5 February 2014 / Accepted: 7 February 2014 / Published: 19 February 2014
Abstract: In 1960, Rudolf E. Kalman created what is known as the Kalman filter, a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that, if the previous state of the system is known, it can be used as the best guess for the current state. This prior information is first propagated through the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are combined to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we believe about the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter and then adapt it to Markov systems. A simple example is shown for pedagogical purposes. We also show that, by using the Kalman assumptions or “constraints”, we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, of which the original Kalman filter is a special case. We further show that the variable relationship can be any function, and thus approximations such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants are special cases as well.
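The two-step logic described above (predict with the dynamics, then update with the measurement) can be sketched in the scalar case in a few lines; the random-walk model and the noise variances below are arbitrary illustration values, not the paper's derivation:

```python
def kalman_1d(z_list, x0, p0, q, r):
    """Scalar Kalman filter for a random-walk state:
    x_k = x_{k-1} + w (process variance q), z_k = x_k + v (measurement variance r)."""
    x, p = x0, p0
    estimates = []
    for z in z_list:
        # Predict (a priori): propagate the state and grow its uncertainty.
        p = p + q
        # Update (a posteriori): blend the prediction with the measurement.
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a quantity that is actually near 1.0:
zs = [1.1, 0.9, 1.05, 1.0, 0.95]
est = kalman_1d(zs, x0=0.0, p0=1.0, q=0.001, r=0.1)
```

The gain k weights the measurement against the prediction according to their relative uncertainties, which is exactly the blending of prior dynamics and new data that the abstract recasts in Bayesian and maximum-entropy terms.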
p. 1070–1088
Received: 25 October 2013; in revised form: 13 January 2014 / Accepted: 12 February 2014 / Published: 19 February 2014
Abstract: Taking into account the randomness of distributed generation and loads, this paper proposes a method for the vulnerability assessment of microgrids based on complex network theory and entropy theory, which can explain the influence of the inherent structural characteristics and the system’s internal energy distribution on the microgrid. A vulnerability assessment index is built, and an online reconfiguration model considering the vulnerability assessment of the microgrid is also established. An improved cellular bat algorithm is tested on the CERTS system to implement real-time reconfiguration quickly and accurately, providing a basis in both theory and practice.
p. 1089–1100
Received: 8 November 2013; in revised form: 21 January 2014 / Accepted: 27 January 2014 / Published: 19 February 2014
Abstract: In order to find more correlations between entropy and other related quantities, an analogical analysis is conducted between thermal science and other branches of physics. Potential energy in its various forms is the product of a conserved extensive quantity (for example, mass or electric charge) and an intensive quantity that is its potential (for example, gravitational potential or electrical voltage), while energy in a specific form is a dissipative quantity during an irreversible transfer process (for example, mechanical or electrical energy is dissipated as thermal energy). However, it has been shown that heat or thermal energy, like mass or electric charge, is conserved during heat transfer processes. When a heat transfer process is for object heating or cooling, the potential of the internal energy U is the temperature T and its potential “energy” is UT/2 (called entransy, the simplified expression of the thermomass potential energy); when a heat transfer process is for heat-work conversion, the potential of the internal energy U is (1 − T_{0}/T), and the available potential energy of a system in reversible heat interaction with the environment is U − U_{0} − T_{0}(S − S_{0}); then T_{0}/T and T_{0}(S − S_{0}) are the unavailable potential and the unavailable potential energy of the system, respectively. Hence, entropy is related to the unavailable potential energy per unit environmental temperature for heat-work conversion during reversible heat interaction between the system and its environment. Entropy transfer, like other forms of potential energy transfer, is the product of the heat and its potential, the reciprocal of temperature, although it is in the form of the quotient of the heat and the temperature. Thus, the physical essence of entropy transfer is the unavailable potential energy transfer per unit environmental temperature.
Entropy is a non-conserved, extensive, state quantity of a system, and entropy generation in an irreversible heat transfer process is proportional to the destruction of available potential energy.
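A small numeric sketch of the decomposition described above — available potential energy U − U_0 − T_0(S − S_0) versus the unavailable part T_0(S − S_0) — for an incompressible body exchanging heat reversibly with its environment. The water example and its figures are illustrative assumptions, not taken from the paper:

```python
import math

def unavailable_energy(m, c, t, t0):
    """T0·(S − S0) for an incompressible body between temperature t and
    environment temperature t0, with constant specific heat c (J/(kg·K))."""
    return t0 * m * c * math.log(t / t0)

def available_energy(m, c, t, t0):
    """U − U0 − T0(S − S0): the maximum work obtainable from reversible
    heat interaction with the environment."""
    return m * c * (t - t0) - unavailable_energy(m, c, t, t0)

# 1 kg of water at 353.15 K against a 293.15 K environment:
m, c, t, t0 = 1.0, 4186.0, 353.15, 293.15
avail = available_energy(m, c, t, t0)
unavail = unavailable_energy(m, c, t, t0)
```

Of the roughly 251 kJ of internal energy above the environment state, only about 23 kJ is available for work; the rest is the unavailable part tied to the entropy difference, as the abstract argues.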
p. 1101–1121
Received: 23 November 2013; in revised form: 24 January 2014 / Accepted: 17 February 2014 / Published: 20 February 2014
Abstract: To account for the lagging behavior in heat conduction observed over the past two decades, this paper first theoretically excludes the possibility that the underlying thermal inertia is a result of the time delay in heat diffusion. Instead, we verify in experiments the electrothermal analogy, wherein thermal inertia is parameterized by a thermal inductance that leads to hyperbolic heat conduction. The thermal hyperbolicity exhibits a special frequency response in the Bode plot, wherein the amplitude ratio stays flat after crossing a certain frequency, as opposed to Fourier heat conduction. We apply this property to design an instrument that reliably identifies the thermal inductances of some materials in the frequency domain. The instrument is embedded with a DSP-based frequency synthesizer capable of modulating frequencies at very high resolution. Thermal inertia implies a new possibility for energy storage, in analogy to inductive energy storage in electricity or mechanics.
p. 1123–1133
Received: 16 December 2013; in revised form: 11 February 2014 / Accepted: 13 February 2014 / Published: 24 February 2014
Abstract: There are two entropy-based methods to deal with linear inverse problems, which we shall call the ordinary method of maximum entropy (OME) and the method of maximum entropy in the mean (MEM). Not only does MEM use OME as a stepping stone, it also allows for greater generality: first, because it allows convex constraints to be included in a natural way, and second, because it allows (additive) measurement errors to be incorporated and estimated from the data. Here we shall see both methods in action in a specific example. We shall solve the discretized version of the problem by two variants of MEM and directly with OME. We shall see that OME is actually a particular instance of MEM, when the reference measure is a Poisson measure.
p. 1134–1168
Received: 9 August 2013; in revised form: 18 December 2013 / Accepted: 11 February 2014 / Published: 24 February 2014
 Cited by 1  PDF Fulltext (4517 KB)  HTML Fulltext  XML Fulltext
Abstract: We report on a comprehensive signal processing procedure for very low signal levels for the measurement of neutral deuterium in the local interstellar medium from a spacecraft in Earth orbit. The deuterium measurements were performed with the IBEX-Lo camera on NASA’s Interstellar Boundary Explorer (IBEX) satellite. Our analysis technique for these data consists of constructing a mass relation in three-dimensional time-of-flight space to accurately determine the position of the predicted D events, precisely modeling the tail of the H events in the region where the H tail events lie near the expected D events, and then separating the H tail from the observations to extract the very faint D signal. This interstellar D signal, which is expected to amount to a few counts per year, is extracted from a strong terrestrial background signal consisting of sputter products from the sensor’s conversion surface. As a reference, we accurately measure the terrestrial D/H ratio in these sputtered products and then discriminate against this terrestrial background source. During the three years of the mission when the deuterium signal was visible to IBEX, the observation geometry and orbit allowed for a total observation time of 115.3 days. Because of the spinning of the spacecraft and the stepping through eight energy channels, the actual observing time for the interstellar wind was only 1.44 days. With the optimised data analysis, we found three counts that could be attributed to interstellar deuterium. These results update our earlier work.
Review
p. 895–911
Received: 24 September 2013; in revised form: 9 January 2014 / Accepted: 5 February 2014 / Published: 14 February 2014
 Cited by 1  PDF Fulltext (847 KB)  HTML Fulltext  XML Fulltext
Abstract: The effects of metallurgical factors on the aqueous corrosion behavior of high-entropy alloys (HEAs) are reviewed in this article. Alloying (e.g., Al and Cu) and processing (e.g., heat treatments) effects on the aqueous corrosion behavior of HEAs, including passive-film formation, galvanic corrosion, and pitting corrosion, are discussed in detail. Corrosion rates of HEAs are calculated from electrochemical measurements and by the weight-loss method. Available experimental corrosion data for HEAs in two common solutions [sulfuric acid (0.5 M H₂SO₄) and salt water (3.5 weight percent, wt.%, NaCl)], such as the corrosion potential (E_corr), corrosion current density (i_corr), pitting potential (E_pit), and passive region (ΔE), are summarized and compared with those of conventional corrosion-resistant alloys. Possible directions for future work on the corrosion behavior of HEAs are suggested.
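For context on how corrosion rates are obtained from electrochemical measurements such as i_corr, the standard Faraday's-law conversion (in the ASTM G102 form; the iron numbers below are textbook values used as an assumed example, not HEA data from this review) turns a measured corrosion current density into a penetration rate:

```python
# Corrosion rate from corrosion current density via Faraday's law
# (ASTM G102 form). The constant K converts (uA/cm^2 * g/mol) / (g/cm^3)
# into mm/year.
K = 3.27e-3  # mm * g / (uA * cm * yr)

def corrosion_rate_mm_per_yr(i_corr_uA_cm2, equiv_weight_g, density_g_cm3):
    """Penetration rate in mm/year.

    i_corr_uA_cm2  : corrosion current density, uA/cm^2 (from polarization)
    equiv_weight_g : equivalent weight = atomic weight / electrons transferred
    density_g_cm3  : alloy density
    """
    return K * i_corr_uA_cm2 * equiv_weight_g / density_g_cm3

# Assumed example: pure iron, Fe -> Fe2+ (EW = 55.85 / 2), rho = 7.87 g/cm^3
rate = corrosion_rate_mm_per_yr(1.0, 55.85 / 2, 7.87)
print(f"{rate:.4f} mm/yr")   # about 0.0116 mm/yr at i_corr = 1 uA/cm^2
```

For alloys such as HEAs, the equivalent weight is a composition-weighted average over the dissolving elements, which is one reason alloying additions shift the computed rate.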
Other
p. 1122
Published: 21 February 2014
 PDF Fulltext (18 KB)  HTML Fulltext  XML Fulltext
Abstract: The editors were made aware that a paper published in Entropy in 2004 [1] may have plagiarized an earlier paper by Roman Hric published in 2000 [2]. After checking with specialized plagiarism software, we found that this claim is indeed correct and almost the entire paper is a verbatim copy of the earlier one. After confirmation of this fact, the editors of Entropy have decided to retract the paper immediately. [...]