Entropy, Volume 18, Issue 7 (July 2016) – 36 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Symmetric Fractional Diffusion and Entropy Production
by Janett Prehl, Frank Boldt, Karl Heinz Hoffmann and Christopher Essex
Entropy 2016, 18(7), 275; https://doi.org/10.3390/e18070275 - 23 Jul 2016
Cited by 12 | Viewed by 5143
Abstract
The discovery of the entropy production paradox (Hoffmann et al., 1998) raised basic questions about the nature of irreversibility in the regime between diffusion and waves. First studied in the form of spatial movements of moments of H functions, pseudo propagation is the pre-limit propagation-like movement of skewed probability density functions (PDFs) in the domain between the wave and diffusion equations, which goes over to classical partial differential equation propagation of characteristics in the wave limit. Many of the strange properties that occur in this extraordinary regime were thought to be connected in some manner to this form of proto-movement. This paper eliminates pseudo propagation by employing a similar evolution equation that imposes spatial unimodal symmetry on evolving PDFs. Contrary to initial expectations, familiar peculiarities emerge despite the imposed symmetry, but they have a distinct character. Full article
(This article belongs to the Section Thermodynamics)
Article
Efficiency Bound of Local Z-Estimators on Discrete Sample Spaces
by Takafumi Kanamori
Entropy 2016, 18(7), 273; https://doi.org/10.3390/e18070273 - 23 Jul 2016
Cited by 2 | Viewed by 3837
Abstract
Many statistical models over a discrete sample space face the computational difficulty of evaluating the normalization constant, which makes the maximum likelihood estimator impractical. To circumvent this difficulty, alternative estimators such as the pseudo-likelihood and composite likelihood, which require only a local computation over the sample space, have been proposed. In this paper, we present a theoretical analysis of such localized estimators. The asymptotic variance of localized estimators depends on the neighborhood system on the sample space. We investigate the relation between the neighborhood system and the estimation accuracy of localized estimators. Moreover, we derive the efficiency bound. The theoretical results are applied to investigate the statistical properties of existing estimators and some extended ones. Full article
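For context, a minimal sketch of one such localized estimator: the negative log pseudo-likelihood of a pairwise binary (Ising-type) model, which replaces the intractable normalization constant with per-variable conditional probabilities. The model, parameter values, and data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def neg_log_pseudolikelihood(J, h, X):
    """Negative log pseudo-likelihood of an Ising-type model with spins in {-1, +1}.

    Each conditional P(x_i | x_{-i}) = sigmoid(2 * x_i * (h_i + sum_j J_ij x_j)) needs
    only a local computation, so the global normalization constant never appears.
    J: symmetric coupling matrix, zero diagonal; h: external fields; X: (n_samples, n_vars).
    """
    fields = X @ J.T + h                      # local field at every node, for every sample
    logits = 2.0 * X * fields                 # x_i times its local field, scaled by 2
    return float(np.sum(np.logaddexp(0.0, -logits)))   # -sum log sigmoid(logits)

# toy usage: evaluate the objective for a small random model and random data
rng = np.random.default_rng(0)
n = 5
J = rng.normal(scale=0.2, size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)
X = rng.choice([-1.0, 1.0], size=(200, n))
print(neg_log_pseudolikelihood(J, h, X))
```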
Article
An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments
by Xiong Luo, Ji Liu, Dandan Zhang, Weiping Wang and Yueqin Zhu
Entropy 2016, 18(7), 274; https://doi.org/10.3390/e18070274 - 22 Jul 2016
Cited by 4 | Viewed by 4469
Abstract
With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via many sensor nodes to meet the requirements of cloud services. Generally, these sensor nodes collect data and send them to a sink node, where end-users can query the information and run cloud applications. A main disadvantage of sensor nodes is their limited physical resources, in particular limited memory and power. In order to work around this limitation, it is necessary to develop an efficient data prediction method for WSNs. To this end, this article proposes an entropy-based learning scheme for data prediction based on the kernel least mean square (KLMS) algorithm, which reduces redundant data transmission between the sensor nodes and the sink node while keeping errors within an acceptable range. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronized on both sides. Specifically, the kernel-based method adjusts its coefficients adaptively with every input, achieving better performance with smaller prediction errors, while information entropy is employed to remove those data that may cause relatively large errors. E-KLMS effectively resolves the trade-off between prediction accuracy and computational effort while greatly simplifying the training structure compared with other data prediction approaches. Moreover, the kernel-based method and the entropy technique together improve accuracy and reduce errors. Experiments with real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the results show the advantages of our method in prediction accuracy and computational time. Full article
(This article belongs to the Special Issue Information Theoretic Learning)
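As a rough illustration of the building block named in the abstract, a minimal kernel least mean square (KLMS) predictor is sketched below; the Gaussian kernel width, step size, and sliding-window setup are illustrative assumptions, and the entropy-based redundancy filter and sink-side synchronization of E-KLMS are not shown.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel between two input vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel least mean square predictor: a growing kernel expansion adapted by the error."""
    def __init__(self, step_size=0.5, sigma=1.0):
        self.eta, self.sigma = step_size, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gaussian_kernel(c, x, self.sigma)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, d):
        """Adapt on one (input, desired output) pair; returns the prediction error."""
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(self.eta * e)   # new center, weighted by the prediction error
        return e

# toy usage: one-step-ahead prediction of a noisy sensor reading from a sliding window
rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(300)) + 0.05 * rng.standard_normal(300)
model, window = KLMS(step_size=0.5, sigma=0.5), 5
for t in range(window, len(series)):
    model.update(series[t - window:t], series[t])
```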
Article
Thermal Characteristic Analysis and Experimental Study of a Spindle-Bearing System
by Li Wu and Qingchang Tan
Entropy 2016, 18(7), 271; https://doi.org/10.3390/e18070271 - 22 Jul 2016
Cited by 20 | Viewed by 8776
Abstract
In this paper, a thermo-mechanical coupling analysis model of the spindle-bearing system, based on Hertz's contact theory and a point-contact non-Newtonian thermal elastohydrodynamic lubrication (EHL) theory, is developed. In this model, the effects of preload, centrifugal force, the gyroscopic moment, and the lubrication state of the spindle-bearing system are considered. Based on heat transfer theory, the mathematical model for the temperature field of the spindle system is developed, and the effect of the spindle cooling system on the spindle temperature distribution is analyzed. The theoretical simulations and the experimental results indicate that the bearing preload has a great effect on frictional heat generation, and the cooling fluid has a great effect on the heat balance of the spindle system. If a steady-state heat balance between the frictional heat generation and the cooling system cannot be reached, thermally-induced preload will lead to a further increase of the frictional heat generation and then cause thermal failure of the spindle. Full article
(This article belongs to the Special Issue Entropy Application in Tribology)
Article
Toward Improved Understanding of the Physical Meaning of Entropy in Classical Thermodynamics
by Ben Akih-Kumgeh
Entropy 2016, 18(7), 270; https://doi.org/10.3390/e18070270 - 22 Jul 2016
Cited by 3 | Viewed by 6683
Abstract
The year 2015 marked the 150th anniversary of “entropy” as a concept in classical thermodynamics. Despite its central role in the mathematical formulation of the Second Law and most of classical thermodynamics, its physical meaning continues to be elusive and confusing. This is especially true when we seek a reconstruction of the classical thermodynamics of a system from the statistical behavior of its constituent microscopic particles or vice versa. This paper sketches the classical definition by Clausius and offers a modified mathematical definition that is intended to improve its conceptual meaning. In the modified version, the differential of specific entropy appears as a non-dimensional energy term that captures the invigoration or reduction of microscopic motion upon addition or withdrawal of heat from the system. It is also argued that heat transfer is a better model process to illustrate entropy; the canonical heat engines and refrigerators often used to illustrate this concept are not very relevant to new areas of thermodynamics (e.g., thermodynamics of biological systems). It is emphasized that entropy changes, as invoked in the Second Law, are necessarily related to the non-equilibrium interactions of two or more systems that might have initially been in thermal equilibrium but at different temperatures. The overall direction of entropy increase indicates the direction of naturally occurring heat transfer processes in an isolated system that consists of internally interacting (non-isolated) subsystems. We discuss the implication of the proposed modification on statements of the Second Law, interpretation of entropy in statistical thermodynamics, and the Third Law. Full article
Article
Mechanothermodynamic Entropy and Analysis of Damage State of Complex Systems
by Leonid A. Sosnovskiy and Sergei S. Sherbakov
Entropy 2016, 18(7), 268; https://doi.org/10.3390/e18070268 - 20 Jul 2016
Cited by 55 | Viewed by 7967
Abstract
Mechanics and thermodynamics each consider the evolution of complex systems, including the Universe, from their own perspectives. The classical thermodynamic theory of evolution has one important drawback: it predicts an inevitable heat death of the Universe, which is unlikely to take place according to modern understanding. Attempts to create a generalized theory of evolution within mechanics were unsuccessful, since mechanical equations do not discriminate between future and past. It is natural that the union of mechanics and thermodynamics has been difficult to realize, since they are based on different methodologies. We attempt to propose a generalized theory of evolution based on the concept of tribo-fatigue entropy. The essence of the proposed approach is that tribo-fatigue entropy is determined by the processes of damageability conditioned by thermodynamic and mechanical effects that change the states of any system. The law of entropy increase is formulated analytically in general form. A mechanothermodynamical function is constructed for the specific case of fatigue damage of materials, for temperatures varying from 3 K to 0.8 of the melting temperature, based on the analysis of 136 experimental results. Full article
(This article belongs to the Special Issue Exploring the Second Law of Thermodynamics)
Article
Novel Criteria for Deterministic Remote State Preparation via the Entangled Six-Qubit State
by Gang Xu, Xiu-Bo Chen, Zhao Dou, Jing Li, Xin Liu and Zongpeng Li
Entropy 2016, 18(7), 267; https://doi.org/10.3390/e18070267 - 20 Jul 2016
Cited by 20 | Viewed by 4325
Abstract
In this paper, our concern is to design criteria for deterministic remote state preparation of an arbitrary three-particle state via a genuinely entangled six-qubit state. First, we put forward two schemes, in the real and the complex Hilbert space, respectively. Using an appropriate eight-qubit measurement basis, the remote three-qubit preparation is completed with unit success probability. Departing from previous research, our protocol has the salient feature that the serviceable measurement basis contains only the initial coefficients and their conjugate values. By utilizing the permutation group, it is convenient to describe the permutation relationship between coefficients. Second, our ideas and methods can also be generalized to the preparation of an arbitrary N-particle state in the complex case by taking advantage of Bell states as quantum resources. More importantly, the criteria for preparation with 100% success probability in complex Hilbert space are summarized. Third, the classical communication costs of our scheme are calculated to determine the classical resources required. It is also worth mentioning that our protocol has higher efficiency and lower resource costs compared with previous proposals. Full article
(This article belongs to the Special Issue Quantum Information 2016)
Article
Complex Dynamics of a Continuous Bertrand Duopoly Game Model with Two-Stage Delay
by Junhai Ma and Fengshan Si
Entropy 2016, 18(7), 266; https://doi.org/10.3390/e18070266 - 20 Jul 2016
Cited by 47 | Viewed by 5090
Abstract
This paper studies a continuous Bertrand duopoly game model with two-stage delay. Our aim is to investigate the influence of delay and weight on the complex dynamic characteristics of the system. We calculate the bifurcation point of the system with respect to the delay parameter. In addition, the dynamic properties of the system are simulated by means of the power spectrum, attractor, bifurcation diagram, largest Lyapunov exponent, 3D surface chart, 4D cubic chart, 2D parameter bifurcation diagram, and 3D parameter bifurcation diagram. The results show that the stability of the system depends on the delay and weight; in order to maintain price stability and ensure profit, the firms must keep the parameters within a reasonable region. Otherwise, the system loses stability and may even fall into chaos, causing price fluctuations that prevent the firms from being profitable. Finally, chaos control of the system is carried out by a control strategy of state-variable feedback and parameter variation, which effectively avoids the damage chaos does to the economic system. Therefore, the results of this study have important practical significance for making decisions with multi-stage delays in oligopoly firms. Full article
Article
Noise Suppression in 94 GHz Radar-Detected Speech Based on Perceptual Wavelet Packet
by Fuming Chen, Chuantao Li, Qiang An, Fulai Liang, Fugui Qi, Sheng Li and Jianqi Wang
Entropy 2016, 18(7), 265; https://doi.org/10.3390/e18070265 - 19 Jul 2016
Cited by 10 | Viewed by 4624
Abstract
A millimeter wave (MMW) radar sensor is employed in our laboratory to detect human speech because it provides a new non-contact speech acquisition method that is suitable for various applications. However, the speech detected by the radar sensor is often degraded by combined noise. This paper proposes a new perceptual wavelet packet method that is able to enhance the speech acquired using a 94 GHz MMW radar system by suppressing the noise. The process is as follows. First, the radar speech signal is decomposed using a perceptual wavelet packet. Then, an adaptive wavelet threshold and new modified thresholding function are employed to remove the noise from the detected speech. The results obtained from the speech spectrograms, listening tests and objective evaluation show that the new method significantly improves the performance of the detected speech. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
Article
The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas
by Jesús E. García, Verónica A. González-López and Roger B. Nelsen
Entropy 2016, 18(7), 264; https://doi.org/10.3390/e18070264 - 19 Jul 2016
Cited by 3 | Viewed by 4570
Abstract
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
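For orientation, a hedged sketch of the objective being maximized, assuming the standard continuous Tsallis–Havrda–Chavát entropy applied to a copula density c(u, v) on the unit square; the paper's exact normalization may differ.

```latex
H_q(c) = \frac{1}{q-1}\left(1 - \int_0^1\!\!\int_0^1 c(u,v)^q \, du\, dv\right),
\qquad
H_2(c) = 1 - \int_0^1\!\!\int_0^1 c(u,v)^2 \, du\, dv .
```

Under this reading, maximizing the entropy for q = 2 amounts to minimizing the squared L2 norm of the copula density subject to the uniform-marginal constraints.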
Article
Positive Sofic Entropy Implies Finite Stabilizer
by Tom Meyerovitch
Entropy 2016, 18(7), 263; https://doi.org/10.3390/e18070263 - 18 Jul 2016
Cited by 6 | Viewed by 3648
Abstract
We prove that, for a measure preserving action of a sofic group with positive sofic entropy, the stabilizer is finite on a set of positive measure. This extends the results of Weiss and Seward for amenable groups and free groups, respectively. It follows that the action of a sofic group on its subgroups by inner automorphisms has zero topological sofic entropy, and that a faithful action that has completely positive sofic entropy must be free. Full article
(This article belongs to the Special Issue Entropic Properties of Dynamical Systems)
Article
Greedy Algorithms for Optimal Distribution Approximation
by Bernhard C. Geiger and Georg Böcherer
Entropy 2016, 18(7), 262; https://doi.org/10.3390/e18070262 - 18 Jul 2016
Cited by 2 | Viewed by 4377
Abstract
The approximation of a discrete probability distribution t by an M-type distribution p is considered. The approximation error is measured by the informational divergence D(t‖p), which is an appropriate measure, e.g., in the context of data compression. Properties of the optimal approximation are derived and bounds on the approximation error are presented, which are asymptotically tight. A greedy algorithm is proposed that solves this M-type approximation problem optimally. Finally, it is shown that different instantiations of this algorithm minimize the informational divergence D(p‖t) or the variational distance ‖p − t‖₁. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
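A minimal sketch of one natural greedy instantiation, consistent with the abstract but not necessarily the exact algorithm of the paper: minimizing D(t‖p) over M-type distributions p = k/M is equivalent to maximizing the separable concave objective Σ_i t_i log k_i, for which assigning the M probability units one at a time to the largest marginal gain is optimal.

```python
import numpy as np

def greedy_m_type(t, M):
    """Greedily approximate a distribution t by an M-type distribution p = k / M.

    D(t||p) = sum_i t_i log(t_i M / k_i) differs from -sum_i t_i log k_i only by a
    constant, so the divergence is minimized by greedily growing the integer counts k_i.
    """
    t = np.asarray(t, dtype=float)
    support = np.flatnonzero(t > 0)
    if M < support.size:
        raise ValueError("M must be at least the support size of t")
    k = np.zeros_like(t)
    k[support] = 1                           # every support point needs k_i >= 1
    for _ in range(M - support.size):
        gain = np.full_like(t, -np.inf)
        gain[support] = t[support] * (np.log(k[support] + 1) - np.log(k[support]))
        k[np.argmax(gain)] += 1              # give one more unit where it reduces D(t||p) most
    return k / M

# toy usage: components of p are multiples of 1/16 and sum to 1
t = np.array([0.55, 0.30, 0.10, 0.05])
print(greedy_m_type(t, M=16))
```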
Review
Three Strategies for the Design of Advanced High-Entropy Alloys
by Ming-Hung Tsai
Entropy 2016, 18(7), 252; https://doi.org/10.3390/e18070252 - 15 Jul 2016
Cited by 65 | Viewed by 11278
Abstract
High-entropy alloys (HEAs) have recently become a vibrant field of study in the metallic materials area. In the early years, the design of HEAs was largely exploratory. The selection of compositions was somewhat arbitrary, and there was typically no specific goal to be achieved in the design. Very recently, however, the development of HEAs has gradually entered a different stage. Unlike the early alloys, HEAs developed nowadays are usually designed to meet clear goals, and have carefully chosen components, deliberately introduced multiple phases, and tailored microstructures. These alloys are referred to as advanced HEAs. In this paper, the progress in advanced HEAs is briefly reviewed. The design strategies for these materials are examined and are classified into three categories. Representative works in each category are presented. Finally, important issues and future directions in the development of advanced HEAs are pointed out and discussed. Full article
(This article belongs to the Special Issue High-Entropy Alloys and High-Entropy-Related Materials)
Article
Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices
by Luis L. Bonilla and Manuel Carretero
Entropy 2016, 18(7), 260; https://doi.org/10.3390/e18070260 - 14 Jul 2016
Viewed by 4026
Abstract
Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions of the two types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research avenues are discussed. Full article
(This article belongs to the Special Issue Maximum Entropy Principle and Semiconductors)
Article
Coupled Thermoelectric Devices: Theory and Experiment
by Jaziel A. Rojas, Iván Rivera, Aldo Figueroa and Federico Vázquez
Entropy 2016, 18(7), 255; https://doi.org/10.3390/e18070255 - 14 Jul 2016
Cited by 4 | Viewed by 4748
Abstract
In this paper, we address theoretically and experimentally the optimization problem of the heat transfer occurring in two coupled thermoelectric devices. A simple experimental setup is used. The optimization parameters are the applied electric currents. When one thermoelectric is analysed, the temperature difference ΔT between the thermoelectric boundaries shows a parabolic profile with respect to the applied electric current. This behaviour agrees qualitatively with the corresponding experimental measurement. The global entropy generation shows a monotonic increase with the electric current. In the case of two coupled thermoelectric devices, elliptic isocontours for ΔT are obtained when applying an electric current through each of the thermoelectrics. The isocontours also fit well with measurements. An optimal figure of merit is found for a specific set of values of the applied electric currents. The relationship between entropy generation and the thermal figure of merit is studied. It is shown that, given a value of the thermal figure of merit, the device can be operated in a state of minimum entropy production. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
Article
Ensemble Equivalence for Distinguishable Particles
by Antonio Fernández-Peralta and Raúl Toral
Entropy 2016, 18(7), 259; https://doi.org/10.3390/e18070259 - 13 Jul 2016
Cited by 2 | Viewed by 5444
Abstract
Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann’s correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive. Full article
(This article belongs to the Section Statistical Physics)
Article
Structures in Sound: Analysis of Classical Music Using the Information Length
by Schuyler Nicholson and Eun-jin Kim
Entropy 2016, 18(7), 258; https://doi.org/10.3390/e18070258 - 13 Jul 2016
Cited by 18 | Viewed by 6823
Abstract
We show that music is represented by fluctuations away from the minimum path through statistical space. Our key idea is to envision music as the evolution of a non-equilibrium system and to construct probability distribution functions (PDFs) from musical instrument digital interface (MIDI) files of classical compositions. Classical music is then viewed through the lens of generalized position and velocity, based on the Fisher metric. Through these statistical tools we discuss a way to quantitatively discriminate between music and noise. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences)
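A minimal sketch of an information-length calculation over a sequence of discrete PDFs (e.g., built from sliding windows over MIDI-derived pitch data); the first-order Fisher-metric line element, the windowing, and the stand-in data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def information_length(pdfs, eps=1e-12):
    """Accumulate the Fisher-metric path length along a sequence of discrete PDFs.

    pdfs: array of shape (T, n_bins), each row a normalized histogram. The line element
    between consecutive PDFs is ds^2 = sum_i (p_i(t+1) - p_i(t))^2 / p_i(t), a first-order
    discretization of the statistical (Fisher) metric on probability distributions.
    """
    p = np.asarray(pdfs, dtype=float) + eps
    p /= p.sum(axis=1, keepdims=True)
    dp = np.diff(p, axis=0)
    ds = np.sqrt(np.sum(dp ** 2 / p[:-1], axis=1))
    return float(np.sum(ds))

# toy usage: pitch-class PDFs from successive windows of a stand-in pitch sequence
rng = np.random.default_rng(1)
pitches = rng.integers(0, 12, size=2000)                 # stand-in for MIDI pitch classes
windows = pitches.reshape(-1, 100)
pdfs = np.stack([np.bincount(w, minlength=12) / w.size for w in windows])
print(information_length(pdfs))
```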
Article
Using Wearable Accelerometers in a Community Service Context to Categorize Falling Behavior
by Chia-Hsuan Lee, Tien-Lung Sun, Bernard C. Jiang and Victor Ham Choi
Entropy 2016, 18(7), 257; https://doi.org/10.3390/e18070257 - 13 Jul 2016
Cited by 13 | Viewed by 6575
Abstract
In this paper, the Multiscale Entropy (MSE) analysis of acceleration data collected from a wearable inertial sensor was compared with other features reported in the literature for observing falling behavior from acceleration data, and with traditional clinical scales for evaluating falling behavior. We use a fall risk assessment over a four-month period to examine participants older than 65 years in a community service context using simple clinical tests, including the Short Form Berg Balance Scale (SFBBS), the Timed Up and Go test (TUG), and the Short Portable Mental Status Questionnaire (SPMSQ), with wearable accelerometers worn during the TUG test. We classified participants into fallers and non-fallers to (1) compare the features extracted from the accelerometers and (2) categorize fall risk using statistics from TUG test results. For the combined TUG and SFBBS results, the defining features were test time, slope(A) and slope(B) in Sit(A)-to-stand(B), and range(A) and slope(B) in Stand(B)-to-sit(A). For the (1) SPMSQ; (2) TUG and SPMSQ; and (3) BBS and SPMSQ results, only range(A) in Stand(B)-to-sit(A) was a defining feature. From the MSE indicators, we found that, whether in the X, Y or Z direction, TUG, BBS, and the combined TUG and SFBBS are all distinguishable, showing that MSE can effectively classify participants in these clinical tests using behavioral actions. This study highlights the advantages of body-worn sensors as ordinary, low-cost tools available outside the laboratory. The results indicate that MSE analysis of acceleration data can be used as an effective metric to categorize the falling behavior of community-dwelling elderly. In addition to the clinical application, (1) our approach requires no expert physical therapist, nurse, or doctor for evaluations and (2) fallers can be categorized irrespective of the critical value from clinical tests. Full article
(This article belongs to the Special Issue Multiscale Entropy and Its Applications in Medicine and Biology)
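A minimal sketch of a multiscale entropy computation applied to one accelerometer axis (Costa et al.'s coarse-graining plus sample entropy); the embedding dimension, tolerance factor, scale range, and the stand-in signal are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r); naive O(N^2) template-matching implementation."""
    x = np.asarray(x, dtype=float)
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)   # Chebyshev distances
        return np.sum(d <= r) - len(t)                              # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r_factor=0.15):
    """Coarse-grain the signal at each scale and compute SampEn; the tolerance r is
    fixed from the original signal's standard deviation, as in Costa et al. (2002)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)    # non-overlapping window averages
        out.append(sample_entropy(coarse, m, r))
    return np.array(out)

# toy usage on a stand-in accelerometer axis
rng = np.random.default_rng(0)
acc_x = np.cumsum(rng.standard_normal(1000)) * 0.01 + rng.standard_normal(1000)
print(multiscale_entropy(acc_x, scales=range(1, 6)))
```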
Article
The Logical Consistency of Simultaneous Agnostic Hypothesis Tests
by Luís G. Esteves, Rafael Izbicki, Julio M. Stern and Rafael B. Stern
Entropy 2016, 18(7), 256; https://doi.org/10.3390/e18070256 - 13 Jul 2016
Cited by 14 | Viewed by 6254
Abstract
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and to non-statisticians (who are confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized from a Bayesian decision-theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region estimator-based tests. Examples of such tests are provided. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
Article
Link between Lie Group Statistical Mechanics and Thermodynamics of Continua
by Géry De Saxcé
Entropy 2016, 18(7), 254; https://doi.org/10.3390/e18070254 - 12 Jul 2016
Cited by 12 | Viewed by 4557
Abstract
In this work, we consider the value of the momentum map of the symplectic mechanics as an affine tensor called momentum tensor. From this point of view, we analyze the underlying geometric structure of the theories of Lie group statistical mechanics and relativistic thermodynamics of continua, formulated by Souriau independently of each other. We bridge the gap between them in the classical Galilean context. These geometric structures of the thermodynamics are rich and we think they might be a source of inspiration for the geometric theory of information based on the concept of entropy. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)
Article
The Use of Denoising and Analysis of the Acoustic Signal Entropy in Diagnosing Engine Valve Clearance
by Tomasz Figlus, Jozef Gnap, Tomáš Skrúcaný, Branislav Šarkan and Jozef Stoklosa
Entropy 2016, 18(7), 253; https://doi.org/10.3390/e18070253 - 12 Jul 2016
Cited by 27 | Viewed by 5086
Abstract
The paper presents a method for processing acoustic signals which allows the extraction, from a very noisy signal, of components which contain diagnostically useful information on the increased valve clearance of a combustion engine. The method uses two-stage denoising of the acoustic signal performed by means of a discrete wavelet transform. Afterwards, based on the signal cleaned up in this manner, its entropy was calculated as a quantitative measure of the qualitative changes caused by the excessive clearance. The testing and processing of the actual acoustic signal of a combustion engine enabled clear extraction of components which contain information on the valve clearance being diagnosed. Full article
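A minimal sketch of the general idea, wavelet-threshold denoising followed by a scalar entropy measure of the cleaned signal, using PyWavelets; the wavelet family, decomposition level, universal threshold, and histogram-based Shannon entropy are illustrative assumptions, not the paper's exact two-stage procedure or entropy definition.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db8", level=5):
    """Soft-threshold the detail coefficients of a DWT and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # universal threshold with the noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def shannon_entropy(signal, bins=64):
    """Shannon entropy of the amplitude histogram, a scalar measure of signal disorder."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# toy usage: compare the entropy of a noisy recording before and after denoising
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8192)
acoustic = np.sin(2 * np.pi * 120 * t) + 0.8 * rng.standard_normal(t.size)
clean = wavelet_denoise(acoustic)
print(shannon_entropy(acoustic), shannon_entropy(clean))
```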
Article
Effect of a Percutaneous Coronary Intervention Procedure on Heart Rate Variability and Pulse Transit Time Variability: A Comparison Study Based on Fuzzy Measure Entropy
by Guang Zhang, Chengyu Liu, Lizhen Ji, Jing Yang and Changchun Liu
Entropy 2016, 18(7), 246; https://doi.org/10.3390/e18070246 - 09 Jul 2016
Cited by 2 | Viewed by 6064
Abstract
Percutaneous coronary intervention (PCI) is a common treatment method for patients with coronary artery disease (CAD), but its effects on synchronously measured heart rate variability (HRV) and pulse transit time variability (PTTV) have not been well established. This study aimed to verify whether PCI for CAD patients affects both HRV and PTTV parameters. Sixteen CAD patients were enrolled. Two five-minute ECG and finger photoplethysmography (PPG) signals were recorded, one within 24 h before PCI and another within 24 h after PCI. The changes in RR and pulse transit time (PTT) intervals due to the PCI procedure were first compared. Then, HRV and PTTV were evaluated by a standard short-term time-domain variability index, the standard deviation of the time series (SDTS), and by our previously developed entropy-based index, fuzzy measure entropy (FuzzyMEn). To test the effect of different time series lengths on the HRV and PTTV results, we segmented the RR and PTT time series using four time windows of 200, 100, 50 and 25 beats, respectively. The PCI-induced changes in HRV and PTTV, as well as in RR and PTT intervals, are different. The PCI procedure significantly decreased RR intervals (before PCI 973 ± 85 vs. after PCI 907 ± 100 ms, p < 0.05) while significantly increasing PTT intervals (207 ± 18 vs. 214 ± 19 ms, p < 0.01). For HRV, SDTS output significantly lower values after PCI only for the 100- and 25-beat time windows, with no significant decreases for the other two time windows. By contrast, FuzzyMEn gave significantly lower values after PCI for all four time windows (all p < 0.05). For PTTV, SDTS hardly changed after PCI at any time window (all p > 0.90), whereas FuzzyMEn still reported significantly lower values (p < 0.05 for the 25-beat time window and p < 0.01 for the other three time windows). For both HRV and PTTV, with increasing time window length, SDTS decreased while FuzzyMEn increased. This pilot study demonstrated that the RR interval decreased whereas the PTT interval increased after the PCI procedure, and that there were significant reductions in both HRV and PTTV immediately after PCI according to the FuzzyMEn method, indicating changes in the underlying mechanisms of the cardiovascular system. Full article
(This article belongs to the Special Issue Entropy on Biosignals and Intelligent Systems)
Article
Maximum Entropy Learning with Deep Belief Networks
by Payton Lin, Szu-Wei Fu, Syu-Siang Wang, Ying-Hui Lai and Yu Tsao
Entropy 2016, 18(7), 251; https://doi.org/10.3390/e18070251 - 08 Jul 2016
Cited by 11 | Viewed by 14117
Abstract
Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of parameters in the DBN allows more effective generalization, less bias towards data distributions, and greater robustness to over-fitting compared to ML learning. Results on text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data are limited. Full article
(This article belongs to the Special Issue Information Theoretic Learning)
Article
Modeling Fluid’s Dynamics with Master Equations in Ultrametric Spaces Representing the Treelike Structure of Capillary Networks
by Andrei Khrennikov, Klaudia Oleschko and María De Jesús Correa López
Entropy 2016, 18(7), 249; https://doi.org/10.3390/e18070249 - 07 Jul 2016
Cited by 37 | Viewed by 6387
Abstract
We present a new conceptual approach for modeling fluid flows in random porous media, based on explicit exploration of the treelike geometry of complex capillary networks. Such patterns can be represented mathematically as ultrametric spaces and the dynamics of fluids by ultrametric diffusion. The images of p-adic fields, extracted from real multiscale rock samples and from some reference images, are depicted. In this model, the porous background is treated as the environment contributing to the coefficients of the evolutionary equations. For the simplest trees, these equations are essentially less complicated than those with fractional differential operators, which are commonly applied in geological studies looking for fractional analogs to conventional Euclidean space but with anomalous scaling and diffusion properties. It is possible to solve the former equation analytically and, in particular, to find stationary solutions. The main aim of this paper is to attract the attention of researchers working on modeling of geological processes to the novel ultrametric approach and to show some examples from petroleum reservoir static and dynamic characterization that integrate the p-adic approach with multifractals, thermodynamics and scaling. We also present a non-mathematician-friendly review of trees, ultrametric spaces and pseudo-differential operators on such spaces. Full article
Article
Thermoeconomic Coherence: A Methodology for the Analysis and Optimisation of Thermal Systems
by Antonio Rovira, José María Martínez-Val and Manuel Valdés
Entropy 2016, 18(7), 250; https://doi.org/10.3390/e18070250 - 05 Jul 2016
Cited by 3 | Viewed by 4613
Abstract
In the field of thermal systems, different approaches and methodologies have been proposed to merge thermodynamics and economics. They are usually referred to as thermoeconomic methodologies, and their objective is to find the optimum design of the thermal system given a specific objective function. Some thermoeconomic analyses go beyond that objective and attempt to find whether every component of the system is correctly designed or to quantify the inefficiencies of the components in economic terms. This paper takes another step in that direction and presents a new methodology to measure the thermoeconomic coherence of thermal systems, as well as the contribution of each parameter of the system to that coherence. It is based on the equality of marginal costs at the optimum. The methodology establishes a criterion for designing the system coherently. Additionally, it may be used to evaluate how far a specific design is from the optimum, to identify which components are undersized or oversized, and to measure the strength of the restrictions of the system. Finally, it may be extended to the analysis of uncertainties in the design process, providing a coherent design and sizing of the components with high uncertainties. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Article
Cumulative Paired φ-Entropy
by Ingo Klein, Benedikt Mangold and Monika Doll
Entropy 2016, 18(7), 248; https://doi.org/10.3390/e18070248 - 01 Jul 2016
Cited by 21 | Viewed by 7678
Abstract
A new kind of entropy will be introduced which generalizes both the differential entropy and the cumulative (residual) entropy. The generalization is twofold. First, we simultaneously define the entropy for cumulative distribution functions (cdfs) and survivor functions (sfs), instead of defining it separately for densities, cdfs, or sfs. Second, we consider a general “entropy generating function” φ, the same way Burbea et al. (IEEE Trans. Inf. Theory 1982, 28, 489–495) and Liese et al. (Convex Statistical Distances; Teubner-Verlag, 1987) did in the context of φ-divergences. Combining the ideas of φ-entropy and cumulative entropy leads to the new “cumulative paired φ-entropy” (CPEφ). This new entropy has already been discussed in at least four scientific disciplines, be it with certain modifications or simplifications. In fuzzy set theory, for example, cumulative paired φ-entropies were defined for membership functions, whereas in uncertainty and reliability theories some variations of CPEφ were recently considered as measures of information. With a single exception, the discussions in these scientific disciplines appear to have been held independently of each other. We consider CPEφ for continuous cdfs and show that CPEφ is a measure of dispersion rather than a measure of information. In the first place, this is demonstrated by deriving an upper bound determined by the standard deviation and by solving the maximum entropy problem under the restriction of a fixed variance. Next, this paper specifically shows that CPEφ satisfies the axioms of a dispersion measure. The corresponding dispersion functional can easily be estimated by an L-estimator, with all of its known asymptotic properties. CPEφ is the basis for several related concepts like mutual φ-information, φ-correlation, and φ-regression, which generalize Gini correlation and Gini regression. In addition, linear rank tests for scale that are based on the new entropy have been developed. We show that almost all known linear rank tests are special cases, and we introduce certain new tests. Moreover, formulas for different distributions and entropy calculations are presented for CPEφ if the cdf is available in closed form. Full article
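As a hedged orientation only, one form of the definition consistent with the abstract's description (the entropy generating function φ applied to both the cdf F and the survivor function 1 − F); the paper's exact normalization and sign conventions may differ.

```latex
CPE_\varphi(F) \;=\; \int_{-\infty}^{\infty} \Big[ \varphi\big(F(x)\big) + \varphi\big(1 - F(x)\big) \Big]\, dx ,
\qquad \text{with, e.g., } \varphi(u) = -u \log u ,
```

which would recover a cumulative paired Shannon-type entropy combining the cumulative entropy of the cdf and the cumulative residual entropy of the survivor function.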
Review
Entropy? Honest!
by Tommaso Toffoli
Entropy 2016, 18(7), 247; https://doi.org/10.3390/e18070247 - 30 Jun 2016
Cited by 7 | Viewed by 7798
Abstract
Here we deconstruct, and then in a reasoned way reconstruct, the concept of “entropy of a system”, paying particular attention to where the randomness may be coming from. We start with the core concept of entropy as a count associated with a description; this count (traditionally expressed in logarithmic form for a number of good reasons) is in essence the number of possibilities—specific instances or “scenarios”—that match that description. Very natural (and virtually inescapable) generalizations of the idea of description are the probability distribution and its quantum mechanical counterpart, the density operator. We track the process of dynamically updating entropy as a system evolves. Three factors may cause entropy to change: (1) the system’s internal dynamics; (2) unsolicited external influences on it; and (3) the approximations one has to make when one tries to predict the system’s future state. The latter task is usually hampered by hard-to-quantify aspects of the original description, limited data storage and processing resources, and possibly algorithmic inadequacy. Factors 2 and 3 introduce randomness—often huge amounts of it—into one’s predictions and accordingly degrade them. When forecasting, as long as the entropy bookkeeping is conducted in an honest fashion, this degradation will always lead to an entropy increase. To clarify the above point we introduce the notion of honest entropy, which coalesces much of what is of course already done, often tacitly, in responsible entropy-bookkeeping practice. This notion—we believe—will help to fill an expressivity gap in scientific discourse. With its help, we shall prove that any dynamical system—not just our physical universe—strictly obeys Clausius’s original formulation of the second law of thermodynamics if and only if it is invertible. Thus this law is a tautological property of invertible systems! Full article
Article
Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map
by Sen Han, Huihui Bai and Mengmeng Zhang
Entropy 2016, 18(7), 245; https://doi.org/10.3390/e18070245 - 29 Jun 2016
Cited by 2 | Viewed by 4009
Abstract
Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with an existing multiview image, images at any virtual viewpoint position can be synthesized efficiently, which enables the display of more realistic 3D scenes. Unlike the conventional 2D texture image, the depth map contains a large amount of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when using MD coding for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder a Lagrange optimization approach is designed to determine the amount of high-frequency DCT coefficients to be removed. For low computational complexity, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared with the corresponding reference scheme, the proposed method achieves better central and side rate-distortion performance. Full article
Article
Multiatom Quantum Coherences in Micromasers as Fuel for Thermal and Nonthermal Machines
by Ceren B. Dağ, Wolfgang Niedenzu, Özgür E. Müstecaplıoğlu and Gershon Kurizki
Entropy 2016, 18(7), 244; https://doi.org/10.3390/e18070244 - 29 Jun 2016
Cited by 81 | Viewed by 8395
Abstract
In this paper, we address the question: To what extent is the quantum state preparation of multiatom clusters (before they are injected into the microwave cavity) instrumental for determining not only the kind of machine we may operate, but also the quantitative bounds of its performance? Figuratively speaking, if the multiatom cluster is the “crude oil”, the question is: Which preparation of the cluster is the refining process that can deliver a “gasoline” with a “specific octane”? We classify coherences or quantum correlations among the atoms according to their ability to serve as: (i) fuel for nonthermal machines corresponding to atomic states whose coherences displace or squeeze the cavity field, as well as cause its heating; and (ii) fuel that is purely “combustible”, i.e., corresponds to atomic states that only allow for heat and entropy exchange with the field and can energize a proper heat engine. We identify highly promising multiatom states for each kind of fuel and propose viable experimental schemes for their implementation. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
Article
Nonlinear Thermodynamic Analysis and Optimization of a Carnot Engine Cycle
by Michel Feidt, Monica Costea, Stoian Petrescu and Camelia Stanciu
Entropy 2016, 18(7), 243; https://doi.org/10.3390/e18070243 - 28 Jun 2016
Cited by 17 | Viewed by 5337
Abstract
As part of the efforts to unify the various branches of Irreversible Thermodynamics, the proposed work reconsiders the approach of the Carnot engine taking into account the finite physical dimensions (heat transfer conductances) and the finite speed of the piston. The models introduce the irreversibility of the engine by two methods involving different constraints. The first method introduces the irreversibility by a so-called irreversibility ratio in the entropy balance applied to the cycle, while in the second method it is emphasized by the entropy generation rate. Various forms of heat transfer laws are analyzed, but most of the results are given for the case of the linear law. Also, individual cases are studied and reported in order to provide a simple analytical form of the results. The engine model developed allowed a formal optimization using the calculus of variations. Full article
(This article belongs to the Special Issue Exploring the Second Law of Thermodynamics)