Table of Contents

Entropy, Volume 13, Issue 7 (July 2011), Pages 1212-1424

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Open AccessArticle Application of the EGM Method to a LED-Based Spotlight: A Constrained Pseudo-Optimization Design Process Based on the Analysis of the Local Entropy Generation Maps
Entropy 2011, 13(7), 1212-1228; doi:10.3390/e13071212
Received: 22 April 2011 / Revised: 5 June 2011 / Accepted: 15 June 2011 / Published: 27 June 2011
Cited by 3 | PDF Full-text (5560 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the entropy generation minimization (EGM) method is applied to an industrial heat transfer problem: the forced convective cooling of a LED-based spotlight. The design specification calls for eighteen diodes arranged on a circular copper plate of 35 mm diameter. Every diode dissipates 3 W and the maximum allowed temperature of the plate is 80 °C. The cooling relies on the forced convection driven by a jet of air impinging on the plate. An initial complex geometry of plate fins is presented and analyzed with a commercial CFD code that computes the entropy generation rate. A pseudo-optimization process is carried out via a successive series of design modifications based on a careful analysis of the entropy generation maps. One of the advantages of the EGM method is that the rationale behind each step of the design process can be justified on a physical basis. It is found that the best performance is attained when the fins are periodically spaced in the radial direction. Full article
(This article belongs to the Special Issue Entropy Generation Minimization)
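The entropy generation maps that drive this pseudo-optimization visualize the local (volumetric) entropy generation rate, which for a single-phase Newtonian fluid splits into a thermal and a viscous contribution, S'''_gen = (k/T²)|∇T|² + (μ/T)Φ. A minimal numpy sketch on a toy 2-D field (the grid, fluid properties, and temperature/velocity fields below are illustrative assumptions, not the paper's CFD model):

```python
import numpy as np

# Toy 2-D temperature and velocity fields on a uniform grid (illustration only)
nx = ny = 50
dx = dy = 1e-3                       # grid spacing [m]
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

T = 300.0 + 50.0 * np.exp(-((X - 0.025)**2 + (Y - 0.025)**2) / 1e-4)  # hot spot [K]
u = np.ones_like(X)                  # uniform x-velocity [m/s]
v = 0.1 * np.sin(2 * np.pi * X / x[-1])

k  = 0.026                           # thermal conductivity of air [W/(m K)]
mu = 1.8e-5                          # dynamic viscosity of air [Pa s]

dTdx, dTdy = np.gradient(T, dx, dy)
dudx, dudy = np.gradient(u, dx, dy)
dvdx, dvdy = np.gradient(v, dx, dy)

# Thermal part: k/T^2 |grad T|^2 ; viscous part: mu/T * Phi (2-D dissipation function)
s_thermal = k / T**2 * (dTdx**2 + dTdy**2)
phi = 2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2
s_viscous = mu / T * phi

s_gen = s_thermal + s_viscous        # local entropy generation rate [W/(m^3 K)]
total = s_gen.sum() * dx * dy        # integral per unit depth
print(total)
```

A map of `s_gen` is the kind of field the design modifications are judged against: regions of high local generation indicate where the fin geometry wastes exergy.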

Open AccessArticle On Accuracy of PDF Divergence Estimators and Their Applicability to Representative Data Sampling
Entropy 2011, 13(7), 1229-1266; doi:10.3390/e13071229
Received: 28 May 2011 / Accepted: 2 July 2011 / Published: 8 July 2011
Cited by 8 | PDF Full-text (4212 KB)
Abstract
Generalisation error estimation is an important issue in machine learning. Cross-validation traditionally used for this purpose requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is however possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This paper investigates the possibility of using various probability density function divergence measures for the purpose of representative data sampling. As it turned out, the first difficulty one needs to deal with is estimation of the divergence itself. In contrast to other publications on this subject, the experimental results provided in this study show that in many cases it is not possible unless samples consisting of thousands of instances are used. Exhaustive experiments on the divergence guided representative data sampling have been performed using 26 publicly available benchmark datasets and 70 PDF divergence estimators, and their results have been analysed and discussed. Full article
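The abstract's point that divergence estimation is itself hard can be seen with a simple estimator. Below is a 1-D nearest-neighbour estimator of the Kullback–Leibler divergence (a generic Pérez-Cruz-style construction; whether it is among the paper's 70 estimators is not stated here, and the sample sizes, seed, and brute-force distance computation are choices made for this sketch):

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """1-D k-NN estimator of KL(P||Q) from samples x ~ P and y ~ Q.
    Brute-force O(n^2) distances; fine for a few thousand samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n, m = len(x), len(y)
    # distance from each x_i to its k-th nearest neighbour within x (excluding itself)
    dxx = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(dxx, np.inf)
    rho = np.sort(dxx, axis=1)[:, k - 1]
    # distance from each x_i to its k-th nearest neighbour within y
    dxy = np.abs(x[:, None] - y[None, :])
    nu = np.sort(dxy, axis=1)[:, k - 1]
    # dimension d = 1 form of the estimator
    return np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
same = knn_kl_divergence(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000))
shifted = knn_kl_divergence(rng.normal(0, 1, 2000), rng.normal(3, 1, 2000))
print(same, shifted)   # analytically: KL = 0 and KL = 4.5 respectively
```

Even with 2000 samples per distribution the estimates carry visible noise, consistent with the study's finding that reliable divergence estimation may require samples of thousands of instances.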
Open AccessArticle Tsallis-Based Nonextensive Analysis of the Southern California Seismicity
Entropy 2011, 13(7), 1267-1280; doi:10.3390/e13071267
Received: 6 June 2011 / Revised: 21 June 2011 / Accepted: 6 July 2011 / Published: 11 July 2011
Cited by 20 | PDF Full-text (8015 KB) | HTML Full-text | XML Full-text
Abstract
Nonextensive statistics has become a very useful tool for describing the complexity of dynamic systems. Recently, analysis of the magnitude distribution of earthquakes has been increasingly carried out in the context of nonextensivity. In the present paper, a nonextensive analysis of the southern California earthquake catalog was performed. The results show that the nonextensivity parameter q lies in the same range as obtained for other, different seismic areas, thus suggesting a sort of universal character in the nonextensive interpretation of seismicity. Full article
(This article belongs to the Special Issue Tsallis Entropy)
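The parameter q enters such analyses through the Tsallis q-exponential, which replaces the ordinary exponential in the magnitude distribution and reduces to it as q → 1. A minimal sketch of the q-functions (not the paper's fitting procedure; the example q values are only indicative of the range typically reported for seismicity):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)); -> exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0          # the cutoff branch of the q-exponential
    return base ** (1.0 / (1.0 - q))

def q_log(x, q):
    """q-logarithm, the inverse of q_exp on its positive branch."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# Heavier-than-exponential tail for q > 1, as used in nonextensive seismicity models
print(q_exp(-1.0, 1.65), math.exp(-1.0))
```

For q > 1 the q-exponential decays as a power law rather than exponentially, which is what lets a single q capture the heavy tail of earthquake magnitude distributions.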
Open AccessArticle Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches
Entropy 2011, 13(7), 1281-1304; doi:10.3390/e13071281
Received: 20 April 2011 / Revised: 23 May 2011 / Accepted: 23 June 2011 / Published: 18 July 2011
PDF Full-text (1536 KB)
Abstract
Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples. Full article
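As a generic illustration of maximum entropy under linear constraints (not the paper's Sequential Maximum Entropy or Tribus algorithms): when the only constraints are the marginals of two binary variables, iterative proportional fitting converges to the ME joint distribution, which is the independent product of the marginals.

```python
import numpy as np

def max_entropy_joint(p_a, p_b, iters=50):
    """Iterative proportional fitting: the maximum-entropy joint distribution
    over two binary variables A, B whose marginals P(A=1)=p_a, P(B=1)=p_b are
    fixed. With only marginal constraints, the ME solution is the independent joint."""
    p = np.full((2, 2), 0.25)                      # start from the uniform distribution
    target_a = np.array([1 - p_a, p_a])
    target_b = np.array([1 - p_b, p_b])
    for _ in range(iters):
        p *= (target_a / p.sum(axis=1))[:, None]   # rescale rows to match marginal of A
        p *= (target_b / p.sum(axis=0))[None, :]   # rescale columns to match marginal of B
    return p

p = max_entropy_joint(0.3, 0.8)
print(p)
```

Adding independence constraints between specific variables, as in the algorithms described above, is what makes the general problem substantially harder than this two-variable toy case.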
Open AccessArticle State Operator Correspondence and Entanglement in AdS2/CFT1
Entropy 2011, 13(7), 1305-1323; doi:10.3390/e13071305
Received: 13 June 2011 / Accepted: 22 June 2011 / Published: 19 July 2011
Cited by 47 | PDF Full-text (155 KB) | HTML Full-text | XML Full-text
Abstract
Since Euclidean global AdS2 space represented as a strip has two boundaries, the state-operator correspondence in the dual CFT1 reduces to the standard map from the operators acting on a single copy of the Hilbert space to states in the tensor product of two copies of the Hilbert space. Using this picture we argue that the corresponding states in the dual string theory living on AdS2 × K are described by the twisted version of the Hartle–Hawking states, the twists being generated by a large unitary group of symmetries that this string theory must possess. This formalism makes natural the dual interpretation of the black hole entropy—as the logarithm of the degeneracy of ground states of the quantum mechanics describing the low energy dynamics of the black hole, and also as an entanglement entropy between the two copies of the same quantum theory living on the two boundaries of global AdS2 separated by the event horizon. Full article
(This article belongs to the Special Issue Black Hole Thermodynamics)
Open AccessArticle Partition Function of the Schwarzschild Black Hole
Entropy 2011, 13(7), 1324-1354; doi:10.3390/e13071324
Received: 23 May 2011 / Revised: 8 July 2011 / Accepted: 13 July 2011 / Published: 19 July 2011
Cited by 4 | PDF Full-text (221 KB) | HTML Full-text | XML Full-text
Abstract
We consider a microscopic model of a stretched horizon of the Schwarzschild black hole. In our model the stretched horizon consists of a finite number of discrete constituents. Assuming that the quantum states of the Schwarzschild black hole are encoded in the quantum states of the constituents of its stretched horizon in a certain manner we obtain an explicit, analytic expression for the partition function of the hole. Our partition function predicts, among other things, the Hawking effect, and provides it with a microscopic, statistical interpretation. Full article
(This article belongs to the Special Issue Black Hole Thermodynamics)
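For reference, any such partition function must reproduce, for a Schwarzschild hole of mass M, the standard Hawking temperature and Bekenstein–Hawking entropy (quoted here as the textbook expressions, not the paper's derivation):

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
S_{BH} = \frac{k_B c^3 A}{4\hbar G}, \qquad
A = 16\pi \left(\frac{GM}{c^2}\right)^2 .
```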
Open AccessArticle Effective Conformal Descriptions of Black Hole Entropy
Entropy 2011, 13(7), 1355-1379; doi:10.3390/e13071355
Received: 1 July 2011 / Revised: 12 July 2011 / Accepted: 19 July 2011 / Published: 20 July 2011
Cited by 21 | PDF Full-text (190 KB) | HTML Full-text | XML Full-text
Abstract
It is no longer considered surprising that black holes have temperatures and entropies. What remains surprising, though, is the universality of these thermodynamic properties: their exceptionally simple and general form, and the fact that they can be derived from many very different descriptions of the underlying microscopic degrees of freedom. I review the proposal that this universality arises from an approximate conformal symmetry, which permits an effective “conformal dual” description that is largely independent of the microscopic details. Full article
(This article belongs to the Special Issue Black Hole Thermodynamics)
Open AccessArticle Diffuser and Nozzle Design Optimization by Entropy Generation Minimization
Entropy 2011, 13(7), 1380-1402; doi:10.3390/e13071380
Received: 10 June 2011 / Revised: 1 July 2011 / Accepted: 15 July 2011 / Published: 20 July 2011
Cited by 8 | PDF Full-text (998 KB) | HTML Full-text | XML Full-text
Abstract
Diffusers and nozzles within a flow system are optimized with respect to their wall shapes for a given change in cross sections. The optimization target is a low value of the head loss coefficient K, which can be linked to the overall entropy generation due to the conduit component. First, a polynomial shape of the wall with two degrees of freedom is assumed. As a second approach six equally spaced diameters in a diffuser are determined by a genetic algorithm such that the entropy generation and thus the head loss is minimized. It turns out that a visualization of cross section averaged entropy generation rates along the flow path should be used to identify sources of high entropy generation before and during the optimization. Thus it will be possible to decide whether a given parametric representation of a component’s shape only leads to a redistribution of losses or (in the most-favored case) to minimal values for K. Full article
(This article belongs to the Special Issue Entropy Generation Minimization)
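The second approach, determining six diameters with a genetic algorithm, can be illustrated with a generic real-coded GA on a stand-in objective (the true evaluation requires the CFD head-loss computation; the surrogate loss, diameter bounds, and all GA parameters below are assumptions made for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_loss(d):
    """Surrogate for the head-loss coefficient K: penalizes deviation from a
    smooth linear diameter progression between inlet (1.0) and outlet (2.0)."""
    ideal = np.linspace(1.0, 2.0, d.shape[-1] + 2)[1:-1]
    return np.sum((d - ideal) ** 2, axis=-1)

def genetic_minimize(loss, n_genes=6, pop=40, gens=100,
                     lo=1.0, hi=2.0, mut=0.05, elite=10):
    """Minimal real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation; genes clipped to the feasible range [lo, hi]."""
    P = rng.uniform(lo, hi, size=(pop, n_genes))
    for _ in range(gens):
        P = P[np.argsort(loss(P))]                    # rank population by fitness
        parents = P[:elite]                           # elites survive unchanged
        children = []
        for _ in range(pop - elite):
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(n_genes) < 0.5          # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, mut, n_genes)
            children.append(np.clip(child, lo, hi))
        P = np.vstack([parents, children])
    return P[np.argmin(loss(P))]

best = genetic_minimize(toy_loss)
print(best, toy_loss(best))
```

In the paper the objective is the entropy-generation-based head loss; only the evaluation function would change, which is what makes the GA wrapper reusable across conduit shapes.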

Open AccessArticle Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions
Entropy 2011, 13(7), 1403-1424; doi:10.3390/e13071403
Received: 1 June 2011 / Revised: 12 July 2011 / Accepted: 18 July 2011 / Published: 22 July 2011
PDF Full-text (236 KB) | HTML Full-text | XML Full-text
Abstract
For two decades, wavelet packet decompositions have been shown to be effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features. Full article
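The energy features in question are the squared norms of the terminal nodes of a full wavelet packet tree, in which both the approximation and the detail branch are split at every level. A hand-rolled Haar sketch (in practice a library such as PyWavelets would be used; the signal here is synthetic):

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step: (approximation, detail)."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_packet_energies(x, levels):
    """Energies of all 2**levels terminal nodes of a full Haar wavelet packet tree."""
    nodes = [np.asarray(x, float)]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)      # split approximation AND detail branches
            nxt.extend([a, d])
        nodes = nxt
    return np.array([np.sum(n ** 2) for n in nodes])

x = np.random.default_rng(2).normal(size=64)
e = wavelet_packet_energies(x, levels=3)
print(e)
```

Because each Haar step is orthonormal, the terminal-node energies always sum to the signal energy; the redundancy the paper analyses lies in how that fixed total is distributed across levels.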

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18