Entropy, Volume 13, Issue 7 (July 2011) – 9 articles, Pages 1212–1424

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Article
Application of the EGM Method to a LED-Based Spotlight: A Constrained Pseudo-Optimization Design Process Based on the Analysis of the Local Entropy Generation Maps
by Giorgio Giangaspero and Enrico Sciubba
Entropy 2011, 13(7), 1212-1228; https://doi.org/10.3390/e13071212 - 27 Jun 2011
Cited by 9 | Viewed by 7096
Abstract
In this paper, the entropy generation minimization (EGM) method is applied to an industrial heat transfer problem: the forced convective cooling of a LED-based spotlight. The design specification calls for eighteen diodes arranged on a circular copper plate of 35 mm diameter. Every diode dissipates 3 W, and the maximum allowed temperature of the plate is 80 °C. The cooling relies on the forced convection driven by a jet of air impinging on the plate. An initial complex geometry of plate fins is presented and analyzed with a commercial CFD code that computes the entropy generation rate. A pseudo-optimization process is carried out via a successive series of design modifications based on a careful analysis of the entropy generation maps. One of the advantages of the EGM method is that the rationale behind each step of the design process can be justified on a physical basis. It is found that the best performance is attained when the fins are periodically spaced in the radial direction.
(This article belongs to the Special Issue Entropy Generation Minimization)
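The local entropy generation maps referred to in the abstract combine a thermal and a viscous contribution. A minimal sketch of the quantity being mapped, assuming the standard incompressible-flow expression and illustrative air properties (the gradient, dissipation, and temperature values below are assumptions, not data from the paper):

```python
import math

def local_entropy_generation(grad_T, phi_visc, T, k=0.026, mu=1.8e-5):
    """Local volumetric entropy generation rate [W/(m^3 K)] for an
    incompressible flow: thermal part (k/T^2)|grad T|^2 plus viscous
    part (mu/T)*Phi, with Phi the viscous dissipation function.
    Defaults are air-like conductivity and viscosity."""
    gx, gy, gz = grad_T
    s_thermal = k / T**2 * (gx**2 + gy**2 + gz**2)
    s_viscous = mu / T * phi_visc
    return s_thermal + s_viscous

# One hypothetical CFD cell: a 500 K/m temperature gradient at 350 K
s = local_entropy_generation((500.0, 0.0, 0.0), phi_visc=1.0e4, T=350.0)
```

In such maps the thermal term typically dominates near poorly cooled fin regions, which is what guides the successive design modifications.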

Article
On Accuracy of PDF Divergence Estimators and Their Applicability to Representative Data Sampling
by Marcin Budka, Bogdan Gabrys and Katarzyna Musial
Entropy 2011, 13(7), 1229-1266; https://doi.org/10.3390/e13071229 - 08 Jul 2011
Cited by 24 | Viewed by 9642
Abstract
Generalisation error estimation is an important issue in machine learning. Cross-validation, traditionally used for this purpose, requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is however possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This paper investigates the possibility of using various probability density function divergence measures for the purpose of representative data sampling. As it turns out, the first difficulty one needs to deal with is estimation of the divergence itself. In contrast to other publications on this subject, the experimental results provided in this study show that in many cases it is not possible unless samples consisting of thousands of instances are used. Exhaustive experiments on divergence-guided representative data sampling have been performed using 26 publicly available benchmark datasets and 70 PDF divergence estimators, and their results have been analysed and discussed.
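The sample-size difficulty described in the abstract can be reproduced with a deliberately naive histogram-based KL divergence estimator; the bin count, support, and Gaussian test data below are illustrative assumptions of this sketch, not the estimators studied in the paper:

```python
import math
import random

def kl_histogram(xs, ys, bins=20, lo=-5.0, hi=5.0, eps=1e-9):
    """Naive histogram estimate of KL(P||Q) from samples xs ~ P, ys ~ Q."""
    def hist(data):
        counts = [0] * bins
        for v in data:
            i = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
            counts[i] += 1
        n = len(data)
        # additive smoothing keeps the estimate finite for empty bins
        return [(c + eps) / (n + bins * eps) for c in counts]
    p, q = hist(xs), hist(ys)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
# Identical distributions: the true divergence is 0, yet small samples
# give a strongly upward-biased estimate.
small = kl_histogram([random.gauss(0, 1) for _ in range(100)],
                     [random.gauss(0, 1) for _ in range(100)])
large = kl_histogram([random.gauss(0, 1) for _ in range(100_000)],
                     [random.gauss(0, 1) for _ in range(100_000)])
```

With only 100 samples per side the estimate is dominated by histogram noise; thousands of instances are needed before it approaches the true value of zero, consistent with the abstract's observation.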

Article
Tsallis-Based Nonextensive Analysis of the Southern California Seismicity
by Luciano Telesca
Entropy 2011, 13(7), 1267-1280; https://doi.org/10.3390/e13071267 - 11 Jul 2011
Cited by 61 | Viewed by 7014
Abstract
Nonextensive statistics has become a very useful tool for describing the complexity of dynamic systems. Recently, analysis of the magnitude distribution of earthquakes has been increasingly performed in the context of nonextensivity. In the present paper, a nonextensive analysis of the southern California earthquake catalog was performed. The results show that the nonextensivity parameter q lies in the same range as obtained for other seismic areas, thus suggesting a sort of universal character in the nonextensive interpretation of seismicity.
(This article belongs to the Special Issue Tsallis Entropy)
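Nonextensive descriptions of magnitude distributions are built from the Tsallis q-exponential. A minimal sketch of that function and a numerical check of its q → 1 limit (the quoted q range is the one commonly reported in the seismological literature, not a result reproduced here):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q) x]^(1/(1-q)) for q != 1,
    taking the positive part of the bracket; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# Values of q around 1.6-1.7 are typically reported for seismic catalogs;
# here we only verify numerically that q -> 1 recovers the exponential.
approx = q_exp(-2.0, 1.0001)
exact = math.exp(-2.0)
```

The heavier-than-exponential tail produced by q > 1 is what lets the q-exponential fit earthquake magnitude distributions better than a pure Gutenberg–Richter exponential at large magnitudes.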

Article
Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches
by Michael J. Markham
Entropy 2011, 13(7), 1281-1304; https://doi.org/10.3390/e13071281 - 18 Jul 2011
Cited by 6 | Viewed by 5579
Abstract
Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
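As a toy illustration of a maximum-entropy computation under a linear constraint (far simpler than the three methods described above, and not taken from the paper), one can find the max-entropy distribution over a few states with a prescribed mean: the solution is a one-parameter exponential family, and the parameter can be found by bisection since the mean is monotone in it.

```python
import math

def maxent_with_mean(values, target_mean, iters=100):
    """Max-entropy distribution p over `values` with sum(p*v) = target_mean.
    The maximizer has the form p_i proportional to exp(lam * v_i);
    solve for lam by bisection, since the mean is increasing in lam."""
    def mean_for(lam):
        ws = [math.exp(lam * v) for v in values]
        z = sum(ws)
        return sum(w * v for w, v in zip(ws, values)) / z
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * v) for v in values]
    z = sum(ws)
    return [w / z for w in ws]

# Mean 1.0 on states {0, 1, 2}: the max-entropy answer is uniform.
p = maxent_with_mean([0, 1, 2], 1.0)
```

The paper's methods handle many interacting linear and independence constraints over a causal network; this sketch only shows the core idea that maximum entropy subject to linear constraints yields an exponential-family form.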

Article
State Operator Correspondence and Entanglement in AdS2/CFT1
by Ashoke Sen
Entropy 2011, 13(7), 1305-1323; https://doi.org/10.3390/e13071305 - 19 Jul 2011
Cited by 88 | Viewed by 7444
Abstract
Since Euclidean global AdS2 space represented as a strip has two boundaries, the state-operator correspondence in the dual CFT1 reduces to the standard map from the operators acting on a single copy of the Hilbert space to states in the tensor product of two copies of the Hilbert space. Using this picture we argue that the corresponding states in the dual string theory living on AdS2 × K are described by the twisted version of the Hartle–Hawking states, the twists being generated by a large unitary group of symmetries that this string theory must possess. This formalism makes natural the dual interpretation of the black hole entropy: as the logarithm of the degeneracy of ground states of the quantum mechanics describing the low energy dynamics of the black hole, and also as an entanglement entropy between the two copies of the same quantum theory living on the two boundaries of global AdS2 separated by the event horizon.
(This article belongs to the Special Issue Black Hole Thermodynamics)

Article
Partition Function of the Schwarzschild Black Hole
by Jarmo Mäkelä
Entropy 2011, 13(7), 1324-1354; https://doi.org/10.3390/e13071324 - 19 Jul 2011
Cited by 12 | Viewed by 6456
Abstract
We consider a microscopic model of a stretched horizon of the Schwarzschild black hole. In our model the stretched horizon consists of a finite number of discrete constituents. Assuming that the quantum states of the Schwarzschild black hole are encoded in the quantum states of the constituents of its stretched horizon in a certain manner, we obtain an explicit, analytic expression for the partition function of the hole. Our partition function predicts, among other things, the Hawking effect, and provides it with a microscopic, statistical interpretation.
(This article belongs to the Special Issue Black Hole Thermodynamics)

Article
Effective Conformal Descriptions of Black Hole Entropy
by Steven Carlip
Entropy 2011, 13(7), 1355-1379; https://doi.org/10.3390/e13071355 - 20 Jul 2011
Cited by 47 | Viewed by 6439
Abstract
It is no longer considered surprising that black holes have temperatures and entropies. What remains surprising, though, is the universality of these thermodynamic properties: their exceptionally simple and general form, and the fact that they can be derived from many very different descriptions of the underlying microscopic degrees of freedom. I review the proposal that this universality arises from an approximate conformal symmetry, which permits an effective "conformal dual" description that is largely independent of the microscopic details.
(This article belongs to the Special Issue Black Hole Thermodynamics)
Article
Diffuser and Nozzle Design Optimization by Entropy Generation Minimization
by Bastian Schmandt and Heinz Herwig
Entropy 2011, 13(7), 1380-1402; https://doi.org/10.3390/e13071380 - 20 Jul 2011
Cited by 32 | Viewed by 12879
Abstract
Diffusers and nozzles within a flow system are optimized with respect to their wall shapes for a given change in cross sections. The optimization target is a low value of the head loss coefficient K, which can be linked to the overall entropy generation due to the conduit component. First, a polynomial shape of the wall with two degrees of freedom is assumed. As a second approach, six equally spaced diameters in a diffuser are determined by a genetic algorithm such that the entropy generation and thus the head loss is minimized. It turns out that a visualization of cross-section-averaged entropy generation rates along the flow path should be used to identify sources of high entropy generation before and during the optimization. Thus it will be possible to decide whether a given parametric representation of a component's shape only leads to a redistribution of losses or (in the most-favored case) to minimal values for K.
(This article belongs to the Special Issue Entropy Generation Minimization)
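The genetic-algorithm approach described above can be caricatured in a few lines; the surrogate objective below (penalizing abrupt diameter changes via squared second differences) merely stands in for the CFD-computed entropy generation and is an assumption of this sketch, as are all parameter values:

```python
import random

def surrogate_loss(ds):
    """Toy stand-in for the head loss coefficient K: penalize abrupt
    diameter changes along the diffuser. The real objective in the
    paper is the entropy generation computed by CFD."""
    return sum((ds[i - 1] - 2 * ds[i] + ds[i + 1]) ** 2
               for i in range(1, len(ds) - 1))

def evolve(d_in=0.02, d_out=0.04, n_free=6, pop=40, gens=300, seed=1):
    """Evolve the six free diameters between fixed inlet and outlet."""
    rng = random.Random(seed)
    def make():
        return [d_in] + sorted(rng.uniform(d_in, d_out)
                               for _ in range(n_free)) + [d_out]
    population = [make() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=surrogate_loss)
        survivors = population[: pop // 2]          # elitist selection
        offspring = []
        while len(survivors) + len(offspring) < pop:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            j = rng.randrange(1, n_free + 1)              # mutate one free diameter
            child[j] = min(d_out, max(d_in, child[j] + rng.gauss(0.0, 0.001)))
            offspring.append(child)
        population = survivors + offspring
    return min(population, key=surrogate_loss)

best = evolve()
```

Under this surrogate the optimum is a smooth, near-linear diameter profile; with the CFD objective of the paper the interesting cases are precisely those where the best profile deviates from such a naive shape.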

Article
Joint Markov Blankets in Feature Sets Extracted from Wavelet Packet Decompositions
by Gert Van Dijck and Marc M. Van Hulle
Entropy 2011, 13(7), 1403-1424; https://doi.org/10.3390/e13071403 - 22 Jul 2011
Cited by 26 | Viewed by 7391
Abstract
For two decades, wavelet packet decompositions have been shown to be effective as a generic approach to feature extraction from time series and images for the prediction of a target variable. Redundancies exist between the wavelet coefficients and between the energy features that are derived from the wavelet coefficients. We assess these redundancies in wavelet packet decompositions by means of the Markov blanket filtering theory. We introduce the concept of joint Markov blankets. It is shown that joint Markov blankets are a natural extension of Markov blankets, which are defined for single features, to a set of features. We show that these joint Markov blankets exist in feature sets consisting of the wavelet coefficients. Furthermore, we prove that wavelet energy features from the highest frequency resolution level form a joint Markov blanket for all other wavelet energy features. The joint Markov blanket theory indicates that one can expect an increase of classification accuracy with the increase of the frequency resolution level of the energy features.
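The additivity behind the result on highest-resolution energy features can be seen in a toy orthonormal wavelet packet decomposition: each parent node's energy equals the sum of its children's energies, so the finest-level energies determine every coarser energy feature. A minimal sketch, assuming Haar filters for simplicity (the paper's proof is more general than this illustration):

```python
import math

def haar_split(xs):
    """One orthonormal Haar step: low-pass and high-pass halves."""
    s = math.sqrt(2.0)
    lo = [(xs[2 * i] + xs[2 * i + 1]) / s for i in range(len(xs) // 2)]
    hi = [(xs[2 * i] - xs[2 * i + 1]) / s for i in range(len(xs) // 2)]
    return lo, hi

def packet_energies(xs, depth):
    """Energies of all wavelet packet nodes at the given depth."""
    nodes = [xs]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_split(node)]
    return [sum(c * c for c in node) for node in nodes]

signal = [math.sin(0.3 * i) + 0.1 * math.cos(2.5 * i) for i in range(16)]
total = sum(c * c for c in signal)
leaf = packet_energies(signal, depth=2)  # 4 node energies at depth 2
```

Because the transform is orthonormal, the leaf energies sum to the total signal energy, and each shallower node's energy is the sum of its two children's energies; coarser energy features therefore carry no information beyond the finest-level ones, which is the intuition behind the joint Markov blanket result.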
