
Displaying articles 1-32

Editorial
p. 4060-4087
Received: 21 October 2013; in revised form: 9 June 2014 / Accepted: 1 July 2014 / Published: 18 July 2014

Abstract: This article concludes the special issue on Biosemiotic Entropy, looking toward the future on the basis of current and prior results. It highlights certain aspects of the series concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs is presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed to show why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy, the kind impacting biosignaling systems.

Research
p. 3552-3572
Received: 28 April 2014; in revised form: 19 June 2014 / Accepted: 24 June 2014 / Published: 26 June 2014

Abstract: We discuss a special class of generalized divergence measures constructed from generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for the Tsallis entropy as a typical example.
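
The limit connecting the Tsallis entropy to the Boltzmann-Gibbs-Shannon entropy as q approaches 1 can be checked numerically. A minimal sketch using the generic definitions, not the authors' generator-function formalism:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def shannon_entropy(p):
    """Shannon entropy in nats, the q -> 1 limit of S_q."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 2.0))        # quadratic (Gini-type) entropy: 0.62
print(tsallis_entropy(p, 1.000001))   # approaches the Shannon entropy
print(shannon_entropy(p))
```

For q = 2 the value is simply one minus the sum of squared probabilities; as q tends to 1 the two definitions agree.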

p. 3573-3589
Received: 14 April 2014; in revised form: 26 June 2014 / Accepted: 26 June 2014 / Published: 30 June 2014

Abstract: Rotating stall is generally the first instability met in turbomachinery, before surge. This 3D phenomenon is characterized by one or more stalled flow cells which rotate at a fraction of the impeller speed. The goal of the present work is to shed some light on entropy generation in a centrifugal fan under rotating stall conditions. A numerical simulation of entropy generation is carried out with the ANSYS Fluent software, which solves the Navier-Stokes equations, supplemented by user-defined functions (UDFs). The entropy generation characteristics in the centrifugal fan are presented and discussed for five typical conditions: the design condition, the occurrence and the development of stall inception, and rotating stall conditions at two throttle coefficients. The results show that entropy generation increases after the occurrence of stall inception. The high entropy generation areas move along the circumferential and axial directions, and finally merge into one stall cell. The entropy generation rate during circumferential propagation of the stall cell is also discussed, showing that the entropy generation history resembles a sine curve in both the impeller and the volute, and that the volute tongue has a strong influence on entropy generation in the centrifugal fan.

p. 3590-3604
Received: 5 May 2014; in revised form: 24 June 2014 / Accepted: 25 June 2014 / Published: 30 June 2014

Abstract: Yang and Qiu proposed an expected utility-entropy (EU-E) measure of risk, which reflects an individual's intuitive attitude toward risk. Luce et al. derived numerical representations under behavioral axioms about preference orderings among gambles and their joint receipt, which further demonstrates the reasonableness of the EU-E decision model as a normative one. In this paper, combining normalized expected utility and entropy, we improve the EU-E measure of risk and decision model, and propose a normalized EU-E measure of risk and decision model. The normalized EU-E measure of risk has some normative properties under certain conditions. Moreover, the normalized EU-E decision model can, to some extent, serve as a descriptive model. Using this model, two cases of the common ratio effect and the common consequence effect, which are examples of certainty effects, can be explained in an intuitive way.

p. 3605-3634
Received: 4 May 2014; in revised form: 11 June 2014 / Accepted: 19 June 2014 / Published: 30 June 2014

Abstract: This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of commonly used computational techniques such as coarse-graining and bin counting. The stochastic flow problem is first reduced to a computationally tractable low-dimensional system using a reduced model approach and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that stochasticity may sometimes actually enhance the self-organization process.
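
The constant decay rate of H for the Lorenz system reflects the link between dH/dt and the phase-space expansion rate: the divergence of the Lorenz flow is the same constant everywhere. A minimal numerical check, assuming the classical Lorenz parameters:

```python
import random

# Classical Lorenz parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_divergence(x, y, z, h=1e-5):
    """Numerical divergence of the Lorenz vector field at (x, y, z)."""
    def F(x, y, z):
        return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)
    dFx = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h)
    dFy = (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h)
    dFz = (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

random.seed(0)
# The divergence is constant in phase space, so without stochasticity
# the absolute entropy H decreases linearly in time.
samples = [lorenz_divergence(random.uniform(-20, 20),
                             random.uniform(-20, 20),
                             random.uniform(0, 40)) for _ in range(5)]
print(samples)  # each approximately -(SIGMA + 1 + BETA) = -13.666...
```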

p. 3635-3654
Received: 19 March 2014; in revised form: 6 June 2014 / Accepted: 23 June 2014 / Published: 30 June 2014

Abstract: In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models that can be found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks.
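
The Multinomial Logit family that the model extends assigns each route a choice probability proportional to the exponential of its utility. A minimal sketch with hypothetical route utilities, not the estimated Santiago Metro model:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial Logit: P(i) = exp(V_i) / sum_j exp(V_j).
    Subtracting the max utility keeps exp() numerically stable."""
    vmax = max(utilities)
    expv = [math.exp(v - vmax) for v in utilities]
    total = sum(expv)
    return [e / total for e in expv]

# Hypothetical route utilities (e.g., negative travel times).
routes = {"A": -10.0, "B": -11.0, "C": -12.5}
probs = mnl_probabilities(list(routes.values()))
print(dict(zip(routes, probs)))  # route A is most likely
```

Overlapping routes violate the independence assumption baked into this formula, which is exactly the correlation problem the paper's extension addresses.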

p. 3655-3669
Received: 13 March 2014; in revised form: 9 June 2014 / Accepted: 26 June 2014 / Published: 1 July 2014

Abstract: In this paper, based on a doubly generalized Type II hybrid censored sample, the maximum likelihood estimator (MLE), the approximate MLE and the Bayes estimator for the entropy of the Rayleigh distribution are derived. We compare the entropy estimators' root mean squared error (RMSE), bias and Kullback–Leibler divergence values. The simulation procedure is repeated 10,000 times for sample sizes n = 10, 20, 40 and 100 and various doubly generalized Type II hybrid censoring schemes. Finally, a real data set is analyzed for illustrative purposes.
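
For intuition, the entropy of the Rayleigh distribution has a closed form, and a plug-in estimate follows from the MLE of the scale parameter. A minimal sketch for a complete (uncensored) sample; the paper's estimators additionally handle the doubly generalized censoring:

```python
import math, random

EULER_GAMMA = 0.5772156649015329

def rayleigh_entropy(sigma):
    """Closed-form differential entropy of Rayleigh(sigma):
    H = 1 + ln(sigma / sqrt(2)) + gamma / 2."""
    return 1.0 + math.log(sigma / math.sqrt(2.0)) + EULER_GAMMA / 2.0

def rayleigh_mle_sigma(sample):
    """MLE of sigma from a complete (uncensored) sample."""
    return math.sqrt(sum(x * x for x in sample) / (2.0 * len(sample)))

random.seed(1)
sigma_true = 2.0
# Inverse-CDF sampling: X = sigma * sqrt(-2 ln U).
sample = [sigma_true * math.sqrt(-2.0 * math.log(random.random()))
          for _ in range(10000)]
sigma_hat = rayleigh_mle_sigma(sample)
print(sigma_hat, rayleigh_entropy(sigma_hat))  # plug-in entropy estimate
```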

p. 3670-3688
Received: 25 March 2014; in revised form: 6 June 2014 / Accepted: 20 June 2014 / Published: 1 July 2014

Abstract: The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J can be much larger than I (J ≫ I), due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is no established exact invariance principle that gives rise to the bound information in universal cognitive systems. This limits the direct application of EPI. To narrow the gap between I and J, in this paper, we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér–Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.
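
The connection between a parameter's Fisher information and the inverse variance of an unbiased estimator via the Cramér–Rao bound can be illustrated with the Bernoulli model, whose MLE attains the bound. This is an illustrative sketch, not the paper's multivariate binary setting:

```python
import random

def bernoulli_fisher_information(p):
    """Fisher information of a single Bernoulli(p) observation."""
    return 1.0 / (p * (1.0 - p))

random.seed(2)
p, n, trials = 0.3, 500, 2000
# Monte Carlo variance of the unbiased MLE p_hat over many experiments.
estimates = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(trials)]
mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / trials
# Cramér-Rao: var(p_hat) >= 1 / (n * I(p)); the Bernoulli MLE attains it.
cramer_rao_bound = 1.0 / (n * bernoulli_fisher_information(p))
print(var_est, cramer_rao_bound)
```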

p. 3689-3709
Received: 1 May 2014; in revised form: 26 June 2014 / Accepted: 27 June 2014 / Published: 3 July 2014

Abstract: Blood-oxygen-level-dependent (BOLD) imaging is the most important noninvasive tool for mapping human brain function. It relies on local blood-flow changes controlled by neurovascular coupling effects, usually in response to some cognitive or perceptual task. In this contribution, we ask whether the spatiotemporal dynamics of the BOLD signal can be modeled by a conservation law. In analogy to the description of physical laws, which often can be derived from some underlying conservation law, identification of conservation laws in the brain could lead to new models for the functional organization of the brain. Our model is independent of the nature of the conservation law, but we discuss possible hints and motivations for conservation laws. For example, globally limited blood supply and local competition between brain regions for blood might restrict the large-scale BOLD signal in ways that could be observable. One proposed selective pressure for the evolution of such conservation laws is the closed volume of the skull, which limits the expansion of brain tissue by increases in blood volume. These ideas are demonstrated on a mental motor imagery fMRI experiment, in which functional brain activation was mapped in a group of volunteers imagining themselves swimming. In order to search for local conservation laws during this complex cognitive process, we derived maps of quantities resulting from spatial interaction of the BOLD amplitudes. Specifically, we mapped fluxes and sources of the BOLD signal, terms that would appear in a description by a continuity equation. While this particular analysis of this particular experiment cannot provide final answers, some results seem non-trivial. For example, we found that during the task the group BOLD flux covered more widespread regions than those identified by conventional BOLD mapping and always increased during the task. It is our hope that these results motivate more work towards the search for conservation laws in neuroimaging experiments, or at least towards imaging procedures based on spatial interactions of signals. The payoff could be new models for the dynamics of the healthy brain or more sensitive clinical imaging approaches, respectively.

p. 3710-3731
Received: 1 March 2014; in revised form: 6 June 2014 / Accepted: 12 June 2014 / Published: 3 July 2014

Abstract: Ecosystem entropy production is predicted to increase along ecological succession and approach a state of maximum entropy production, but few studies have bridged the gap between theory and data. Here, we explore radiative entropy production in terrestrial ecosystems using measurements from 64 Free/Fair-Use sites in the FLUXNET database, including a successional chronosequence in the Duke Forest in the southeastern United States. Ecosystem radiative entropy production increased then decreased as succession progressed in the Duke Forest ecosystems, and did not exceed 95% of the calculated empirical maximum entropy production in the FLUXNET study sites. Forest vegetation, especially evergreen needleleaf forests characterized by low shortwave albedo and close coupling to the atmosphere, had a significantly higher ratio of radiative entropy production to the empirical maximum entropy production than did croplands and grasslands. Our results demonstrate that ecosystems approach, but do not reach, maximum entropy production and that the relationship between succession and entropy production depends on vegetation characteristics. Future studies should investigate how natural disturbances and anthropogenic management—especially the tendency to shift vegetation to an earlier successional state—alter energy flux and entropy production at the surface-atmosphere interface.

p. 3732-3753
Received: 17 May 2014; in revised form: 19 June 2014 / Accepted: 30 June 2014 / Published: 3 July 2014

Abstract: We consider the concept of generalized Kolmogorov–Sinai entropy, where instead of the Shannon entropy function, we consider an arbitrary concave function defined on the unit interval, vanishing in the origin. Under mild assumptions on this function, we show that this isomorphism invariant is linearly dependent on the Kolmogorov–Sinai entropy.

p. 3769-3792
Received: 14 May 2014; in revised form: 30 June 2014 / Accepted: 1 July 2014 / Published: 7 July 2014

Abstract: Fundamental conservation equations for mass, momentum and energy of chemical species can be combined with thermodynamic relations to obtain secondary forms, such as conservation equations for phases, an internal energy balance and a mechanical energy balance. In fact, the forms of secondary equations are infinite and depend on the criteria used in determining which species-based equations to employ and how to combine them. If one uses these secondary forms in developing an entropy inequality to be used in formulating closure relations, care must be employed to ensure that the appropriate equations are used, or problematic results can develop for multispecies systems. We show here that the use of the fundamental forms minimizes the chance of an erroneous formulation in terms of secondary forms and also provides guidance as to which secondary forms should be used if one uses them as a starting point.

p. 3793-3807
Received: 18 April 2014; in revised form: 10 June 2014 / Accepted: 1 July 2014 / Published: 9 July 2014

Abstract: The evolution of a microcanonical statistical ensemble of states of isolated systems from order to disorder as determined by increasing entropy is compared to an alternative evolution that is determined by mixing character. The fact that the partitions of an integer N are in one-to-one correspondence with macrostates for N distinguishable objects is noted. Orders for integer partitions are given, including the original order by Young and the Boltzmann order by entropy. Mixing character (represented by Young diagrams) is seen to be a partially ordered quality rather than a quantity (see Ruch, 1975). The majorization partial order is reviewed, as is its Hasse diagram representation known as the Young Diagram Lattice (YDL). Two lattices that show allowed transitions between macrostates are obtained from the YDL: we term these the mixing lattice and the diversity lattice. We study the dynamics (time evolution) on the two lattices, namely the sequence of steps on the lattices (i.e., the path or trajectory) that leads from low entropy, less mixed states to high entropy, highly mixed states. These paths are sequences of macrostates with monotonically increasing entropy. The distributions of path lengths on the two lattices are obtained via Monte Carlo methods, and surprisingly both distributions appear Gaussian. However, the width of the path-length distribution for the diversity lattice is the square root of that for the mixing lattice, suggesting a qualitative difference in their temporal evolution. Another surprising result is that some macrostates occur in many paths while others do not. The evolution at low entropy and at high entropy is quite simple, but at intermediate entropies, the number of possible evolutionary paths is extremely large (due to the extensive branching of the lattices). A quantitative complexity measure associated with incomparability of macrostates in the mixing partial order is proposed, complementing Kolmogorov complexity and Shannon entropy.
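
The majorization partial order underlying the mixing lattice can be computed directly on integer partitions; incomparable pairs are what make mixing character a partially ordered quality rather than a quantity. A minimal sketch:

```python
def majorizes(a, b):
    """True if partition a majorizes partition b (equal totals):
    every prefix sum of a, sorted decreasingly, dominates that of b."""
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    length = max(len(a), len(b))
    a += [0] * (length - len(a))      # pad with zero parts
    b += [0] * (length - len(b))
    if sum(a) != sum(b):
        raise ValueError("partitions must have the same total")
    sa = sb = 0
    for x, y in zip(a, b):
        sa, sb = sa + x, sb + y
        if sa < sb:
            return False
    return True

print(majorizes([3, 1], [2, 2]))   # True: [2, 2] is "more mixed"
print(majorizes([2, 2], [3, 1]))   # False
# An incomparable pair: neither macrostate majorizes the other.
print(majorizes([4, 1, 1], [3, 3]), majorizes([3, 3], [4, 1, 1]))
```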

p. 3832-3847
Received: 14 April 2014; in revised form: 12 June 2014 / Accepted: 7 July 2014 / Published: 14 July 2014

Abstract: The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how it provides a computationally fast solution to quantify the uncertainty in the model specification. The results show that the method can exactly recover the model structure, gives reasonable point estimates and is computationally very efficient. The potential problems of the method in quantifying the parameter uncertainty are discussed.

p. 3848-3865
Received: 5 April 2014; in revised form: 16 June 2014 / Accepted: 30 June 2014 / Published: 14 July 2014

Abstract: We propose a new geometric verification method for image retrieval, Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV), which aims to filter redundant matches while retaining information about the retrieval target even when it lies partly outside the salient regions, using hierarchical saliency and fully exploiting the geometric context of all visual words in an image. First, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with salient tags. The tags added to the feature descriptors are used to compute the saliency matching score, and the scores are regarded as the weight information in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with weights to generate the final ranking list. Experimental results show that Hierarchical Geometry Verification based on Maximum Entropy Saliency not only improves retrieval accuracy, but also reduces the time consumption of the full retrieval.

p. 3866-3877
Received: 8 March 2014; in revised form: 22 June 2014 / Accepted: 3 July 2014 / Published: 14 July 2014

Abstract: We study a crowdsourcing-based diagnosis algorithm, motivated by the observation that what is currently scarce is not medical staff but high-level experts. Our approach is to make use of general practitioners' efforts: for every patient whose illness cannot be judged definitely, we arrange for them to be diagnosed multiple times by different doctors, and we collect all the diagnosis results to derive the final judgement. Our inference model is based on the statistical consistency of the diagnosis data. To evaluate the proposed model, we conduct experiments on both synthetic and real data; the results show that it outperforms the benchmarks.
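
The simplest baseline for aggregating repeated diagnoses is a majority vote; the paper's consistency-based inference model is more elaborate, but the sketch below (with hypothetical labels) shows the aggregation step:

```python
from collections import Counter

def aggregate_diagnoses(diagnoses):
    """Majority vote over repeated diagnoses for one patient.
    Ties are broken by the alphabetically smallest label."""
    counts = Counter(diagnoses)
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[0][0]

# Hypothetical repeated diagnoses for three patients.
patients = {
    "p1": ["flu", "flu", "cold"],
    "p2": ["migraine", "tension", "migraine", "migraine"],
    "p3": ["cold", "flu"],              # tie, broken alphabetically
}
print({pid: aggregate_diagnoses(d) for pid, d in patients.items()})
```

A consistency-based model would additionally weight each doctor by their estimated reliability rather than counting all votes equally.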

p. 3878-3888
Received: 15 May 2014; in revised form: 25 June 2014 / Accepted: 11 July 2014 / Published: 15 July 2014

Abstract: The von Neumann entropy $S(\hat{D})$ generates, in the space of quantum density matrices $\hat{D}$, the Riemannian metric $ds^2 = -d^2 S(\hat{D})$, which is physically founded and which characterises the amount of quantum information lost by mixing $\hat{D}$ and $\hat{D} + d\hat{D}$. A rich geometric structure is thereby implemented in quantum mechanics. It includes a canonical mapping between the spaces of states and of observables, which involves the Legendre transform of $S(\hat{D})$. The Kubo scalar product is recovered within the space of observables. Applications are given to equilibrium and non-equilibrium quantum statistical mechanics. There the formalism is specialised to the relevant space of observables and to the associated reduced states issued from the maximum entropy criterion, which result from the exact states through an orthogonal projection. Von Neumann's entropy specialises into a relevant entropy. Comparison is made with other metrics. The Riemannian properties of the metric $ds^2 = -d^2 S(\hat{D})$ are derived. The curvature arises from the non-Abelian nature of quantum mechanics; its general expression and its explicit form for qubits are given, as well as geodesics.
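
The von Neumann entropy at the root of this construction is computed from the eigenvalues of the density matrix, $S(\hat{D}) = -\mathrm{Tr}(\hat{D}\ln\hat{D})$. A minimal sketch for a single qubit, where the 2x2 eigenvalues have a closed form:

```python
import math

def qubit_von_neumann_entropy(a, b, d):
    """Von Neumann entropy S(D) = -Tr(D ln D) for the Hermitian 2x2
    density matrix [[a, b], [conj(b), d]] with a + d = 1."""
    tr, det = a + d, a * d - abs(b) ** 2
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    return -sum(l * math.log(l) for l in eigs if l > 1e-15)

print(qubit_von_neumann_entropy(0.5, 0.0, 0.5))  # maximally mixed: ln 2
print(qubit_von_neumann_entropy(1.0, 0.0, 0.0))  # pure state: 0
print(qubit_von_neumann_entropy(0.5, 0.5, 0.5))  # pure superposition: 0
```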

p. 3889-3902
Received: 4 March 2014; in revised form: 23 June 2014 / Accepted: 7 July 2014 / Published: 15 July 2014

Abstract: We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, network connections and all coupling functions, as well as their weights. The workings and efficiency of the method are illustrated using networks of non-identical FitzHugh–Nagumo neurons with randomly distributed coupling weights.

p. 3939-4003
Received: 6 May 2014; in revised form: 19 June 2014 / Accepted: 3 July 2014 / Published: 17 July 2014

Abstract: Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. 
The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a theoretical foundation for general anesthesia using the network properties of the brain. Finally, we present some key emergent properties from the fields of thermodynamics and electromagnetic field theory to qualitatively explain the underlying neuronal mechanisms of action for anesthesia and consciousness.

p. 4004-4014
Received: 4 May 2014; in revised form: 10 July 2014 / Accepted: 15 July 2014 / Published: 17 July 2014

Abstract: The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
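
A standard illustration of maximum entropy updating is Jaynes's dice problem (not one of this paper's specific examples): among distributions on {1,…,6} with a prescribed mean, the maximum entropy one is exponential in the face value, with the multiplier found numerically:

```python
import math

def maxent_die(target_mean, faces=6):
    """Max-entropy distribution on {1..faces} with E[X] = target_mean.
    p_i is proportional to exp(lam * i); lam is found by bisection,
    since the constrained mean is increasing in lam."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        return sum(i * wi for i, wi in enumerate(w, start=1)) / sum(w)
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print(p)  # weights increase monotonically toward the higher faces
```

A Bayesian update from a concrete prior over loading mechanisms can, as the paper argues, disagree with this distribution.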

p. 4015-4031
Received: 17 April 2014; in revised form: 23 June 2014 / Accepted: 9 July 2014 / Published: 17 July 2014

Abstract: The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if $\{p_\theta;\ \theta \in \Theta\}$ is any parametrization of the univariate normal model, the paper considers prior distributions $G(\bar{\theta}, \gamma)$ with hyperparameters $\bar{\theta} \in \Theta$ and $\gamma > 0$, whose density with respect to Riemannian volume is proportional to $\exp(-d^2(\theta, \bar{\theta})/2\gamma^2)$, where $d^2(\theta, \bar{\theta})$ is the square of Rao's Riemannian distance. The distributions $G(\bar{\theta}, \gamma)$ are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution $G(\bar{\theta}, \gamma)$ is that it gives a geometric representation of a class or cluster of univariate normal populations. Indeed, $G(\bar{\theta}, \gamma)$ has a unique mode $\bar{\theta}$ (precisely, $\bar{\theta}$ is the unique Riemannian center of mass of $G(\bar{\theta}, \gamma)$, as shown in the paper), and its dispersion away from $\bar{\theta}$ is given by $\gamma$. Therefore, one thinks of members of the class represented by $G(\bar{\theta}, \gamma)$ as being centered around $\bar{\theta}$ and lying within a typical distance determined by $\gamma$. The paper defines rigorously the Gaussian distributions $G(\bar{\theta}, \gamma)$ and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions $G(\bar{\theta}, \gamma)$ can be used as prior distributions for Bayesian classification of large univariate normal populations.
In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.

p. 4032-4043
Received: 19 May 2014; in revised form: 2 July 2014 / Accepted: 8 July 2014 / Published: 17 July 2014

Abstract: Using 1000 successive points of a pulse wave velocity (PWV) series, we previously distinguished healthy from diabetic subjects with multi-scale entropy (MSE) using a scale factor of 10. One major limitation is the long time for data acquisition (i.e., 20 min). This study aimed at validating the sensitivity of a novel method, short-time MSE (sMSE), that utilizes a substantially smaller sample size (i.e., 600 consecutive points) in differentiating the complexity of PWV signals, both in simulation and in human subjects divided into four groups: healthy young (Group 1; n = 24) and middle-aged (Group 2; n = 30) subjects without known cardiovascular disease, and middle-aged individuals with well-controlled (Group 3; n = 18) and poorly controlled (Group 4; n = 22) type 2 diabetes mellitus. The results demonstrated that although conventional MSE could differentiate the subjects using 1000 consecutive PWV series points, sensitivity was lost using only 600 points. The simulation study revealed consistent results. By contrast, the novel sMSE method produced significant differences in entropy in both the simulation and the test subjects. In conclusion, this study demonstrated that using the novel sMSE approach for PWV analysis, the time for data acquisition can be substantially reduced to that required for 600 cardiac cycles (~10 min), with remarkable preservation of sensitivity in differentiating among healthy, aged and diabetic populations.
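
Multiscale entropy rests on two steps: coarse-graining the series and computing the sample entropy at each scale. A simplified sketch (the tolerance r is recomputed per scale here rather than fixed from the original series, and this is not the authors' sMSE code):

```python
import math, random

def coarse_grain(series, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(series, m=2, r_factor=0.15):
    """SampEn(m, r): -ln of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1."""
    n = len(series)
    mean = sum(series) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    r = r_factor * sd
    def count_matches(length):
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if all(abs(series[i + k] - series[j + k]) <= r
                       for k in range(length)):
                    c += 1
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(3)
noise = [random.gauss(0, 1) for _ in range(600)]   # 600 points, as in sMSE
sampens = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2, 3)]
print([round(v, 3) for v in sampens])
```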

p. 4044-4059
Received: 29 May 2014; in revised form: 27 June 2014 / Accepted: 8 July 2014 / Published: 17 July 2014

Abstract: The routine definitions of Shannon entropy for discrete and continuous probability laws show inconsistencies that make them mutually incoherent. We propose a few possible modifications of these quantities so that: (1) they no longer show incongruities; and (2) they pass one into the other in a suitable limit as the result of a renormalization. The properties of the new quantities would differ slightly from those of the usual entropies in a few other respects.
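
The inconsistency referred to above can be illustrated with the standard relation H(X_Δ) ≈ h(X) − log Δ between the discrete entropy of a quantized variable and the differential entropy h as the bin width Δ shrinks: the discrete entropy diverges while h stays fixed (and can even be negative). The snippet below demonstrates this known relation for U(0, 1), where h = 0; it is an illustration, not the paper's proposed renormalization.

```python
import math

def discrete_entropy(p):
    """Shannon entropy (in nats) of a discrete probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Quantize U(0, 1) into n equal bins of width 1/n.  Its differential
# entropy is h = 0, yet the discrete entropy of the quantized law is
# log(n): it grows without bound as the bin width shrinks.
for n in (4, 16, 64):
    p = [1.0 / n] * n
    assert abs(discrete_entropy(p) - math.log(n)) < 1e-12
```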

p. 4088-4100
Received: 30 April 2014; in revised form: 30 June 2014 / Accepted: 14 July 2014 / Published: 18 July 2014

Abstract: One-dimensional exponential families on finite sample spaces are studied using the geometry of the simplex Δ°_{n-1} and that of a transformation V_{n-1} of its interior. This transformation is the natural parameter space associated with the family of multinomial distributions. The space V_{n-1} is partitioned into cones that are used to find one-dimensional families with desirable properties for modeling and inference. These properties include the availability of uniformly most powerful tests and estimators that exhibit optimal properties in terms of variability and unbiasedness.

p. 4101-4120
Received: 2 June 2014; in revised form: 14 July 2014 / Accepted: 16 July 2014 / Published: 21 July 2014

Abstract: The spectroscopic features of the multilayer honeycomb model of structured water are analyzed on theoretical grounds, using high-level ab initio quantum-chemical methodologies, through model systems built from two fused hexagons of water molecules: the monomeric system [H_{19}O_{10}], in different oxidation states (anionic and neutral species). The findings do not support anionic species as the origin of the spectroscopic fingerprints observed experimentally for structured water. In this context, hexameric anions can merely be seen as a source of hydrated hydroxyl anions and cationic species. The results for the neutral dimer are, however, fully consistent with the experimental evidence related to both absorption and fluorescence spectra. The neutral π-stacked dimer [H_{38}O_{20}] can be assigned as the species mainly responsible for the recorded absorption and fluorescence spectra, with computed band maxima at 271 nm (4.58 eV) and 441 nm (2.81 eV), respectively. The important role of triplet excited states is finally discussed. The most intense vertical triplet→triplet transition is predicted at 318 nm (3.90 eV).

p. 4121-4131
Received: 21 May 2014; in revised form: 3 July 2014 / Accepted: 7 July 2014 / Published: 21 July 2014

Abstract: The objective of this study is to numerically investigate the temperature and heat transfer characteristics of the voice coil of a woofer, with and without bobbins, using thermal equivalent heat conduction models. The temperature and heat transfer characteristics of the main components of the woofer were analyzed at input powers ranging from 5 W to 60 W. The numerical results for the voice coil showed good agreement, within ±1%, with the data of Odenbach (2003). The temperatures of the voice coil and its units in the woofer without the bobbin were, on average, 6.1% and 5.0% lower, respectively, than those of the woofer with the bobbin. However, at an input power of 30 W for the voice coil, the temperatures of the main components of the woofer without the bobbin were on average 40.0% higher than those of the woofer obtained by Lee et al. (2013).

p. 4132-4167
Received: 28 March 2014; in revised form: 24 June 2014 / Accepted: 14 July 2014 / Published: 23 July 2014

Abstract: We consider the graph representation of a stochastic model with n binary variables and develop an information-theoretical framework to measure the degree of statistical association between subsystems, as well as that represented by each edge of the graph. In addition, we consider novel measures of complexity with respect to the decomposability of the system, obtained by introducing the geometric product of the Kullback–Leibler (KL-) divergence. The novel complexity measures satisfy the boundary conditions of vanishing in the limits of completely random and completely ordered states, and also in the presence of an independent subsystem of any size. Such complexity measures, based on geometric means, are relevant to the heterogeneity of dependencies between subsystems and to the amount of information propagation shared across the entire system.
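
The two basic ingredients named above, the KL divergence and the geometric mean used to aggregate edge-wise divergences, can be sketched as follows. The paper's specific construction of the complexity measures is not given in the abstract and is not reproduced here; this is only the building blocks.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for discrete distributions.

    Assumes q[i] > 0 wherever p[i] > 0 (absolute continuity).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def geometric_mean(values):
    """Geometric mean of positive values, a natural way to aggregate
    per-edge divergences so that a single vanishing term drives the
    aggregate to zero (cf. the boundary conditions above)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

Note the contrast with the arithmetic mean: if any one edge carries zero divergence (an independent subsystem), the geometric mean vanishes, which matches the stated boundary behavior.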

p. 4168-4184
Received: 27 May 2014; in revised form: 24 June 2014 / Accepted: 7 July 2014 / Published: 23 July 2014

Abstract: The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. On the other hand, universal compression describes data under the supposition that the underlying distribution is unknown but belongs to a known set P of distributions. However, since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (i.i.d.) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than the redundancy above), then it is known that describing length-n i.i.d. strings incurs only vanishing (to zero) redundancy per symbol as n increases. By contrast, we show that it is possible for the description of a single symbol from an unknown distribution in P to incur finite redundancy, while the description of length-n i.i.d. strings incurs a constant (> 0) redundancy per symbol encoded. We then show a sufficient condition on single-letter marginals such that length-n i.i.d. samples incur vanishing redundancy per symbol encoded.
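
A concrete example of a universal description of positive integers is the classic Elias gamma code, which spends about 2⌊log₂ n⌋ + 1 bits on n regardless of the underlying distribution. It is shown here only to make the notion of "describing Z+ without knowing the distribution" tangible; it is not the coding scheme analyzed in the paper.

```python
def elias_gamma(n):
    """Elias gamma code for n >= 1: (bit-length - 1) zeros, then n in binary."""
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    b = bin(n)[2:]                       # binary representation without '0b'
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits):
    """Inverse: count k leading zeros, then read the next k + 1 bits as binary."""
    k = 0
    while bits[k] == "0":
        k += 1
    return int(bits[k:2 * k + 1], 2)
```

For instance, 5 = 101₂ encodes as "00101" (5 bits = 2·2 + 1), and every codeword decodes back to its integer, so the code is uniquely decodable over all of Z+.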

Review
p. 3754-3768
Received: 28 April 2014; in revised form: 28 May 2014 / Accepted: 27 June 2014 / Published: 7 July 2014

Abstract: Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue to investigate and design better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.

p. 3813-3831
Received: 11 April 2014; in revised form: 27 June 2014 / Accepted: 1 July 2014 / Published: 11 July 2014

Abstract: Magnetic materials with strong spin-lattice coupling are powerful candidates for multifunctional applications because of their multiferroic, magnetocaloric (MCE), magnetostrictive and magnetoresistive effects. In these materials there is a strong competition between two states (where a state comprises an atomic structure and an associated magnetic structure) that leads to phase transitions under subtle variations of external parameters, such as temperature, magnetic field and hydrostatic pressure. In this review, a general method combining detailed magnetic measurements/analysis and first-principles calculations to estimate the phase transition temperature is presented with the help of two examples (Gd_{5}Si_{2}Ge_{2} and Tb_{5}Si_{2}Ge_{2}). It is demonstrated that such a method is an important tool for a deeper understanding of the (de)coupled nature of each phase transition in materials belonging to the R_{5}(Si,Ge)_{4} family and can most likely be applied to other systems. The exotic Griffiths-like phase in the framework of the R_{5}(Si_{x}Ge_{1-x})_{4} compounds is reviewed, and its generalization as a requisite for systems with strong phase competition that present large magneto-responsive properties is proposed.

p. 3903-3938
Received: 11 February 2014; in revised form: 4 June 2014 / Accepted: 10 June 2014 / Published: 16 July 2014

Abstract: The present paper is a review of several papers from the Proceedings of the Joint European Thermodynamics Conference, held in Brescia, Italy, 1–5 July 2013, namely papers introduced by their authors at Panel I of the conference. Panel I was devoted to applications of the Second Law of Thermodynamics to social issues: economics, ecology, sustainability, and energy policy. The concept called Available Energy, which goes back to mid-nineteenth-century work of Kelvin, Rankine, Maxwell and Gibbs, is relevant to all of the papers. Various names have been applied to the concept when interactions between the system of interest and an environment are involved; today, the name exergy is generally accepted. The scope of the papers being reviewed is wide, and they complement one another well.

Other
p. 3808-3812
Received: 16 April 2014; in revised form: 3 July 2014 / Accepted: 7 July 2014 / Published: 10 July 2014

Abstract: In an earlier paper in Entropy [1], we hypothesized that the entropy generation rate is the driving force for boundary layer transition from laminar to turbulent flow. Subsequently, with our colleagues, we have examined the prediction of entropy generation during such transitions [2,3]. We found that reasonable predictions for engineering purposes could be obtained for flows with negligible streamwise pressure gradients by adapting the linear combination model of Emmons [4]. A question then arises: will the Emmons approach be useful for boundary layer transition with significant streamwise pressure gradients, as studied by Nolan and Zaki [5]? In our implementation, the intermittency is calculated by comparison to skin friction correlations for laminar and turbulent boundary layers and is then applied with comparable correlations for the energy dissipation coefficient (i.e., the non-dimensional integral entropy generation rate). In the case of negligible pressure gradients, the Blasius theory provides the necessary laminar correlations.
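
The intermittency step described above can be sketched as an Emmons-style linear combination: measured skin friction is written as cf = (1 − γ)·cf_lam + γ·cf_turb and solved for γ. The Blasius laminar correlation matches the abstract; the turbulent correlation below is a standard textbook flat-plate choice, assumed here for illustration rather than taken from the paper.

```python
import math

def cf_laminar(re_x):
    """Blasius flat-plate laminar skin friction coefficient, 0.664 / sqrt(Re_x)."""
    return 0.664 / math.sqrt(re_x)

def cf_turbulent(re_x):
    """A common flat-plate turbulent correlation, 0.0592 * Re_x^(-1/5)
    (illustrative choice, not necessarily the one used by the authors)."""
    return 0.0592 * re_x ** -0.2

def intermittency(cf_measured, re_x):
    """Solve cf = (1 - gamma) * cf_lam + gamma * cf_turb for gamma, clipped to [0, 1]."""
    lam, turb = cf_laminar(re_x), cf_turbulent(re_x)
    gamma = (cf_measured - lam) / (turb - lam)
    return min(1.0, max(0.0, gamma))
```

By construction, γ = 0 recovers the fully laminar correlation and γ = 1 the fully turbulent one, with intermediate values tracking the transitional region.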
