Research
p. 721–752
Received: 8 November 2012 / Revised: 15 February 2013 / Accepted: 19 February 2013 / Published: 25 February 2013
Abstract: The Minimum Mutual Information (MinMI) Principle provides the least committed, maximum joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets T_cr comprising m_cr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. Asymptotic formulas in N are given for the distribution of the cross expectations' estimation errors, and for the MinMI estimation bias, its variance and its distribution. A growing T_cr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two nested sets T_cr1 ⊂ T_cr2 (with numbers of constraints m_cr1 < m_cr2) is the test difference δH = H_max1,N − H_max2,N ≥ 0 between the two respective estimated MEs. Asymptotically, δH follows a chi-squared distribution (1/(2N)) χ²(m_cr2 − m_cr1), whose upper quantiles determine whether the constraints in T_cr2 \ T_cr1 explain significant extra MI. As an example, we set the marginals to be normally distributed (Gaussian) and build a sequence of MI bounds associated with successive nonlinear correlations due to joint non-Gaussianity. Noting that in real-world situations the available sample sizes can be rather low, the relationship between MinMI bias, probability density overfitting and outliers is demonstrated for undersampled data.
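The significance test sketched in this abstract reduces to a standard chi-squared quantile comparison: the scaled increment 2NδH is referred to a χ² distribution with m_cr2 − m_cr1 degrees of freedom. A minimal illustration (the function name and scipy-based implementation are this editor's sketch, not the paper's code):

```python
from scipy.stats import chi2

def minmi_increment_test(delta_H, N, m_cr1, m_cr2, alpha=0.05):
    """Test whether the extra constraints in T_cr2 \\ T_cr1 explain
    significant additional mutual information.

    Under the null, delta_H ~ (1/(2N)) * chi2(m_cr2 - m_cr1), so the
    scaled statistic 2*N*delta_H is compared with chi-squared quantiles.
    """
    stat = 2.0 * N * delta_H
    df = m_cr2 - m_cr1
    p_value = chi2.sf(stat, df)          # upper-tail probability
    return p_value, p_value < alpha
```

A zero increment gives p = 1 (no extra information), while a large scaled increment relative to the χ² upper quantile flags significant extra MI.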
p. 753–766
Received: 21 January 2013 / Revised: 19 February 2013 / Accepted: 21 February 2013 / Published: 25 February 2013
Abstract: Analysis of gait dynamics in children may help understand the development of neuromuscular control and maturation of locomotor function. This paper applied the nonparametric Parzen-window estimation method to establish the probability density function (PDF) models for the stride interval time series of 50 children (25 boys and 25 girls). Four statistical parameters, namely averaged stride interval (ASI), variation of stride interval (VSI), PDF skewness (SK), and PDF kurtosis (KU), were computed with the Parzen-window PDFs to study the maturation of stride interval in children. By analyzing the results of the children in three age groups (aged 3–5 years, 6–8 years, and 10–14 years), we summarize the key findings of the present study as follows. (1) The gait cycle duration, in terms of ASI, increases until 14 years of age. On the other hand, the gait variability, in terms of VSI, decreases rapidly until 8 years of age, and then continues to decrease at a slower rate. (2) The SK values of both the histograms and Parzen-window PDFs for all three age groups are positive, which indicates an imbalance in the stride interval distribution within an age group. However, such an imbalance is ameliorated as the children grow up. (3) The KU values of both the histograms and Parzen-window PDFs decrease with body growth in children, which suggests that musculoskeletal growth enables the children to modulate gait cadence with ease. (4) The SK and KU results also demonstrate the superiority of the Parzen-window PDF estimation method over Gaussian distribution modeling for the study of gait maturation in children.
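The four statistics can be computed directly from a Gaussian-kernel (Parzen-window) PDF estimate. A minimal sketch, with the caveat that VSI is taken here as the standard deviation of the estimated PDF, whereas the paper may define it differently (e.g., as a coefficient of variation):

```python
import numpy as np
from scipy.stats import gaussian_kde

def stride_interval_stats(strides, grid_points=1024):
    """ASI, VSI, SK, KU from a Parzen-window (Gaussian-kernel) PDF estimate."""
    strides = np.asarray(strides, dtype=float)
    kde = gaussian_kde(strides)                # Parzen-window estimator
    lo = strides.min() - 3.0 * strides.std()
    hi = strides.max() + 3.0 * strides.std()
    x = np.linspace(lo, hi, grid_points)
    dx = x[1] - x[0]
    pdf = kde(x)
    pdf = pdf / (pdf.sum() * dx)               # renormalise on the grid
    asi = np.sum(x * pdf) * dx                 # mean stride interval
    var = np.sum((x - asi) ** 2 * pdf) * dx
    vsi = np.sqrt(var)                         # spread of stride interval
    sk = np.sum((x - asi) ** 3 * pdf) * dx / var ** 1.5
    ku = np.sum((x - asi) ** 4 * pdf) * dx / var ** 2
    return asi, vsi, sk, ku
```

For a roughly symmetric stride series, SK is near zero and KU near 3; positive SK, as reported in the abstract, indicates a right-skewed stride distribution.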
p. 767–788
Received: 26 January 2013 / Revised: 13 February 2013 / Accepted: 19 February 2013 / Published: 25 February 2013
Abstract: A method is shown for computing transfer entropy over multiple time lags for coupled autoregressive processes, using formulas for the differential entropy of multivariate Gaussian processes. Two examples are provided: (1) a first-order filtered noise process whose state is measured with additive noise, and (2) two first-order coupled processes, each driven by white process noise. For the first example, we found that increasing the first-order AR coefficient, while keeping the correlation coefficient between the filtered and measured processes fixed, increased the transfer entropy, since the entropy of the measured process itself increased. For the second example, the correlation coefficient is minimized when the process noise variances match. Matching these variances results in minimum information flow, expressed as the sum of the transfer entropies in both directions. Without a match, the transfer entropy is larger in the direction away from the process with the larger process noise. With the process noise variances fixed, the transfer entropies in both directions increase with the coupling strength. Finally, we note that the method can be employed generally to compute other information-theoretic quantities as well.
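For jointly Gaussian variables, transfer entropy reduces to the log-ratio of two conditional variances, each obtainable from covariance matrices. A minimal single-lag sketch under a stationarity assumption (the paper's multi-lag formulation is more general):

```python
import numpy as np

def _cond_var(target, conditioners):
    """Var(target | conditioners) for jointly Gaussian variables."""
    M = np.column_stack([target] + conditioners)
    S = np.cov(M, rowvar=False)
    s_tt, s_tz, S_zz = S[0, 0], S[0, 1:], S[1:, 1:]
    return s_tt - s_tz @ np.linalg.solve(S_zz, s_tz)

def gaussian_transfer_entropy(x, y):
    """Single-lag transfer entropy X -> Y in nats, Gaussian assumption:
    TE = 0.5 * ln( Var(Y_{t+1} | Y_t) / Var(Y_{t+1} | Y_t, X_t) )."""
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    return 0.5 * np.log(_cond_var(yf, [yp]) / _cond_var(yf, [yp, xp]))
```

With a unidirectional coupling x → y, the estimate is positive in the driving direction and near zero in the reverse direction, consistent with the asymmetry described in the abstract.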
p. 926–942
Received: 4 January 2013 / Revised: 8 February 2013 / Accepted: 20 February 2013 / Published: 27 February 2013
Abstract: Providing accurate load forecasts to electric utility corporations is essential in order to reduce their operational costs and increase profits. Hence, training set selection is an important preprocessing step that has to be considered in practice in order to increase the accuracy of load forecasts. The use of mutual information (MI) has recently been proposed in regression tasks, mostly for feature selection and for identifying the real instances in training sets that contain noise and outliers. This paper proposes a methodology for training set selection in a least-squares support vector machine (LS-SVM) load forecasting model. A new application of MI is presented for selecting a training set based on the MI computed between initial training set instances and testing set instances. Accordingly, several LS-SVM models were trained, based on the proposed methodology, for hourly prediction of electric load one day ahead. The results obtained on a real-world data set indicate that the proposed method increases the accuracy of load forecasting and reduces the size of the initial training set needed for model training.
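A common starting point for MI-based selection is a plug-in estimate of the mutual information between two continuous variables. The histogram-based sketch below is illustrative only; the paper's exact instance-wise MI computation between training and testing instances is not reproduced here:

```python
import numpy as np

def mutual_information_binned(x, y, bins=16):
    """Plug-in mutual information estimate (nats) between two continuous
    variables, from a 2-D histogram of their joint sample."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0) cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Strongly dependent pairs score high, while independent pairs score near zero (up to a small positive plug-in bias that shrinks with sample size).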
p. 943–959
Received: 2 December 2012 / Revised: 21 January 2013 / Accepted: 22 February 2013 / Published: 27 February 2013
Abstract: Generally, controller design should aim to narrow the shape of the probability density function of the tracking error. A small information entropy value corresponds to a narrow distribution function, which means that the uncertainty of the related random variable is small. In this paper, information entropy is introduced into the field of control performance assessment (CPA). For the unknown-time-delay case, the minimum information entropy (MIE) benchmark is presented, and an MIE-based performance index is defined. For the known-time-delay case, a tight upper bound on the MIE is derived and adopted as a performance benchmark to assess stochastic control performance. On this basis, control performance assessment procedures are developed for both steady and transient processes. Simulation tests and an industrial case study of the main steam pressure system of a 1,000 MW power unit verify the effectiveness of the proposed procedures.
p. 960–971
Received: 12 December 2012 / Revised: 15 January 2013 / Accepted: 21 February 2013 / Published: 27 February 2013
Abstract: We show that the thermodynamics of ideal gases may be derived solely from the Democritean concept of corpuscles moving in vacuum plus a principle of simplicity, namely that these laws are independent of the laws of motion, aside from the law of energy conservation. Only a single corpuscle in contact with a heat bath, submitted to a z- and t-invariant force, is considered. Most of the end results are known, but the method appears to be novel. Since the mathematics is elementary, the present paper should facilitate the understanding of the ideal gas law and of classical thermodynamics, even though not-usually-taught concepts are introduced.
p. 972–987
Received: 15 January 2013 / Revised: 25 February 2013 / Accepted: 27 February 2013 / Published: 5 March 2013
Abstract: Boundary line models for N₂O emissions from agricultural soils provide a means of estimating emissions within defined ranges. Boundary line models partition a two-dimensional region of parameter space into sub-regions by means of thresholds based on relationships between N₂O emissions and explanatory variables, typically using soil data available from laboratory or field studies. Such models are intermediate in complexity between the use of IPCC emission factors and complex process-based models. Model calibration involves characterizing the extent to which observed data are correctly forecast. Writing the numerical results from graphical two-threshold boundary line models as 3×3 prediction-realization tables facilitates calculation of expected mutual information, a measure of the amount of information about the observations contained in the forecasts. Whereas mutual information characterizes the performance of a forecaster averaged over all forecast categories, specific information and relative entropy both characterize aspects of the amount of information contained in particular forecasts. We calculate and interpret these information quantities for experimental N₂O emissions data.
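The expected mutual information of a prediction-realization table is the plug-in mutual information of the joint forecast/observation frequencies. A minimal sketch (table values below are illustrative, not the paper's data):

```python
import numpy as np

def expected_mutual_information(table):
    """Expected mutual information (nats) of a k x k prediction-realization
    table of counts (rows: forecast categories, columns: observed categories)."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                       # joint relative frequencies
    pf = p.sum(axis=1, keepdims=True)     # forecast marginal
    po = p.sum(axis=0, keepdims=True)     # observation marginal
    nz = p > 0                            # skip empty cells
    return float(np.sum(p[nz] * np.log(p[nz] / (pf @ po)[nz])))
```

A perfectly diagonal 3×3 table with equal categories attains the maximum ln 3 ≈ 1.10 nats, while a table whose cells factor into its marginals carries zero information.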
p. 988–998
Received: 24 December 2012 / Revised: 18 February 2013 / Accepted: 1 March 2013 / Published: 6 March 2013
Abstract: The velocity distribution in an open channel flow can be very useful for modelling many hydraulic phenomena. Among others, several 1D models based on the concept of entropy are available in the literature, which allow the velocity distribution to be estimated by measuring velocities at only a few points. Nevertheless, since 1D models often have limited practical use, a 2D entropy-based model was recently developed. The model provides a reliable estimate of the velocity distribution for open channel flow with a rectangular cross section, provided the maximum velocity and the average velocity are known. In this paper, results from the proposed model are compared with velocities measured in laboratory experiments. Calculated values are also compared with results from a 2D model available in the literature; the proposed model proves easier to use and gives a more reliable estimate of the velocity profile.
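The classical 1D entropy-based velocity law (Chiu's model) that such 2D models generalize can be sketched as follows; M is the entropy parameter fixed by the mean-to-maximum velocity ratio. This is a sketch of the well-known 1D law, not the paper's 2D model:

```python
import numpy as np
from scipy.optimize import brentq

def entropy_velocity_profile(u_mean, u_max, xi):
    """Chiu-type 1D entropy velocity law
        u(xi) = (u_max / M) * ln(1 + (e^M - 1) * xi),
    with M solved from u_mean / u_max = e^M / (e^M - 1) - 1/M.
    xi is the normalized position (0 at the bed, 1 at the
    maximum-velocity point); u_mean/u_max must lie in (0.5, 1)."""
    phi = u_mean / u_max
    M = brentq(lambda m: np.exp(m) / (np.exp(m) - 1.0) - 1.0 / m - phi,
               1e-6, 50.0)
    return (u_max / M) * np.log1p((np.exp(M) - 1.0) * np.asarray(xi))
```

By construction the profile vanishes at the boundary, reaches u_max at xi = 1, and integrates back to u_mean, so only the two velocity measurements are needed to draw the whole profile.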
p. 999–1013
Received: 17 February 2013 / Revised: 4 March 2013 / Accepted: 5 March 2013 / Published: 7 March 2013
Abstract: In the modern world, the fine balance and delicate relationship between human society and the environment has been affected by urbanisation and urban development. Today, various environmental factors give rise to the horizontal dispersion, spread and growth of cities. One of the most important results of this is climatic change, which is directly affected by the urban sprawl of every metropolis. The aim of this study is to identify the relationship between the various horizontally distributed components of the city of Tehran and changes in essential microclimate clusters, by means of the humidex index. When the humidex was calculated for each of the obtained clusters, it was evident that it had increased with time, in parallel with Shannon's entropy, as a consequence of the average temperature and relative humidity of each cluster. At the same time, the results showed that both the temperature and the relative humidity of the study area are related to urban sprawl, urbanisation and development, as characterised by Shannon's entropy, and consequently to the humidex. This concept should therefore be considered in future research aimed at predicting and controlling urban sprawl and microclimate conditions in cities.
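The humidex combines air temperature and humidity into a single perceived-temperature index. A minimal sketch using the standard Environment Canada style formula, with a Magnus-type vapour-pressure approximation; the constants are the commonly published ones, stated here as assumptions rather than taken from the paper:

```python
import math

def humidex(t_air_c, rel_humidity_pct):
    """Humidex from air temperature (deg C) and relative humidity (%).
    Vapour pressure e (hPa) is obtained from a Magnus-type approximation
    of the saturation vapour pressure, scaled by relative humidity."""
    e = (rel_humidity_pct / 100.0) * 6.112 * \
        math.exp(17.67 * t_air_c / (t_air_c + 243.5))
    return t_air_c + 0.5555 * (e - 10.0)
```

For example, 30 °C at 70% relative humidity yields a humidex of roughly 41, while dry air contributes little above the air temperature itself.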
p. 1014–1034
Received: 28 November 2012 / Revised: 5 February 2013 / Accepted: 6 March 2013 / Published: 11 March 2013
Abstract: A solar-aided coal-fired power plant realizes the integration of a fossil fuel (coal or gas) with clean solar energy. In this paper, a conventional 600 MW coal-fired power plant and a 600 MW solar-aided coal-fired power plant are taken as a case study to understand the merits of solar-aided power generation (SAPG) technology. The plants in the case study were analyzed using the principles of the First and Second Laws of Thermodynamics, with the solar irradiation and load ratio considered in the analysis. We conclude that if the solar irradiation is 925 W/m² and the load ratio of the SAPG plant is 100%, the exergy efficiency is 44.54% and the energy efficiency of the plant is 46.35%. It was found that in the SAPG plant the largest exergy loss occurs in the boiler, which accounts for about 76.74% of the total loss. When the load ratio of the unit remains at 100% and the solar irradiation varies from 500 W/m² to 1,100 W/m², the coal savings range from 8.6 g/kWh to 15.8 g/kWh. If the solar irradiation is kept at 925 W/m² while the load ratio of the plant changes from 30% to 100%, the coal savings range from 11.99 g/kWh to 13.75 g/kWh.
p. 1035–1056
Received: 4 February 2013 / Revised: 23 February 2013 / Accepted: 27 February 2013 / Published: 11 March 2013
Abstract: The many-moments model for dense gases and macromolecular fluids is considered here, where the highest-order moment is chosen in accordance with the suggestions of the non-relativistic limit of the corresponding relativistic model. The solutions of the restrictions imposed by the entropy principle and by Galilean relativity have, until now, been obtained in the literature by using Taylor expansions around equilibrium, without proving convergence. Here, an exact solution without expansions is found. The particular case with only 14 moments has already been treated in the literature in a completely different way; it is proven here that this closure is included in the present, more general one.
p. 1057–1068
Received: 22 January 2013 / Revised: 12 March 2013 / Accepted: 12 March 2013 / Published: 18 March 2013
Abstract: Using the new global embedding approach, we investigate the Unruh/Hawking temperature of the 5-dimensional minimal gauged supergravity black hole with two rotation parameters in a general (1+1) spacetime. Our results confirm the views of Banerjee and Majhi and extend this approach to a higher-dimensional situation.
p. 1069–1084
Received: 4 February 2013 / Revised: 25 February 2013 / Accepted: 13 March 2013 / Published: 18 March 2013
Abstract: Multiscale entropy (MSE) was recently developed to evaluate the complexity of time series over different time scales. Although the MSE algorithm has been successfully applied in a number of different fields, it suffers from the problem that the statistical reliability of the sample entropy (SampEn) of a coarse-grained series is reduced as the time scale factor is increased. Therefore, in this paper, the concept of composite multiscale entropy (CMSE) is introduced to overcome this difficulty. Simulation results on both white noise and 1/f noise show that the CMSE provides higher entropy reliability than the MSE approach for large time scale factors. In real-data analysis, both the MSE and the CMSE are applied to extract features from faulty bearing vibration signals. Experimental results demonstrate that the proposed CMSE-based feature extractor provides higher separability than the MSE-based feature extractor.
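The composite idea can be sketched briefly: at scale τ the SampEn is computed for each of the τ offset coarse-grained series and then averaged, instead of using only the single offset-zero series as in plain MSE. The simplified SampEn and parameter choices below are illustrative:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Simplified SampEn: -ln(A/B), with B and A the counts of template
    pairs of lengths m and m+1 within Chebyshev distance r."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def pairs(mm):
        T = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        return (np.count_nonzero(d <= r) - len(T)) / 2.0  # no self-matches
    return -np.log(pairs(m + 1) / pairs(m))

def composite_multiscale_entropy(x, scale, m=2, r=None):
    """CMSE: average SampEn over all `scale` offset coarse-grained series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # tolerance fixed from the original series
    vals = []
    for k in range(scale):         # one coarse-grained series per offset
        n = (len(x) - k) // scale
        cg = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(cg, m, r))
    return float(np.mean(vals))
```

Averaging over all τ coarse-grainings uses every sample at every scale, which is what improves the reliability of the estimate at large scale factors.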
p. 1085–1099
Received: 11 January 2013 / Revised: 18 February 2013 / Accepted: 12 March 2013 / Published: 18 March 2013
Abstract: The opening/closure of the catalytic loop 6 over the active site in apo triosephosphate isomerase (TIM) has been previously shown to be driven by the global motions of the enzyme, specifically the counterclockwise rotation of the subunits. In this work, the effect of the substrate dihydroxyacetone phosphate (DHAP) on TIM dynamics is assessed using two apo and two DHAP-bound molecular dynamics (MD) trajectories (each 60 ns long). Multiple events of catalytic loop opening/closure take place during the 60 ns runs for both apo TIM and its DHAP complex. However, the counterclockwise rotation observed in apo TIM is suppressed, and bending-type motions are linked to loop dynamics in the presence of DHAP. Bound DHAP molecules also reduce the overall mobility of the enzyme and change the pattern of orientational cross-correlations, mostly those within each subunit. The fluctuations of the pseudo-dihedral angles of the loop 6 residues are enhanced towards the C-terminus when DHAP is bound at the active site.
p. 1100–1117
Received: 16 February 2013 / Revised: 14 March 2013 / Accepted: 18 March 2013 / Published: 22 March 2013
Abstract: In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a nonparametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; the methodology is therefore very general, as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insight into the usefulness of the method for modelling, estimating and simulating loss distributions.
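The adaptation step in AIS can be sketched as iterative moment matching of a Gaussian proposal to the importance-weighted samples. This minimal version, which targets a known unnormalized density for illustration, is far simpler than the paper's extended algorithm and is offered only as a sketch of the general idea:

```python
import numpy as np

def adaptive_importance_sampling(log_target, n_iter=20, n_per=2000,
                                 mu0=0.0, sigma0=5.0, seed=0):
    """Gaussian-proposal AIS: at each iteration, draw from N(mu, sigma^2),
    weight by target/proposal, and refit mu, sigma to the weighted sample."""
    rng = np.random.default_rng(seed)
    mu, sigma = mu0, sigma0
    for _ in range(n_iter):
        x = rng.normal(mu, sigma, n_per)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)  # up to a const
        log_w = log_target(x) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                      # self-normalized weights
        mu = np.sum(w * x)                # moment matching
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
    return x, w, mu, sigma

# Illustrative target: an unnormalized standard normal, i.e. the ME density
# under mean and variance constraints.
samples, weights, mu_fit, sigma_fit = adaptive_importance_sampling(
    lambda x: -0.5 * x ** 2)
```

Starting from a deliberately wide proposal, the fitted moments converge to those of the target, and the final weighted sample approximates draws from the ME density.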
p. 1118–1134
Received: 14 December 2012 / Revised: 28 February 2013 / Accepted: 18 March 2013 / Published: 22 March 2013
Abstract: In plasmas, Debye screening structures the possible correlations between particles. We identify a phase space minimum h* in nonequilibrium space plasmas that connects the energy of particles in a Debye sphere to an equivalent wave frequency. In particular, while there is no a priori reason to expect a single value of h* across plasmas, we find a very similar value of h* ≈ (7.5 ± 2.4)×10⁻²² J·s using four independent methods: (1) Ulysses solar wind measurements; (2) space plasmas that typically reside in stationary states out of thermal equilibrium and span a broad range of physical properties; (3) an entropic limit emerging from statistical mechanics; (4) waiting-time distributions of explosive events in space plasmas. Finding a quasi-constant value for the phase space minimum in a variety of different plasmas, similar to the classical Planck constant but 12 orders of magnitude larger, may reveal a new type of quantization in many plasmas and correlated systems more generally.
p. 1135–1151
Received: 17 January 2013 / Revised: 11 March 2013 / Accepted: 19 March 2013 / Published: 22 March 2013
Abstract: Several models have been proposed to explain the dark energy that is causing the expansion of the universe to accelerate. Here, the acceleration predicted by the Holographic Dark Information Energy (HDIE) model is compared to the acceleration that would be produced by a cosmological constant. While identical to a cosmological constant at low redshifts, z < 1, the HDIE model results in smaller Hubble parameter values at higher redshifts, z > 1, reaching a maximum difference of 2.6 ± 0.5% around z ≈ 1.7. The next generation of dark energy measurements, both those scheduled to be made in space (ESA's Euclid and NASA's WFIRST missions) and those to be made on the ground (BigBOSS, LSST and the Dark Energy Survey), should be capable of determining whether such a difference exists. In addition, a computer-simulation thought experiment is used to show that the algorithmic entropy of the universe always increases, because the extra states produced by the accelerating expansion compensate for the loss of entropy from star formation.
Review
p. 789–925
Received: 21 December 2012 / Revised: 11 February 2013 / Accepted: 17 February 2013 / Published: 27 February 2013
Abstract: This paper reviews our recent work on three notorious problems of non-relativistic quantum mechanics: the realist interpretation, the quantum theory of classical properties, and the problem of quantum measurement. Considerable progress has been achieved, based on four distinct new ideas. First, objective properties are associated with states rather than with values of observables. Second, all classical properties are selected properties of certain high-entropy quantum states of macroscopic systems. Third, the registration of a quantum system is strongly disturbed by systems of the same type in the environment. Fourth, detectors must be distinguished from ancillas, and the states of registered systems are partially dissipated and lost in the detectors. The paper has two aims: a clear explanation of all new results, and a coherent and contradiction-free account of the whole of quantum mechanics, including all necessary changes to its current textbook version.