Table of Contents

Entropy, Volume 18, Issue 10 (October 2016)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-37
Open Access Article A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation
Entropy 2016, 18(10), 380; https://doi.org/10.3390/e18100380
Received: 27 August 2016 / Revised: 15 October 2016 / Accepted: 20 October 2016 / Published: 24 October 2016
Cited by 14 | Viewed by 1584 | PDF Full-text (3004 KB) | HTML Full-text | XML Full-text
Abstract
A robust sparse least-mean mixture-norm (LMMN) algorithm is proposed, and its performance is appraised in the context of estimating a broadband multi-path wireless channel. The proposed algorithm is implemented by integrating a correntropy-induced metric (CIM) penalty into the conventional LMMN cost function, and is denoted the CIM-based LMMN (CIM-LMMN) algorithm. The CIM-LMMN algorithm is derived in detail within the kernel framework. Its updating equation provides a zero attractor that draws the non-dominant channel coefficients toward zero, and it also gives a tradeoff between sparsity and estimation misalignment. Moreover, the channel estimation behavior is investigated over a broadband sparse multi-path wireless channel, and the simulation results are compared with the least mean square/fourth (LMS/F), least mean square (LMS), least mean fourth (LMF) and recently developed sparse channel estimation algorithms. The results demonstrate that the CIM-LMMN algorithm outperforms the recently developed sparse LMMN algorithms and the relevant sparse channel estimation algorithms, and that it is robust and superior to these algorithms in terms of both convergence rate and channel estimation misalignment when estimating a sparse channel. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
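The correntropy-induced metric used here as a sparsity penalty can be sketched in a few lines. This is an illustrative reading of the CIM idea, not the authors’ implementation; the Gaussian kernel width and the test vectors are assumptions:

```python
import numpy as np

def cim_penalty(w, sigma=0.05):
    """CIM^2 between a coefficient vector w and the zero vector,
    using a Gaussian kernel of (assumed) width sigma."""
    kappa0 = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)        # kernel value at 0
    kappa = kappa0 * np.exp(-w**2 / (2.0 * sigma**2))    # kernel at each coefficient
    return float(np.mean(kappa0 - kappa))

# A sparse vector is penalized less than a dense one of equal energy.
sparse = np.array([2.0, 0.0, 0.0, 0.0])
dense = np.array([1.0, 1.0, 1.0, 1.0])
```

For a small kernel width each nonzero tap contributes roughly a constant, so the penalty behaves like a scaled l0 norm; this is what lets the update act as a zero attractor for non-dominant channel coefficients.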

Open Access Article A Novel Sequence-Based Feature for the Identification of DNA-Binding Sites in Proteins Using Jensen–Shannon Divergence
Entropy 2016, 18(10), 379; https://doi.org/10.3390/e18100379
Received: 30 July 2016 / Revised: 19 October 2016 / Accepted: 20 October 2016 / Published: 24 October 2016
Cited by 2 | Viewed by 1692 | PDF Full-text (1170 KB) | HTML Full-text | XML Full-text
Abstract
The knowledge of protein-DNA interactions is essential to fully understand the molecular activities of life. Many research groups have developed various tools, either structure- or sequence-based, to predict the DNA-binding residues in proteins. The structure-based methods usually achieve good results, but require knowledge of the 3D structure of the protein; sequence-based methods can be applied to proteins in high throughput, but require good features. In this study, we present a new information-theoretic feature derived from the Jensen–Shannon divergence (JSD) between the amino acid distribution of a site and the background distribution of non-binding sites. Our new feature indicates how much a given site differs from a non-binding site, and is thus informative for detecting binding sites in proteins. We conduct the study with a five-fold cross-validation over 263 proteins, using the Random Forest classifier. We evaluate the usefulness of our new feature by combining it with other popular existing features, such as the position-specific scoring matrix (PSSM), orthogonal binary vector (OBV), and secondary structure (SS). We find that adding our feature significantly boosts the performance of the Random Forest classifier, with a clear increase in sensitivity and Matthews correlation coefficient (MCC). Full article
(This article belongs to the Special Issue Entropy on Biosignals and Intelligent Systems)
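The Jensen–Shannon divergence underlying the proposed feature is easy to state concretely. In the sketch below, the 20-bin "amino acid" distributions are hypothetical placeholders, not data from the paper:

```python
import numpy as np

def jensen_shannon(p, q):
    """JSD (in bits) between two discrete distributions; 0 <= JSD <= 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)                      # mixture distribution
    def kl(a, b):
        mask = a > 0                       # 0 * log(0) := 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# hypothetical 20-letter amino-acid frequencies: a candidate site vs. the
# background distribution over non-binding sites
site = np.full(20, 0.05)
background = np.r_[np.full(10, 0.08), np.full(10, 0.02)]
score = jensen_shannon(site, background)
```

Unlike raw KL divergence, JSD is symmetric and bounded, which makes it convenient as a per-site feature value.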

Open Access Article Second Law Analysis of Nanofluid Flow within a Circular Minichannel Considering Nanoparticle Migration
Entropy 2016, 18(10), 378; https://doi.org/10.3390/e18100378
Received: 19 August 2016 / Revised: 14 October 2016 / Accepted: 18 October 2016 / Published: 21 October 2016
Cited by 1 | Viewed by 1215 | PDF Full-text (7083 KB) | HTML Full-text | XML Full-text
Abstract
In the current research, entropy generation for water–alumina nanofluid flow is studied in a circular minichannel in the laminar regime under constant wall heat flux, in order to evaluate the irreversibilities arising from friction and heat transfer. To this end, simulations are carried out that account for particle migration effects. Due to particle migration, the nanoparticles take on a non-uniform distribution over the pipe cross-section, with the concentration larger in the central region. The concentration non-uniformity increases with increasing mean concentration, particle size, and Reynolds number. The rates of entropy generation are evaluated both locally and globally (integrated). The results show that particle migration changes the thermal and frictional entropy generation rates significantly, particularly at high Reynolds numbers, large concentrations, and for coarser particles. Hence, this phenomenon should be considered in energy-related analyses of nanofluids. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)

Open Access Article Isothermal Oxidation of Aluminized Coatings on High-Entropy Alloys
Entropy 2016, 18(10), 376; https://doi.org/10.3390/e18100376
Received: 31 July 2016 / Revised: 7 October 2016 / Accepted: 14 October 2016 / Published: 20 October 2016
Viewed by 1369 | PDF Full-text (17446 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The isothermal oxidation resistance of the Al0.2Co1.5CrFeNi1.5Ti0.3 high-entropy alloy is analyzed and the microstructural evolution of its oxide layer is studied. The limited aluminum content, about 3.6 at %, leads to non-continuous alumina. The as-cast alloy is insufficient for severe environments, as it is protected only by a chromium oxide layer that grows to 10 μm after 360 h at 1173 K. Thus, aluminized high-entropy alloys (HEAs) are further prepared by the industrial pack cementation process at 1273 K and 1323 K. The aluminized coating reaches 50 μm after 5 h at 1273 K, and its growth is controlled by the diffusion of aluminum. The interdiffusion zone reveals two regions: a Ti-, Co-, Ni-rich area and an Fe-, Cr-rich area. The oxidation resistance of the aluminized HEA improves markedly, withstanding 441 h at 1173 K and 1273 K without any spallation. The alumina at the surface and the stable interface account for the performance of this Al0.2Co1.5CrFeNi1.5Ti0.3 alloy. Full article
(This article belongs to the Special Issue High-Entropy Alloys and High-Entropy-Related Materials)

Open Access Article Non-Asymptotic Confidence Sets for Circular Means
Entropy 2016, 18(10), 375; https://doi.org/10.3390/e18100375
Received: 15 July 2016 / Revised: 10 October 2016 / Accepted: 13 October 2016 / Published: 20 October 2016
Cited by 1 | Viewed by 1196 | PDF Full-text (345 KB) | HTML Full-text | XML Full-text
Abstract
The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available
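The circular mean in this sense, the minimizer of the average squared Euclidean distance, is the direction of the Euclidean mean of the data embedded as unit vectors. A minimal sketch of that computation (not the paper’s confidence-set construction):

```python
import numpy as np

def circular_mean(angles):
    """Angle of the Euclidean mean of the unit vectors e^{i*theta},
    i.e. the minimizer of the average squared Euclidean distance."""
    m = np.exp(1j * np.asarray(angles, float)).mean()
    if abs(m) < 1e-12:
        # resultant near zero: the circular mean is not well defined
        raise ValueError("mean direction undefined (resultant is ~0)")
    return float(np.angle(m))
```

Unlike the arithmetic mean of the raw angles, this handles wrap-around: for the angles 3.1 and -3.1 (both near ±π) the circular mean is ±π, not 0.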

Open Access Article On the Virtual Cell Transmission in Ultra Dense Networks
Entropy 2016, 18(10), 374; https://doi.org/10.3390/e18100374
Received: 25 June 2016 / Revised: 30 September 2016 / Accepted: 14 October 2016 / Published: 20 October 2016
Cited by 1 | Viewed by 1520 | PDF Full-text (4763 KB) | HTML Full-text | XML Full-text
Abstract
Ultra dense networks (UDN) are identified as one of the key enablers for 5G, since they can provide an ultra-high spectral reuse factor by exploiting proximal transmissions. By densifying the network infrastructure equipment, it becomes highly likely that each user will have one or more dedicated serving base station antennas, introducing the user-centric virtual cell paradigm. However, due to the irregular deployment of a large number of base station antennas, the interference environment becomes rather complex, introducing severe interference among different virtual cells. This paper focuses on the downlink transmission scheme in a UDN where a large number of users and base station antennas are uniformly spread over a certain area. An interference graph is first created based on the large-scale fading to describe the potential interference relationships among the virtual cells. Then, base station antennas and users in the virtual cells within the same maximally-connected component are grouped together and merged into one new virtual cell cluster, where users are jointly served via zero-forcing (ZF) beamforming. A multi-virtual-cell minimum mean square error precoding scheme is further proposed to mitigate the inter-cluster interference. Additionally, an interference alignment framework is proposed, based on low-complexity virtual cell merging, to eliminate the strong interference between different virtual cells. Simulation results show that the proposed interference graph-based virtual cell merging approach can attain the average user spectral efficiency of the grouping scheme based on virtual cell overlapping, with a smaller virtual cell size and reduced signal processing complexity. Moreover, the proposed user-centric transmission scheme greatly outperforms the BS-centric transmission scheme (maximum ratio transmission (MRT)) in terms of both average and edge user spectral efficiency, and interference alignment based on low-complexity virtual cell merging achieves much better average user spectral efficiency than ZF and MRT precoding. Full article
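The zero-forcing beamforming used to serve users jointly inside a merged virtual cell cluster can be sketched as follows; the cluster sizes and the Rayleigh channel model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 6   # hypothetical sizes: 3 users, 6 BS antennas in one merged cluster

# aggregate downlink channel of the cluster (Rayleigh fading, assumed)
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

# ZF precoder: right pseudo-inverse of H, so the effective channel H @ W is
# the identity and each user sees no intra-cluster interference
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
E = H @ W   # effective channel after precoding; ~ identity matrix
```

ZF simply inverts the aggregate channel; an MMSE-style precoder, as proposed in the paper for inter-cluster interference, would regularize this inverse rather than cancel interference exactly.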

Open Access Correction Correction: Jacobsen, C.S., et al. Continuous Variable Quantum Key Distribution with a Noisy Laser. Entropy 2015, 17, 4654–4663
Entropy 2016, 18(10), 373; https://doi.org/10.3390/e18100373
Received: 8 October 2016 / Accepted: 14 October 2016 / Published: 20 October 2016
Viewed by 821 | PDF Full-text (1337 KB) | HTML Full-text | XML Full-text

Open Access Article Point Information Gain and Multidimensional Data Analysis
Entropy 2016, 18(10), 372; https://doi.org/10.3390/e18100372
Received: 1 August 2016 / Revised: 17 September 2016 / Accepted: 14 October 2016 / Published: 19 October 2016
Cited by 5 | Viewed by 1519 | PDF Full-text (4329 KB) | HTML Full-text | XML Full-text
Abstract
We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), to the case of the Rényi entropy, and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra on real data, using several images as examples, and discuss further possible uses in other fields of data processing. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
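One way to read the point information gain, sketched here under the assumption that it measures the Rényi-entropy change when a single occurrence of a value is discarded from a histogram (the bin counts below are hypothetical):

```python
import numpy as np

def renyi_entropy(counts, alpha=2.0):
    """Renyi entropy (bits) of the distribution given by histogram counts."""
    p = np.asarray(counts, float)
    p = p[p > 0]
    p = p / p.sum()
    if alpha == 1.0:                       # Shannon limit
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def point_information_gain(counts, value, alpha=2.0):
    """Entropy change when one occurrence of bin `value` is discarded."""
    reduced = np.asarray(counts, float).copy()
    reduced[value] -= 1
    return renyi_entropy(reduced, alpha) - renyi_entropy(counts, alpha)
```

Under this reading, removing an occurrence of a rare value lowers the entropy while removing an occurrence of a common value raises it, so the sign of PIG distinguishes rare from typical points.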

Open Access Article Study on the Stability and Entropy Complexity of an Energy-Saving and Emission-Reduction Model with Two Delays
Entropy 2016, 18(10), 371; https://doi.org/10.3390/e18100371
Received: 2 August 2016 / Revised: 13 October 2016 / Accepted: 13 October 2016 / Published: 19 October 2016
Cited by 2 | Viewed by 1071 | PDF Full-text (7021 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we build an energy-saving and emission-reduction model with two delays. In this model, it is assumed that the interaction between energy savings and emission reduction, and that between carbon emissions and economic growth, are delayed. We examine the local stability and the existence of a Hopf bifurcation at the equilibrium point of the system. Employing system complexity theory, we analyze the impact of the delays and of feedback control on the stability and entropy of the system from two aspects: a single delay and double delays. In the numerical simulation section, we test the theoretical analysis by means of bifurcation diagrams, largest-Lyapunov-exponent diagrams, attractors, time-domain plots, Poincaré section plots, power spectra, entropy diagrams, 3D surface charts and 4D graphs; the simulation results demonstrate that inappropriate changes of the delays and the feedback control result in instability and fluctuation of carbon emissions. Finally, bifurcation control is achieved using the method of variable feedback control: the greater the value of the control parameter, the better the effect of the bifurcation control. The results will inform the development of energy-saving and emission-reduction policies. Full article
(This article belongs to the Section Complexity)

Open Access Feature Paper Article From Tools in Symplectic and Poisson Geometry to J.-M. Souriau’s Theories of Statistical Mechanics and Thermodynamics
Entropy 2016, 18(10), 370; https://doi.org/10.3390/e18100370
Received: 28 July 2016 / Revised: 30 September 2016 / Accepted: 5 October 2016 / Published: 19 October 2016
Cited by 4 | Viewed by 1363 | PDF Full-text (450 KB) | HTML Full-text | XML Full-text
Abstract
I present in this paper some tools in symplectic and Poisson geometry in view of their applications in geometric mechanics and mathematical physics. After a short discussion of the Lagrangian and Hamiltonian formalisms, including the use of symmetry groups, and a presentation of Tulczyjew’s isomorphisms (which explain some aspects of the relations between these formalisms), I explain the concept of the manifold of motions of a mechanical system and its use, due to J.-M. Souriau, in statistical mechanics and thermodynamics. The generalization of the notion of thermodynamic equilibrium, in which the one-dimensional group of time translations is replaced by a multi-dimensional, possibly non-commutative Lie group, is fully discussed, and examples of applications in physics are given. Full article
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics) Printed Edition available
Open Access Article Chemical Reactions Using a Non-Equilibrium Wigner Function Approach
Entropy 2016, 18(10), 369; https://doi.org/10.3390/e18100369
Received: 13 July 2016 / Revised: 14 September 2016 / Accepted: 13 October 2016 / Published: 19 October 2016
Viewed by 1045 | PDF Full-text (415 KB) | HTML Full-text | XML Full-text
Abstract
A three-dimensional model of binary chemical reactions is studied. We consider an ab initio quantum two-particle system subjected to an attractive interaction potential and to a heat bath at thermal equilibrium at absolute temperature T > 0. Under the sole action of the attraction potential, the two particles can either be bound or unbound to each other. While at T = 0 there is no transition between both states, such a transition is possible when T > 0 (due to the heat bath) and plays a key role as k_B T approaches the magnitude of the attractive potential. We focus on a quantum regime, typical of chemical reactions, such that: (a) the thermal wavelength is shorter than the range of the attractive potential (lower limit on T) and (b) (3/2) k_B T does not exceed the magnitude of the attractive potential (upper limit on T). In this regime, we extend several methods previously applied to analyze the time duration of DNA thermal denaturation. The two-particle system is then described by a non-equilibrium Wigner function. Under Assumptions (a) and (b), and for sufficiently long times, defined by a characteristic time scale D that is subsequently estimated, the general dissipationless non-equilibrium equation for the Wigner function is approximated by a Smoluchowski-like equation displaying dissipation and quantum effects. A comparison with the standard chemical kinetic equations is made. The time τ required for the two particles to transition from the bound state to unbound configurations is studied by means of the mean first passage time formalism. An approximate formula for τ, in terms of D and exhibiting the Arrhenius exponential factor, is obtained. Recombination processes are also briefly studied within our framework and compared with previous well-known methods. Full article
Open Access Article A Hydrodynamic Model for Silicon Nanowires Based on the Maximum Entropy Principle
Entropy 2016, 18(10), 368; https://doi.org/10.3390/e18100368
Received: 14 July 2016 / Revised: 6 September 2016 / Accepted: 30 September 2016 / Published: 19 October 2016
Cited by 6 | Viewed by 971 | PDF Full-text (566 KB) | HTML Full-text | XML Full-text
Abstract
Silicon nanowires (SiNW) are quasi-one-dimensional structures in which the electrons are spatially confined in two directions, and they are free to move along the axis of the wire. The spatial confinement is governed by the Schrödinger–Poisson system, which must be coupled to the transport in the free motion direction. For devices with the characteristic length of a few tens of nanometers, the transport of the electrons along the axis of the wire can be considered semiclassical, and it can be dealt with by the multi-sub-band Boltzmann transport equations (MBTE). By taking the moments of the MBTE, a hydrodynamic model has been formulated, where explicit closure relations for the fluxes and production terms (i.e., the moments on the collisional operator) are obtained by means of the maximum entropy principle of extended thermodynamics, including the scattering of electrons with phonons, impurities and surface roughness scattering. Numerical results are shown for a SiNW transistor. Full article
(This article belongs to the Special Issue Maximum Entropy Principle and Semiconductors)

Open Access Article Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory
Entropy 2016, 18(10), 367; https://doi.org/10.3390/e18100367
Received: 27 July 2016 / Revised: 4 October 2016 / Accepted: 12 October 2016 / Published: 18 October 2016
Cited by 2 | Viewed by 1321 | PDF Full-text (2100 KB) | HTML Full-text | XML Full-text
Abstract
Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Fully understanding both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of these systems. However, drawing on other complex adaptive systems sciences, information theory provides a model-free methodology that removes many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the dynamics of that structure. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment. Full article
(This article belongs to the Special Issue Transfer Entropy II)
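A minimal plug-in estimate of transfer entropy (history length 1, binary series; a sketch, not the authors’ estimator) illustrates how directed influence between two nodes can be read off from data:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy X -> Y in bits, with history length 1:
    sum over p(y_{t+1}, y_t, x_t) log [ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]."""
    trip = list(zip(y[1:], y[:-1], x[:-1]))          # (y_{t+1}, y_t, x_t)
    n = len(trip)
    c_tyx = Counter(trip)
    c_yx = Counter((yp, xp) for _, yp, xp in trip)
    c_ty = Counter((t, yp) for t, yp, _ in trip)
    c_y = Counter(yp for _, yp, _ in trip)
    te = 0.0
    for (t, yp, xp), c in c_tyx.items():
        te += (c / n) * np.log2(c * c_y[yp] / (c_ty[(t, yp)] * c_yx[(yp, xp)]))
    return te

# toy coupled pair: y copies x with one step of lag
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]
te_xy = transfer_entropy(x, y)   # ~1 bit: x fully determines the next y
te_yx = transfer_entropy(y, x)   # ~0 bits: no influence in reverse
```

The asymmetry te_xy >> te_yx is exactly what lets the method recover directed network structure from node observations alone.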

Open Access Article Intelligent Security IT System for Detecting Intruders Based on Received Signal Strength Indicators
Entropy 2016, 18(10), 366; https://doi.org/10.3390/e18100366
Received: 10 September 2016 / Revised: 10 October 2016 / Accepted: 10 October 2016 / Published: 16 October 2016
Cited by 2 | Viewed by 1384 | PDF Full-text (4742 KB) | HTML Full-text | XML Full-text
Abstract
Given that entropy-based IT technology has been applied in homes, office buildings and elsewhere for IT security systems, diverse kinds of intelligent services are currently provided, and IT security systems in particular have become more robust and varied. However, access control systems still depend on tags held by building entrants. Since tags can be obtained by intruders, an approach that counters this disadvantage of tags is required. For example, it is possible to track the movement of tags in intelligent buildings in order to detect intruders, so that each tag holder can be judged by analyzing the movements of their tag. This paper proposes a security approach based on the received signal strength indicators (RSSIs) of beacon-based tags to detect intruders. The normal RSSI patterns of moving entrants are collected and analyzed, and intruders can be detected when the measured RSSIs deviate from these normal patterns. In the experiments, one normal and one abnormal scenario are defined for collecting the RSSIs of a Bluetooth-based beacon in order to validate the proposed method. When the RSSIs of both scenarios are compared to pre-collected RSSIs, those of the abnormal scenario deviate about 61% more than those of the normal scenario. Therefore, intruders in buildings can be detected by considering RSSI differences. Full article

Open Access Article Boltzmann Sampling by Degenerate Optical Parametric Oscillator Network for Structure-Based Virtual Screening
Entropy 2016, 18(10), 365; https://doi.org/10.3390/e18100365
Received: 3 September 2016 / Revised: 10 October 2016 / Accepted: 11 October 2016 / Published: 13 October 2016
Cited by 4 | Viewed by 1989 | PDF Full-text (984 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
A structure-based lead optimization procedure is an essential step in finding appropriate ligand molecules that bind to a target protein structure, in order to identify drug candidates. This procedure takes a known structure of a protein-ligand complex as input, and compounds structurally similar to the query ligand are designed in consideration of all possible combinations of atomic species. This task is, however, computationally hard, since such combinatorial optimization problems belong to the nondeterministic polynomial-time hard (NP-hard) class. In this paper, we propose structure-based lead generation and optimization procedures based on a degenerate optical parametric oscillator (DOPO) network. Results of numerical simulation demonstrate that the DOPO network efficiently identifies a set of appropriate ligand molecules according to the Boltzmann sampling law. Full article
(This article belongs to the collection Quantum Information)

Open Access Article Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
Entropy 2016, 18(10), 364; https://doi.org/10.3390/e18100364
Received: 9 September 2016 / Revised: 30 September 2016 / Accepted: 9 October 2016 / Published: 12 October 2016
Cited by 6 | Viewed by 1729 | PDF Full-text (833 KB) | HTML Full-text | XML Full-text
Abstract
One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. We thus conclude that the entropy rate of human language is positive, with estimates approximately 20% smaller than those obtained without extrapolation. Although the entropy rate estimates depend on the kind of script, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis. Full article
(This article belongs to the Section Complexity)
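The extrapolation idea can be illustrated with a toy fit. The simple Hilberg-type ansatz f(n) = h + A·n^(β−1) used below is a stand-in for the paper’s stretched-exponential function, and all numbers are synthetic:

```python
import numpy as np

# synthetic per-character entropy-rate estimates vs. data length (assumed values)
n = np.logspace(3, 9, 25)
h_true, A_true, beta_true = 1.2, 8.0, 0.7
est = h_true + A_true * n ** (beta_true - 1.0)

# fit f(n) = h + A * n**(beta - 1): grid-search over beta, with linear least
# squares for (h, A) at each candidate exponent
best = None
for b in np.arange(0.40, 0.95, 0.01):
    X = np.column_stack([np.ones_like(n), n ** (b - 1.0)])
    coef, *_ = np.linalg.lstsq(X, est, rcond=None)
    err = float(np.sum((X @ coef - est) ** 2))
    if best is None or err < best[0]:
        best = (err, float(coef[0]), float(b))
_, h_inf, beta_hat = best   # extrapolated entropy rate and exponent
```

The fitted intercept h_inf plays the role of the entropy rate at infinite data length, which is why the extrapolated estimates come out below the raw finite-corpus values.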

Open Access Article The Shell Collapsar—A Possible Alternative to Black Holes
Entropy 2016, 18(10), 363; https://doi.org/10.3390/e18100363
Received: 7 July 2016 / Revised: 1 October 2016 / Accepted: 2 October 2016 / Published: 12 October 2016
Viewed by 1069 | PDF Full-text (330 KB) | HTML Full-text | XML Full-text
Abstract
This article argues that a consistent description is possible for gravitationally collapsed bodies, in which collapse stops before the object reaches its gravitational radius, the density reaching a maximum close to the surface and then decreasing towards the centre. The way towards such a description was indicated in the classic Oppenheimer-Snyder (OS) 1939 analysis of a dust star. The title of that article implied support for a black-hole solution, but the present article shows that the final OS density distribution accords with gravastar and other shell models. The parallel Oppenheimer-Volkoff (OV) study of 1939 used the equation of state for a neutron gas, but could consider only stationary solutions of the field equations. Recently we found that the OV equation of state permits solutions with minimal rather than maximal central density, and here we find a similar topology for the OS dust collapsar; a uniform dust-ball which starts with large radius, and correspondingly small density, and collapses to a shell at the gravitational radius with density decreasing monotonically towards the centre. Though no longer considered central in black-hole theory, the OS dust model gave the first exact, time-dependent solution of the field equations. Regarded as a limiting case of OV, it indicates the possibility of neutron stars of unlimited mass with a similar shell topology. Progress in observational astronomy will distinguish this class of collapsars from black holes. Full article

Open AccessArticle Measures of Difference and Significance in the Era of Computer Simulations, Meta-Analysis, and Big Data
Entropy 2016, 18(10), 361; https://doi.org/10.3390/e18100361
Received: 28 May 2016 / Revised: 18 September 2016 / Accepted: 30 September 2016 / Published: 9 October 2016
Cited by 5 | Viewed by 1259 | PDF Full-text (840 KB) | HTML Full-text | XML Full-text
Abstract
In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used not only to estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with the square root of the sample size, which provides a stimulus to probe large samples. In simulation models, the situation is entirely different. When probability distribution functions for model features are specified, the probability distribution function of the model output can be approached using numerical techniques, such as bootstrapping or Monte Carlo sampling. Given the computational power of most PCs today, the sample size can be increased almost without bound. The result is that standard errors of parameters are vanishingly small, and that almost all significance tests will lead to a rejected null hypothesis. Clearly, another approach to statistical significance is needed. This paper analyzes the situation and connects the discussion to other domains in which the null hypothesis significance test (NHST) paradigm is challenged. In particular, the notions of effect size and Cohen’s d provide promising alternatives for the establishment of a new indicator of statistical significance. This indicator attempts to cover significance (precision) and effect size (relevance) in one measure. Although in the end more fundamental changes are called for, our approach has the advantage of requiring only a minimal change to the practice of statistics. The analysis is relevant not only for artificial samples, but also for the huge present-day samples associated with the availability of big data. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
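The effect-size notion this abstract builds on can be illustrated with Cohen’s d, the standardized difference between two sample means. This is a minimal sketch of the standard definition only, not the combined significance/relevance indicator the paper proposes:

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd
```

Unlike a p-value, d does not shrink the rejection threshold as the sample grows: values near 0.2, 0.5 and 0.8 are conventionally read as small, medium and large effects regardless of sample size.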

Open AccessArticle Metric for Estimating Congruity between Quantum Images
Entropy 2016, 18(10), 360; https://doi.org/10.3390/e18100360
Received: 20 July 2016 / Revised: 29 September 2016 / Accepted: 29 September 2016 / Published: 9 October 2016
Cited by 6 | Viewed by 1399 | PDF Full-text (3548 KB) | HTML Full-text | XML Full-text
Abstract
An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding differences between classical and quantum information processing make the widely accepted peak signal-to-noise ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it impractical as an image quality metric. Unlike these image quality measures, the proposed QIFM metric is calibrated as a pixel-difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with built-in non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric correlates better with the digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other QIP applications. Full article
(This article belongs to the collection Quantum Information)

Open AccessArticle Tolerance Redistributing of the Reassembly Dimensional Chain on Measure of Uncertainty
Entropy 2016, 18(10), 348; https://doi.org/10.3390/e18100348
Received: 7 July 2016 / Revised: 24 August 2016 / Accepted: 19 September 2016 / Published: 9 October 2016
Cited by 7 | Viewed by 1025 | PDF Full-text (1090 KB) | HTML Full-text | XML Full-text
Abstract
How to use the limited precision of remanufactured parts to assemble higher-quality remanufactured products is a challenge for remanufacturing engineering under uncertainty. On the basis of analyzing the uncertainty of remanufactured parts, this paper takes tolerance redistribution of the reassembly (remanufactured assembly) dimensional chain as the research object. An entropy model to measure the uncertainty of the assembly dimensional chain is built, and we quantify the degree of the uncertainty gap between reassembly and assembly. Then, in order to ensure that the uncertainty of reassembly is not lower than that of assembly, a tolerance redistribution optimization model of the reassembly dimensional chain is proposed, based on the tolerance grading allocation method. Finally, this paper takes the remanufactured gearbox assembly dimension chain as an example. The redistribution optimization model saves 19.11% of the cost while maintaining the assembly precision of remanufactured products. It provides new technical and theoretical support to expand the utilization rate of remanufactured parts and improve reassembly precision. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessFeature PaperArticle Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties
Entropy 2016, 18(10), 359; https://doi.org/10.3390/e18100359
Received: 19 August 2016 / Revised: 18 September 2016 / Accepted: 29 September 2016 / Published: 8 October 2016
Cited by 11 | Viewed by 1672 | PDF Full-text (2043 KB) | HTML Full-text | XML Full-text
Abstract
We study the static and dynamical properties of isolated many-body quantum systems and compare them with the results for full random matrices. In doing so, we link concepts from quantum information theory with those from quantum chaos. In particular, we relate the von Neumann entanglement entropy with the Shannon information entropy and discuss their relevance for the analysis of the degree of complexity of the eigenstates, the behavior of the system at different time scales and the conditions for thermalization. A main advantage of full random matrices is that they enable the derivation of analytical expressions that agree extremely well with the numerics and provide bounds for realistic many-body quantum systems. Full article
(This article belongs to the Special Issue Quantum Information 2016)

Open AccessArticle Ordering Quantiles through Confidence Statements
Entropy 2016, 18(10), 357; https://doi.org/10.3390/e18100357
Received: 8 June 2016 / Revised: 7 September 2016 / Accepted: 26 September 2016 / Published: 8 October 2016
Viewed by 1109 | PDF Full-text (1230 KB) | HTML Full-text | XML Full-text
Abstract
Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a smaller number of genes, so that other sophisticated experiments are applied only to the genes identified as important. A nonparametric method called Quor is designed to provide a confidence value for the order of arbitrary quantiles of different populations using independent samples. This confidence may provide insights about possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets are performed, and they show advantages and disadvantages of the method. Quor makes no assumptions other than the independence of samples, so it may be a better option when the assumptions of other methods cannot be asserted. The software is publicly available on CRAN. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)

Open AccessArticle Entropy Cross-Efficiency Model for Decision Making Units with Interval Data
Entropy 2016, 18(10), 358; https://doi.org/10.3390/e18100358
Received: 10 August 2016 / Revised: 27 September 2016 / Accepted: 28 September 2016 / Published: 1 October 2016
Cited by 2 | Viewed by 1280 | PDF Full-text (425 KB) | HTML Full-text | XML Full-text
Abstract
The cross-efficiency method, as a Data Envelopment Analysis (DEA) extension, calculates the cross efficiency of each decision making unit (DMU) using the weights of all decision making units (DMUs). The major advantage of the cross-efficiency method is that it can provide a complete ranking for all DMUs. In addition, the cross-efficiency method can eliminate unrealistic weight results. However, the existing cross-efficiency methods only evaluate the relative efficiencies of a set of DMUs with exact values of inputs and outputs. If the input or output data of DMUs are imprecise, such as interval data, the existing methods fail to assess the efficiencies of these DMUs. To address this issue, we propose the introduction of Shannon entropy into the cross-efficiency method. In the proposed model, intervals of all cross-efficiency values are first obtained by the interval cross-efficiency method. Then, a distance entropy model is proposed to obtain the weights of the interval efficiencies. Finally, all alternatives are ranked by their relative Euclidean distance from the positive ideal solution. Full article
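The Shannon-entropy weighting step described above can be sketched for crisp (non-interval) data. The paper's distance entropy model for interval efficiencies is more involved; the following only illustrates the underlying idea of weighting criteria by their informational dispersion:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy weights for a decision matrix
    (rows: alternatives/DMUs, columns: criteria).
    A criterion whose values vary more across alternatives
    carries more information and so receives a larger weight."""
    m = len(matrix)
    n = len(matrix[0])
    k = 1.0 / math.log(m)  # normalizes entropy to [0, 1]
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        diversification.append(1 - e)  # degree of diversification of criterion j
    s = sum(diversification)
    return [d / s for d in diversification]
```

A criterion on which all alternatives score identically gets (near-)zero weight; the ranking step would then combine these weights with each alternative's Euclidean distance from a positive ideal solution.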

Open AccessArticle Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks
Entropy 2016, 18(10), 350; https://doi.org/10.3390/e18100350
Received: 2 August 2016 / Revised: 15 September 2016 / Accepted: 19 September 2016 / Published: 1 October 2016
Cited by 9 | Viewed by 1962 | PDF Full-text (2218 KB) | HTML Full-text | XML Full-text
Abstract
Distributed denial-of-service (DDoS) attacks are one of the major threats to web servers. The rapid increase of DDoS attacks on the Internet has clearly exposed the limitations of current intrusion detection systems and intrusion prevention systems (IDS/IPS), mostly caused by application-layer DDoS attacks. Within this context, the objective of this paper is to detect a DDoS attack using a multilayer perceptron (MLP) classification algorithm with a genetic algorithm (GA) as the learning algorithm. In this work, we analyzed the standard EPA-HTTP (Environmental Protection Agency hypertext transfer protocol) dataset and selected the parameters used as input to the classifier model for differentiating an attack from a normal profile. The selected parameters are the HTTP GET request count, entropy, and variance for every connection. The proposed model provides a better accuracy of 98.31%, sensitivity of 0.9962, and specificity of 0.0561 when compared to other traditional classification models. Full article
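The per-connection features named above (GET request count, entropy, variance) can be sketched as follows. The record field names (`method`, `uri`) and the choice of computing entropy and variance over per-resource request counts are illustrative assumptions; the paper's exact preprocessing of the EPA-HTTP dataset may differ:

```python
import math
from collections import Counter

def connection_features(requests):
    """Per-connection features for separating attack from normal traffic:
    GET request count, Shannon entropy of requested resources (bits),
    and variance of per-resource request counts.
    Each request is a dict with hypothetical keys 'method' and 'uri'."""
    get_requests = [r for r in requests if r["method"] == "GET"]
    if not get_requests:
        return 0, 0.0, 0.0
    count = len(get_requests)
    freq = Counter(r["uri"] for r in get_requests)
    total = sum(freq.values())
    probs = [c / total for c in freq.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    mean = total / len(freq)
    variance = sum((c - mean) ** 2 for c in freq.values()) / len(freq)
    return count, entropy, variance
```

A flood that hammers a single URI yields low entropy and high count, whereas normal browsing spreads requests over many resources; the resulting feature vector would then be fed to the MLP classifier.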

Open AccessArticle Analysis of Entropy Generation in Mixed Convective Peristaltic Flow of Nanofluid
Entropy 2016, 18(10), 355; https://doi.org/10.3390/e18100355
Received: 28 June 2016 / Revised: 18 September 2016 / Accepted: 23 September 2016 / Published: 30 September 2016
Cited by 7 | Viewed by 1521 | PDF Full-text (4063 KB) | HTML Full-text | XML Full-text
Abstract
This article examines entropy generation in the peristaltic transport of a nanofluid in a channel with flexible walls. Single-walled carbon nanotubes (SWCNT) and multi-walled carbon nanotubes (MWCNT) with water as the base fluid are utilized in this study. Mixed convection is also considered in the present analysis. The viscous dissipation effect is present. Moreover, slip conditions are imposed for both velocity and temperature at the boundaries. The analysis is carried out under the long-wavelength and small Reynolds number assumptions. A two-phase model for nanofluids is employed. The nonlinear system of equations is solved for small Grashof number. Velocity and temperature are examined for different parameters via graphs. Streamlines are also constructed to analyze the trapping. Results show that the axial velocity and temperature of the nanofluid decrease when the nanoparticle volume fraction is enhanced. Moreover, the wall elastance parameter increases the axial velocity and temperature, whereas a decrease in both quantities is noticed for the damping coefficient. A decrease in entropy generation and the Bejan number is observed for increasing values of the nanoparticle volume fraction. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)

Open AccessArticle Exergetic Analysis of a Novel Solar Cooling System for Combined Cycle Power Plants
Entropy 2016, 18(10), 356; https://doi.org/10.3390/e18100356
Received: 7 July 2016 / Revised: 16 September 2016 / Accepted: 24 September 2016 / Published: 29 September 2016
Cited by 4 | Viewed by 1635 | PDF Full-text (11845 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a detailed exergetic analysis of a novel high-temperature Solar Assisted Combined Cycle (SACC) power plant. The system includes a solar field consisting of innovative high-temperature flat-plate evacuated solar thermal collectors, a double-stage LiBr-H2O absorption chiller, pumps, heat exchangers, storage tanks, mixers, diverters, controllers and a simple single-pressure Combined Cycle (CC) power plant. Here, a high-temperature solar cooling system is coupled with a conventional combined cycle to pre-cool gas turbine inlet air, enhancing system efficiency and electrical capacity. The system is analyzed from an exergetic point of view, on the basis of an energy-economic model presented in a recent work, whose main results show that the SACC exhibits higher electrical production and efficiency with respect to the conventional CC. The system performance is evaluated by a dynamic simulation, in which detailed simulation models are implemented for all the components included in the system. In addition, for all the components and for the system as a whole, energy and exergy balances are implemented in order to calculate the magnitude of the irreversibilities within the system. Exergy analysis is used to assess exergy destructions and exergetic efficiencies; these parameters quantify the magnitude of the irreversibilities in the system and identify their sources. Exergetic efficiencies and exergy destructions are dynamically calculated for the one-year operation of the system, and the exergetic results are also integrated on weekly and yearly bases to evaluate the corresponding irreversibilities. The results showed that the components of the Joule cycle (combustor, turbine and compressor) are the major sources of irreversibilities. The overall exergetic efficiency of the system was around 48%. Average weekly solar collector exergetic efficiency ranged from 6.5% to 14.5%, significantly increasing during the summer season. Conversely, the absorption chiller exergy efficiency varies from 7.7% to 20.2%, being higher during the winter season. Combustor exergy efficiency is stably close to 68%, whereas the exergy efficiencies of the remaining components are higher than 80%. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)

Open AccessArticle A Langevin Canonical Approach to the Study of Quantum Stochastic Resonance in Chiral Molecules
Entropy 2016, 18(10), 354; https://doi.org/10.3390/e18100354
Received: 22 July 2016 / Revised: 21 September 2016 / Accepted: 26 September 2016 / Published: 29 September 2016
Viewed by 1053 | PDF Full-text (378 KB) | HTML Full-text | XML Full-text
Abstract
A Langevin canonical framework for a chiral two-level system coupled to a bath of harmonic oscillators is used, within a coupling scheme different from the well-known spin-boson model, to study quantum stochastic resonance in chiral molecules. This process refers to the amplification of the response to an external periodic signal at a certain value of the noise strength, a cooperative effect of friction, noise, and periodic driving occurring in a bistable system. Furthermore, in this stochastic dynamics, within the Markovian regime and with Ohmic friction, the competition between tunneling and the parity-violating energy difference present in this type of chiral system plays a fundamental role. This mechanism is finally proposed as a way to observe the so-far elusive parity-violating energy difference in chiral molecules. Full article
(This article belongs to the Section Statistical Physics)

Open AccessReview Generalized Thermodynamic Optimization for Iron and Steel Production Processes: Theoretical Exploration and Application Cases
Entropy 2016, 18(10), 353; https://doi.org/10.3390/e18100353
Received: 29 June 2016 / Revised: 7 September 2016 / Accepted: 21 September 2016 / Published: 29 September 2016
Cited by 55 | Viewed by 2852 | PDF Full-text (13147 KB) | HTML Full-text | XML Full-text
Abstract
Combining modern thermodynamics theory branches, including finite-time thermodynamics or entropy generation minimization, constructal theory and entransy theory, with metallurgical process engineering, this paper provides a new exploration of generalized thermodynamic optimization theory for iron and steel production processes. The theoretical core is to thermodynamically optimize the performance of elemental packages, working procedure modules, functional subsystems, and the whole process of iron and steel production under real finite-resource and/or finite-size constraints with various irreversibilities, toward saving energy, decreasing consumption, reducing emissions and increasing yield, and to achieve comprehensive coordination among the material flow, energy flow and environment of the hierarchical process systems. A series of application cases of the theory is reviewed. The theory provides a new angle of view on iron and steel production processes from thermodynamics, and can also provide guidelines for other process industries. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)

Open AccessArticle Propositions for Confidence Interval in Systematic Sampling on Real Line
Entropy 2016, 18(10), 352; https://doi.org/10.3390/e18100352
Received: 14 June 2016 / Revised: 14 September 2016 / Accepted: 21 September 2016 / Published: 28 September 2016
Cited by 1 | Viewed by 1174 | PDF Full-text (346 KB) | HTML Full-text | XML Full-text
Abstract
Systematic sampling is used as a method to obtain quantitative results from tissues and radiological images. Systematic sampling on the real line (R) is a very attractive method widely used by practitioners in biomedical imaging. For systematic sampling on R, the measurement function (MF) is obtained by slicing the three-dimensional object systematically at equal distances. The currently-used covariogram model in variance approximation is tested for different measurement functions in a class to assess its performance in estimating the variance of systematic sampling on R. An exact calculation method is proposed for the constant λ(q,N) of the confidence interval in systematic sampling, and the exact value of this constant is examined for the different measurement functions as well. As a result, the simulation shows that the proposed MF should be used to check the performance of the variance approximation and the constant λ(q,N). Synthetic data can support the results of real data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open AccessArticle Influence of the Aqueous Environment on Protein Structure—A Plausible Hypothesis Concerning the Mechanism of Amyloidogenesis
Entropy 2016, 18(10), 351; https://doi.org/10.3390/e18100351
Received: 28 July 2016 / Revised: 13 September 2016 / Accepted: 19 September 2016 / Published: 28 September 2016
Cited by 8 | Viewed by 1581 | PDF Full-text (8854 KB) | HTML Full-text | XML Full-text
Abstract
The aqueous environment is a pervasive factor which, in many ways, determines the protein folding process and consequently the activity of proteins. Proteins are unable to perform their function unless immersed in water (membrane proteins are excluded from this statement). Tertiary conformational stabilization depends on the presence of internal force fields (non-bonding interactions between atoms), as well as on an external force field generated by water. The hitherto unknown structuring of water as the aqueous environment may be elucidated by analyzing its effects on protein structure and function. Our study is based on the fuzzy oil drop model: a mechanism which describes the formation of a hydrophobic core and attempts to explain the emergence of amyloid-like fibrils. A set of proteins which vary with respect to their fuzzy oil drop status (including titin, transthyretin and a prion protein) has been selected for in-depth analysis to suggest a plausible mechanism of amyloidogenesis. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
