Table of Contents

Entropy, Volume 20, Issue 5 (May 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The so-called quantum trajectories arising from the Bohm theory should not be called "surreal" as [...]
Displaying articles 1-84

Editorial

Jump to: Research, Review, Other

Open Access Editorial Editorial: Entropy in Landscape Ecology
Entropy 2018, 20(5), 314; https://doi.org/10.3390/e20050314
Received: 19 April 2018 / Revised: 20 April 2018 / Accepted: 21 April 2018 / Published: 25 April 2018
PDF Full-text (147 KB) | HTML Full-text | XML Full-text
Abstract
Entropy and the second law of thermodynamics are the central organizing principles of nature, but the ideas and implications of the second law are poorly developed in landscape ecology. The purpose of this Special Issue "Entropy in Landscape Ecology" in Entropy is to bring together current research on applications of thermodynamics in landscape ecology, to consolidate current knowledge, and to identify key areas for future research. The Special Issue contains six articles, which cover a broad range of topics including relationships between entropy and evolution, connections between fractal geometry and entropy, new approaches to calculate the configurational entropy of landscapes, example analyses of computing the entropy of landscapes, and the use of entropy in the context of optimal landscape planning. Collectively, these papers provide a broad range of contributions to the nascent field of ecological thermodynamics. Formalizing the connections between entropy and ecology is at a very early stage; this Special Issue contains papers that address several centrally important ideas and provides seminal work that will serve as a foundation for the future development of ecological and evolutionary thermodynamics. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
Open Access Editorial Entropy in Nanofluids
Entropy 2018, 20(5), 339; https://doi.org/10.3390/e20050339
Received: 26 April 2018 / Revised: 1 May 2018 / Accepted: 1 May 2018 / Published: 3 May 2018
PDF Full-text (342 KB) | HTML Full-text | XML Full-text
(This article belongs to the Section Thermodynamics)

Open Access Editorial Molecular Dynamics vs. Stochastic Processes: Are We Heading Anywhere?
Entropy 2018, 20(5), 348; https://doi.org/10.3390/e20050348
Received: 2 May 2018 / Revised: 2 May 2018 / Accepted: 2 May 2018 / Published: 7 May 2018
PDF Full-text (188 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

Research

Jump to: Editorial, Review, Other

Open Access Article Studies on the Exergy Transfer Law for the Irreversible Process in the Waxy Crude Oil Pipeline Transportation
Entropy 2018, 20(5), 309; https://doi.org/10.3390/e20050309
Received: 7 April 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 24 April 2018
PDF Full-text (23524 KB) | HTML Full-text | XML Full-text
Abstract
With the increasing demand for oil products in China, the energy consumption of pipeline operation will continue to rise greatly, as will the cost of oil transportation. In practical engineering, saving energy, reducing energy consumption, and adapting to the international oil situation are development trends that pose difficult problems. Based on the basic principles of non-equilibrium thermodynamics, this paper derives the field equilibrium equations of the non-equilibrium thermodynamic process for pipeline transportation. Seeking the bilinear form of "force" and "flow" in the non-equilibrium thermodynamic entropy generation rate, the exergy balance equation of the oil pipeline and the dynamic equation of irreversible exergy transfer were established. The exergy balance equation was applied to the energy balance evaluation system, making that system more complete. The exergy flow transfer law of the waxy crude oil pipeline was explored in depth in terms of dynamic exergy, pressure exergy, thermal exergy, and diffusion exergy. Taking an oil pipeline as an example, the factors influencing the exergy transfer coefficient and the exergy flow density were analyzed separately. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article Exergy Analyses of Onion Drying by Convection: Influence of Dryer Parameters on Performance
Entropy 2018, 20(5), 310; https://doi.org/10.3390/e20050310
Received: 18 March 2018 / Revised: 14 April 2018 / Accepted: 16 April 2018 / Published: 25 April 2018
PDF Full-text (2503 KB) | HTML Full-text | XML Full-text
Abstract
This research work is concerned with the exergy analysis of the continuous-convection drying of onion. The influence of temperature and air velocity was studied in terms of exergy parameters. The energy and exergy balances were carried out for the onion drying chamber. Its behavior was analyzed based on exergy efficiency, exergy loss rate, exergetic improvement potential rate, and sustainability index. The exergy loss rates increase as temperature and air velocity rise, because the overall heat transfer coefficient varies with these operating conditions. On the other hand, the exergy efficiency increases with air velocity, because energy utilization improves: most of the supplied energy is used for moisture evaporation. However, the exergy efficiency decreases as temperature rises, because less free moisture is available and the moisture must diffuse from the internal structure to the surface. The exergetic improvement potential rate values show that the exergy efficiency of the onion drying process can be improved. The sustainability index of the drying chamber varied from 1.9 to 5.1. To reduce the environmental impact of the process, the parameters must be modified so as to improve its exergy efficiency. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
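The abstract does not spell out its performance definitions, but two formulations common in the exergy literature (and consistent with the reported 1.9–5.1 range) are the Midilli–Dincer sustainability index and Van Gool's improvement potential; a minimal sketch, assuming those definitions:

```python
def sustainability_index(exergy_efficiency):
    """Midilli-Dincer form: SI = 1 / (1 - eta_ex); rises as less exergy is destroyed."""
    return 1.0 / (1.0 - exergy_efficiency)

def improvement_potential(exergy_efficiency, exergy_loss_rate):
    """Van Gool's improvement potential rate: IP = (1 - eta_ex) * exergy loss rate."""
    return (1.0 - exergy_efficiency) * exergy_loss_rate

# Under this definition, the reported SI range of 1.9-5.1 corresponds to
# exergy efficiencies of roughly 47% to 80%:
eta_low = 1.0 - 1.0 / 1.9
eta_high = 1.0 - 1.0 / 5.1
```

Inverting the index this way is only a consistency check on the reported range, not a reconstruction of the paper's calculations.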

Open Access Article Network Entropy for the Sequence Analysis of Functional Connectivity Graphs of the Brain
Entropy 2018, 20(5), 311; https://doi.org/10.3390/e20050311
Received: 1 March 2018 / Revised: 11 April 2018 / Accepted: 17 April 2018 / Published: 25 April 2018
PDF Full-text (18706 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic representation of functional brain networks through the sequence analysis of functional connectivity graphs of the brain (FCGB) has advanced the uncovering of evolving interaction mechanisms. However, most of the networks, even the event-related ones, are highly heterogeneous due to spurious interactions, which makes it challenging to reveal the change patterns of interactive information in a complex dynamic process. In this paper, we propose a network entropy (NE) method to measure the connectivity uncertainty of FCGB sequences, alleviating the spurious-interaction problem in dynamic network analysis and enabling associations with different events during a complex cognitive task. The proposed dynamic analysis approach calculates adjacency matrices from the ongoing electroencephalogram (EEG) in a sliding time window to form the FCGB sequences. The probability distribution in the Shannon entropy is replaced by the connection sequence distribution to measure the uncertainty of the FCGB, constituting the NE. Without averaging, we used the time-frequency transform of the NE of FCGB sequences to analyze event-related changes in oscillatory activity in single-trial traces during the complex cognitive process of driving. Finally, the results of a verification experiment showed that the NE of the FCGB sequences has a certain time-locked performance for different events related to driver fatigue in a prolonged driving task. The time errors between the extracted time of high-power NE and the recorded time of event occurrence were distributed within the range [−30 s, 30 s], and 90.1% of the time errors were distributed within the range [−10 s, 10 s]. The high correlation (r = 0.99997, p < 0.001) between the timing characteristics of the two types of signals indicates that the NE can reflect the actual dynamic interaction states of the brain. Thus, the method may have potential implications for cognitive studies and for the detection of physiological states. Full article
(This article belongs to the Special Issue Graph and Network Entropies)

Open Access Article Password Security as a Game of Entropies
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312
Received: 27 February 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 25 April 2018
PDF Full-text (791 KB) | HTML Full-text | XML Full-text
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing the memorability of the password (measured by Shannon entropy), as opposed to the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive effort of player 1 tied to changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The contribution is thus twofold: (i) it applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process under different angles (and not of a given password itself, since a single password cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
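The three entropies named in the abstract are standard quantities; a minimal sketch of how each scores a password-choice distribution (the game-theoretic machinery of the paper is not reproduced here):

```python
import math

def shannon_entropy(p):
    """Memorability proxy: H(p) = -sum p_i log2 p_i."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    """Guessing difficulty: H_min(p) = -log2 max_i p_i (the attacker's best single guess)."""
    return -math.log2(max(p))

def kl_divergence(p, q):
    """Relative entropy D(p || q): cost of moving from choice distribution q to p."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

uniform = [0.25] * 4           # ideal policy: every candidate password equally likely
skewed = [0.7, 0.1, 0.1, 0.1]  # a predictable habit: easier to remember, easier to guess
```

For the skewed distribution the min-entropy drops well below the Shannon entropy, which is exactly the gap the two-player trade-off in the paper exploits.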
Open Access Article Thermodynamic and Economic Analysis of an Integrated Solar Combined Cycle System
Entropy 2018, 20(5), 313; https://doi.org/10.3390/e20050313
Received: 19 March 2018 / Revised: 20 April 2018 / Accepted: 24 April 2018 / Published: 25 April 2018
PDF Full-text (4377 KB) | HTML Full-text | XML Full-text
Abstract
Integrating solar thermal energy into the conventional Combined Cycle Power Plant (CCPP) has been proved to be an efficient way to use solar energy and improve the generation efficiency of the CCPP. In this paper, energy, exergy, and economic (3E) analyses were applied to models of the Integrated Solar Combined Cycle System (ISCCS). The performance of the proposed system was assessed not only by energy and exergy efficiency, as well as exergy destruction, but also across varied thermodynamic parameters such as DNI and Ta. In addition, to better understand the real potential for improving the components, exergy destruction was split into endogenous/exogenous and avoidable/unavoidable parts. Results indicate that the combustion chamber of the gas turbine has the largest endogenous and unavoidable exergy destruction values, of 202.23 MW and 197.63 MW, while the corresponding values for the parabolic trough solar collector are 51.77 MW and 50.01 MW. For the overall power plant, the exogenous and avoidable exergy destruction rates were 17.61% and 17.78%, respectively. In addition, the proposed system can save fuel costs of 1.86 $/MW·h per year while reducing CO2 emissions by about 88.40 kg/MW·h, further highlighting the great potential of the ISCCS. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article Virtual Network Embedding Based on Graph Entropy
Entropy 2018, 20(5), 315; https://doi.org/10.3390/e20050315
Received: 25 January 2018 / Revised: 8 April 2018 / Accepted: 21 April 2018 / Published: 25 April 2018
PDF Full-text (17206 KB) | HTML Full-text | XML Full-text
Abstract
For embedding virtual networks into a large-scale substrate network, a massive amount of time is needed to search the resource space, even if the scale of the virtual network is small. The complexity of searching for candidate resources can be reduced if candidates in the substrate network can be located in a group of particularly well-matched areas, in which the resource distribution and communication structure of the substrate network exhibit maximal similarity with the objective virtual network. This work proposes to discover the optimally suitable resources in a substrate network corresponding to the objective virtual network by comparing their graph entropies. To this end, the substrate network is divided into substructures according to the importance of its nodes, and the entropies of these substructures are calculated. The virtual network is embedded preferentially into the substructure with the closest entropy, provided the substrate resources satisfy the demands of the virtual network. The experimental results validate that our proposal improves the efficiency of virtual network embedding, while embedding quality is maintained without significant degradation. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
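The paper's precise graph entropy for substructures is not given in the abstract; one common choice, the Shannon entropy of the degree distribution, illustrates how an entropy comparison can distinguish substructures (the 4-node graphs below are illustrative):

```python
import math

def degree_entropy(adj):
    """Shannon entropy of the degree distribution p_i = d_i / sum_j d_j."""
    degrees = [len(neighbors) for neighbors in adj.values()]
    total = sum(degrees)
    return -sum((d / total) * math.log2(d / total) for d in degrees if d > 0)

# A regular substructure (4-cycle) spreads degree mass evenly ...
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# ... while a hub-dominated substructure (star) concentrates it on one node.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

A virtual network would then be matched to the substructure whose entropy is closest to its own, as the abstract describes.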

Open Access Article Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio
Entropy 2018, 20(5), 316; https://doi.org/10.3390/e20050316
Received: 24 February 2018 / Revised: 11 April 2018 / Accepted: 11 April 2018 / Published: 25 April 2018
PDF Full-text (328 KB) | HTML Full-text | XML Full-text
Abstract
The Sharpe ratio is a widely used risk-adjusted performance measure in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic asymptotically follows a chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie’s method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method. Full article
(This article belongs to the Special Issue Foundations of Statistics)
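For reference, the sample Sharpe ratio itself is straightforward; a minimal sketch (the adjusted empirical likelihood inference is not reproduced here):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sample Sharpe ratio: mean excess return over the sample standard deviation."""
    mean_excess = statistics.mean(returns) - risk_free
    return mean_excess / statistics.stdev(returns)
```

The inferential problem the paper addresses is that this ratio is a function of two unknowns (mean and variance), so confidence statements about it involve nuisance parameters.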

Open Access Article Divergence from, and Convergence to, Uniformity of Probability Density Quantiles
Entropy 2018, 20(5), 317; https://doi.org/10.3390/e20050317
Received: 7 March 2018 / Revised: 10 April 2018 / Accepted: 19 April 2018 / Published: 25 April 2018
PDF Full-text (995 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We demonstrate that questions of convergence and divergence regarding shapes of distributions can be carried out in a location- and scale-free environment. This environment is the class of probability density quantiles (pdQs), obtained by normalizing the composition of the density with the associated quantile function. It has earlier been shown that the pdQ is representative of a location-scale family and carries essential information regarding the shape and tail behavior of the family. The pdQs are densities of continuous distributions with a common domain, the unit interval, facilitating metric and semi-metric comparisons. The Kullback–Leibler divergences from uniformity of these pdQs are mapped to illustrate their relative positions with respect to uniformity. To gain more insight into the information that is conserved under the pdQ mapping, we repeatedly apply the mapping and find that further applications of it are quite generally entropy increasing, so convergence to the uniform distribution is investigated. New fixed point theorems are established with elementary probabilistic arguments and illustrated by examples. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
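A worked example, assuming the definition of the pdQ as f(Q(u)) renormalized to integrate to one on the unit interval: for the unit-rate exponential, the pdQ and its KL divergence from uniformity have closed forms that can be checked numerically:

```python
import math

# pdQ of the unit-rate exponential: Q(u) = -ln(1 - u), so f(Q(u)) = 1 - u,
# and the normalizing constant is int_0^1 (1 - u) du = 1/2.
def pdq_exponential(u):
    return 2.0 * (1.0 - u)

# KL divergence from the uniform density on (0, 1), via the midpoint rule.
n = 200_000
kl = sum(
    pdq_exponential((i + 0.5) / n) * math.log(pdq_exponential((i + 0.5) / n))
    for i in range(n)
) / n
# Analytically, int_0^1 2(1-u) ln(2(1-u)) du = ln 2 - 1/2, approximately 0.1931.
```

Note the pdQ here is location- and scale-free: any Exp(lambda) family member yields the same pdQ, which is the point of the construction.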

Open Access Feature Paper Article Quantifying Configuration-Sampling Error in Langevin Simulations of Complex Molecular Systems
Entropy 2018, 20(5), 318; https://doi.org/10.3390/e20050318
Received: 12 February 2018 / Revised: 14 April 2018 / Accepted: 15 April 2018 / Published: 26 April 2018
Cited by 1 | PDF Full-text (2989 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al. introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback–Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Here, we introduce a variant of this near-equilibrium estimator capable of measuring the error in the configuration-space marginal density, validating it against a complex but exact nested Monte Carlo estimator to show that it reproduces the KL divergence with high fidelity. To illustrate its utility, we employ this new near-equilibrium estimator to assess a claim that a recently proposed Langevin integrator introduces extremely small configuration-space density errors up to the stability limit at no extra computational expense. Finally, we show how this approach to quantifying sampling bias can be applied to a wide variety of stochastic integrators by following a straightforward procedure to compute the appropriate shadow work, and describe how it can be extended to quantify the error in arbitrary marginal or conditional distributions of interest. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

Open Access Article Distributed One Time Password Infrastructure for Linux Environments
Entropy 2018, 20(5), 319; https://doi.org/10.3390/e20050319
Received: 2 March 2018 / Revised: 14 April 2018 / Accepted: 23 April 2018 / Published: 26 April 2018
PDF Full-text (1250 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, a great deal of critical information and many services are hosted on computer systems. Proper access control to these resources is essential to avoid malicious actions that could cause huge losses to home and professional users. Access control systems have evolved from the first password-based systems to modern mechanisms using smart cards, certificates, tokens, biometric systems, etc. However, when designing a system, it is necessary to take into account its particular limitations, such as connectivity, infrastructure, or budget. In addition, one of the main objectives must be to ensure system usability, but this property is usually orthogonal to security. Thus, the use of passwords is still common. In this paper, we present a new password-based access control system that aims to improve password security with minimal impact on system usability. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
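The paper's distributed scheme is not detailed in the abstract; a classical building block for password-based one-time authentication, the Lamport hash chain, sketches the general idea (SHA-256 and the parameter choices here are illustrative, not the paper's design):

```python
import hashlib

def hash_chain(seed: bytes, length: int) -> list:
    """chain[k] = H^k(seed); the verifier initially stores only chain[length]."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify_and_advance(candidate: bytes, stored: bytes):
    """Accept if H(candidate) equals the stored value, then store the candidate,
    so each one-time password is valid exactly once."""
    if hashlib.sha256(candidate).digest() == stored:
        return True, candidate
    return False, stored

chain = hash_chain(b"shared secret", 1000)
stored = chain[-1]
ok, stored = verify_and_advance(chain[-2], stored)  # first login reveals chain[999]
```

The appeal for constrained environments is that the verifier never stores a reusable secret: compromising the stored value does not reveal any password that will be accepted later.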

Open Access Article The Relationship between Postural Stability and Lower-Limb Muscle Activity Using an Entropy-Based Similarity Index
Entropy 2018, 20(5), 320; https://doi.org/10.3390/e20050320
Received: 28 March 2018 / Revised: 14 April 2018 / Accepted: 21 April 2018 / Published: 26 April 2018
PDF Full-text (2781 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this study is to determine whether centre of pressure (COP) measurements of postural stability can be used to represent electromyography (EMG) measurements of lower-limb muscle activity. If so, cost-effective COP measurements could be used to indicate the level of both postural stability and lower-limb muscle activity. The Hilbert–Huang Transform method was used to analyse data from the experiment designed to examine the correlation between lower-limb muscles and postural stability. We randomly selected 24 university students to participate in eight scenarios and simultaneously measured their COP and EMG signals during the experiments. Empirical Mode Decomposition was used to identify the intrinsic mode functions (IMFs) that can distinguish between the COP and EMG at different states. Subsequently, similarity indices and synchronization analyses were used to calculate the correlation between lower-limb muscle strength and postural stability. The IMF5 of the COP signals and the IMF6 of the EMG signals were not significantly different, and the average frequency was 0.8 Hz, with a range of 0–2 Hz. When postural stability was poor, the COP and EMG had high synchronization, with index values within the range of 0.010–0.015. With good postural stability, the synchronization indices were between 0.006 and 0.080 and both exhibited low synchronization. The COP signals and the low-frequency EMG signals were highly correlated. In conclusion, we demonstrated that the COP may provide enough information on postural stability without the EMG data. Full article

Open Access Article Finite Difference Method for Time-Space Fractional Advection–Diffusion Equations with Riesz Derivative
Entropy 2018, 20(5), 321; https://doi.org/10.3390/e20050321
Received: 1 March 2018 / Revised: 20 April 2018 / Accepted: 20 April 2018 / Published: 26 April 2018
PDF Full-text (1710 KB) | HTML Full-text | XML Full-text
Abstract
In this article, a numerical scheme is formulated and analysed to solve the time-space fractional advection–diffusion equation, where the Riesz derivative and the Caputo derivative are considered in the spatial and temporal directions, respectively. The Riesz space derivative is approximated by the second-order fractional weighted and shifted Grünwald–Letnikov formula. Based on the equivalence between the fractional differential equation and the integral equation, we have transformed the fractional differential equation into an equivalent integral equation. Then, the integral is approximated by the trapezoidal formula. Further, the stability and convergence analyses are discussed rigorously. The resulting scheme is formally proved to be second-order accurate in both space and time. Numerical experiments are also presented to verify the theoretical analysis. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
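The first-order Grünwald–Letnikov weights underlying such schemes can be generated by a simple recurrence; a minimal sketch (the second-order weighted-and-shifted combination used in the paper is not reproduced here):

```python
def gl_coefficients(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k), via the stable
    recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g
```

A sanity check: for alpha = 2 the weights reduce to the classical [1, -2, 1] second-difference stencil, so the fractional approximation degenerates to the ordinary Laplacian as expected.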

Open Access Article A New Two-Dimensional Map with Hidden Attractors
Entropy 2018, 20(5), 322; https://doi.org/10.3390/e20050322
Received: 31 January 2018 / Revised: 21 March 2018 / Accepted: 24 April 2018 / Published: 27 April 2018
PDF Full-text (5288 KB) | HTML Full-text | XML Full-text
Abstract
Investigations of hidden attractors have mainly concerned continuous-time dynamic systems, and there are only a few investigations of hidden attractors in discrete-time dynamic systems. The classical chaotic attractors of the Logistic map, Tent map, Hénon map, Arnold’s cat map, and other widely known maps are those excited from unstable fixed points. In this paper, the hidden dynamics of a new two-dimensional map inspired by Arnold’s cat map is investigated, and the existence of fixed points and their stabilities are studied in detail. Full article
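The paper's new map is not given in the abstract; the classical Arnold cat map that inspired it illustrates the kind of fixed-point stability analysis involved:

```python
import math

def cat_map(x, y):
    """Arnold's cat map on the unit torus: (x, y) -> (x + y, x + 2y) mod 1."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

# The origin is a fixed point. Its Jacobian [[1, 1], [1, 2]] has trace 3 and
# determinant 1, so the eigenvalues are (3 +/- sqrt(5)) / 2: one larger than 1
# and one smaller, i.e., an unstable saddle. Classical attractors are "excited"
# from such unstable fixed points; a hidden attractor, by contrast, has a basin
# of attraction not connected to the neighborhood of any fixed point.
trace, det = 3.0, 1.0
disc = math.sqrt(trace * trace - 4.0 * det)
lam_plus, lam_minus = (trace + disc) / 2.0, (trace - disc) / 2.0
```

Checking where the eigenvalues of the Jacobian sit relative to the unit circle is exactly the fixed-point stability study the abstract announces for the new map.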

Open Access Article Transfer Information Energy: A Quantitative Indicator of Information Transfer between Time Series
Entropy 2018, 20(5), 323; https://doi.org/10.3390/e20050323
Received: 26 February 2018 / Revised: 19 April 2018 / Accepted: 25 April 2018 / Published: 27 April 2018
PDF Full-text (1250 KB) | HTML Full-text | XML Full-text
Abstract
We introduce an information-theoretical approach for analyzing information transfer between time series. Rather than using the Transfer Entropy (TE), we define and apply the Transfer Information Energy (TIE), which is based on Onicescu’s Information Energy. Whereas the TE can be used as a measure of the reduction in uncertainty about one time series given another, the TIE may be viewed as a measure of the increase in certainty about one time series given another. We compare the TIE and the TE in two known time series prediction applications. First, we analyze stock market indexes from the Americas, Asia/Pacific, and Europe, with the goal of inferring the information transfer between them (i.e., how they influence each other). In the second application, we take a bivariate time series of the breath rate and instantaneous heart rate of a sleeping human suffering from sleep apnea, with the goal of determining the information transfer heart → breath vs. breath → heart. In both applications, the computed TE and TIE values are strongly correlated, meaning that the TIE can substitute for the TE in such applications, even though they measure symmetric phenomena. The advantage of using the TIE is computational: we can obtain similar results, but faster. Full article
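Onicescu's information energy, the building block of the TIE, is elementary to compute; a minimal sketch (the conditional construction that turns it into a transfer measure is the paper's contribution and is not reproduced here):

```python
def information_energy(p):
    """Onicescu's information energy: E(p) = sum of p_i^2.
    Minimal (1/n) for the uniform distribution, maximal (1) for a point mass."""
    return sum(x * x for x in p)
```

Where Shannon entropy measures uncertainty, information energy measures certainty; this duality is why the TIE reads as an increase in certainty where the TE reads as a reduction in uncertainty. Computationally, summing squares avoids the logarithms the TE requires, which is the speed advantage the abstract reports.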

Open Access Article Application of Multiscale Entropy in Mechanical Fault Diagnosis of High Voltage Circuit Breaker
Entropy 2018, 20(5), 325; https://doi.org/10.3390/e20050325
Received: 22 February 2018 / Revised: 25 April 2018 / Accepted: 26 April 2018 / Published: 28 April 2018
PDF Full-text (3557 KB) | HTML Full-text | XML Full-text
Abstract
Mechanical fault diagnosis of a circuit breaker can help improve the reliability of power systems. Therefore, a new method based on multiscale entropy (MSE) and the support vector machine (SVM) is proposed to diagnose faults in high voltage circuit breakers. First, Variational Mode Decomposition (VMD) is used to process the high voltage circuit breaker’s vibration signals, and the reconstructed signal suppresses the effect of noise. Second, the multiscale entropy of the reconstructed signal is calculated and selected as a feature vector. Finally, based on the feature vector, fault identification and classification are realized by the SVM. The feature vector constructed from multiscale entropy is compared with other feature vectors to illustrate the superiority of the proposed method. Through experimentation on a 35 kV SF6 circuit breaker, the feasibility and applicability of the proposed method for fault diagnosis are verified. Full article
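A minimal sketch of multiscale entropy as commonly defined (coarse-graining followed by sample entropy); the VMD preprocessing and SVM classification stages are not reproduced, and the tolerance r is usually scaled by the signal's standard deviation:

```python
import math

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    return [sum(x[i:i + tau]) / tau for i in range(0, len(x) - len(x) % tau, tau)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m within
    Chebyshev distance r, and A counts pairs of length m + 1."""
    n = len(x)

    def pairs(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b, a = pairs(m), pairs(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales, m=2, r=0.2):
    """MSE curve: sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, tau), m, r) for tau in scales]
```

The resulting MSE curve (one value per scale) is the kind of feature vector the abstract describes feeding into the SVM. The double loop here is O(n^2) per scale; production implementations typically vectorize or use KD-tree neighbor counting.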
Open AccessArticle Quantization and Bifurcation beyond Square-Integrable Wavefunctions
Entropy 2018, 20(5), 327; https://doi.org/10.3390/e20050327
Received: 27 March 2018 / Revised: 22 April 2018 / Accepted: 26 April 2018 / Published: 29 April 2018
PDF Full-text (7322 KB) | HTML Full-text | XML Full-text
Abstract
Probability interpretation is the cornerstone of standard quantum mechanics. To ensure the validity of the probability interpretation, wavefunctions have to satisfy the square-integrable (SI) condition, which gives rise to the well-known phenomenon of energy quantization in confined quantum systems. On the other hand, nonsquare-integrable (NSI) solutions to the Schrödinger equation are usually ruled out and have long been believed to be irrelevant to energy quantization. This paper proposes a quantum-trajectory approach to energy quantization by relaxing the SI condition and considering both SI and NSI solutions to the Schrödinger equation. Contrary to common belief, we find that both SI and NSI wavefunctions contribute to energy quantization. SI wavefunctions help to locate the bifurcation points at which the energy has a step jump, while NSI wavefunctions form the flat parts of the stair-like distribution of the quantized energies. The consideration of NSI wavefunctions furthermore reveals a new quantum phenomenon regarding the synchronicity between the energy quantization process and the center-saddle bifurcation process. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
Open AccessArticle The Gibbs Paradox: Lessons from Thermodynamics
Entropy 2018, 20(5), 328; https://doi.org/10.3390/e20050328
Received: 31 March 2018 / Revised: 15 April 2018 / Accepted: 27 April 2018 / Published: 30 April 2018
PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
The Gibbs paradox in statistical mechanics is often taken to indicate that already in the classical domain particles should be treated as fundamentally indistinguishable. This paper shows, on the contrary, how one can recover the thermodynamical account of the entropy of mixing, while treating states that only differ by permutations of similar particles as distinct. By reference to the orthodox theory of thermodynamics, it is argued that entropy differences are only meaningful if they are related to reversible processes connecting the initial and final state. For mixing processes, this means that processes should be considered in which particle number is allowed to vary. Within the context of statistical mechanics, the Gibbsian grandcanonical ensemble is a suitable device for describing such processes. It is shown how the grandcanonical entropy relates in the appropriate way to changes of other thermodynamical quantities in reversible processes, and how the thermodynamical account of the entropy of mixing is recovered even when treating the particles as distinguishable. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)
Open AccessArticle Minimum Penalized ϕ-Divergence Estimation under Model Misspecification
Entropy 2018, 20(5), 329; https://doi.org/10.3390/e20050329
Received: 8 March 2018 / Revised: 22 April 2018 / Accepted: 27 April 2018 / Published: 30 April 2018
PDF Full-text (329 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the consequences of assuming a wrong model for multinomial data when using minimum penalized ϕ -divergence, also known as minimum penalized disparity estimators, to estimate the model parameters. These estimators are shown to converge to a well-defined limit. An application of the results obtained shows that a parametric bootstrap consistently estimates the null distribution of a certain class of test statistics for model misspecification detection. An illustrative application to the accuracy assessment of the thematic quality in a global land cover map is included. Full article
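As an illustration of the minimum-divergence idea for multinomial data (without the penalization term or the general ϕ studied in the paper), the Kullback-Leibler member of the ϕ-divergence family can be minimized over a hypothetical binomial cell-probability model by a simple grid search:

```python
import math

def kl_divergence(p_hat, p_model):
    """phi-divergence with phi(t) = t*log(t): the Kullback-Leibler case."""
    return sum(ph * math.log(ph / pm) for ph, pm in zip(p_hat, p_model) if ph > 0)

def binomial_probs(theta, n=2):
    """Cell probabilities of a Binomial(n, theta) model for cells 0..n."""
    return [math.comb(n, k) * theta ** k * (1 - theta) ** (n - k)
            for k in range(n + 1)]

def min_divergence_estimate(counts, n=2, grid=1000):
    """Minimum-KL estimate of theta for multinomial counts over {0..n}."""
    total = sum(counts)
    p_hat = [c / total for c in counts]
    best = min((kl_divergence(p_hat, binomial_probs(t, n)), t)
               for t in (i / grid for i in range(1, grid)))
    return best[1]
```

Under model misspecification the same machinery still converges, but to the pseudo-true parameter minimizing the divergence to the assumed family, which is the limit the paper characterizes.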
Open AccessArticle Continuity of Channel Parameters and Operations under Various DMC Topologies
Entropy 2018, 20(5), 330; https://doi.org/10.3390/e20050330
Received: 27 March 2018 / Revised: 17 April 2018 / Accepted: 27 April 2018 / Published: 30 April 2018
PDF Full-text (388 KB) | HTML Full-text | XML Full-text
Abstract
We study the continuity of many channel parameters and operations under various topologies on the space of equivalent discrete memoryless channels (DMC). We show that mutual information, channel capacity, Bhattacharyya parameter, probability of error of a fixed code and optimal probability of error for a given code rate and block length are continuous under various DMC topologies. We also show that channel operations such as sums, products, interpolations and Arıkan-style transformations are continuous. Full article
(This article belongs to the Section Information Theory)
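The continuity of, for example, mutual information in the channel's transition probabilities can be checked numerically. The sketch below (generic code, not from the paper) evaluates I(X;Y) for a binary symmetric channel and shows that a tiny perturbation of the crossover probability changes it only marginally.

```python
import math

def mutual_information(channel, input_dist):
    """I(X;Y) in bits for a DMC W[x][y] and input distribution p(x)."""
    ny = len(channel[0])
    out = [sum(input_dist[x] * channel[x][y] for x in range(len(channel)))
           for y in range(ny)]
    mi = 0.0
    for x, px in enumerate(input_dist):
        for y in range(ny):
            w = channel[x][y]
            if px > 0 and w > 0:
                mi += px * w * math.log2(w / out[y])
    return mi

def bsc(p):
    """Binary symmetric channel with crossover probability p."""
    return [[1 - p, p], [p, 1 - p]]
```

With a uniform input this evaluates the BSC capacity 1 − H₂(p); nearby values of p give nearby capacities, in line with the continuity results.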
Open AccessArticle Non-Gaussian Systems Control Performance Assessment Based on Rational Entropy
Entropy 2018, 20(5), 331; https://doi.org/10.3390/e20050331
Received: 8 February 2018 / Revised: 30 April 2018 / Accepted: 30 April 2018 / Published: 1 May 2018
PDF Full-text (2890 KB) | HTML Full-text | XML Full-text
Abstract
Control loop Performance Assessment (CPA) plays an important role in system operations. Stochastic statistical CPA indices, such as the minimum variance controller (MVC)-based index, are among the most widely used. In this paper, a new minimum entropy controller (MEC)-based CPA method for linear non-Gaussian systems is proposed. In this method, the probability density function (PDF) and the rational entropy (RE) are used to describe the characteristics and the uncertainty of random variables, respectively. To better estimate the performance benchmark, an improved estimation of distribution algorithm (EDA), used to estimate the system parameters and the noise PDF, is given. The effectiveness of the proposed method is illustrated through case studies on an ARMAX system. Full article
Open AccessArticle Applying Discrete Homotopy Analysis Method for Solving Fractional Partial Differential Equations
Entropy 2018, 20(5), 332; https://doi.org/10.3390/e20050332
Received: 3 March 2018 / Revised: 27 April 2018 / Accepted: 27 April 2018 / Published: 1 May 2018
PDF Full-text (2619 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we develop a space-discrete version of the homotopy analysis method (DHAM) to find the solutions of linear and nonlinear fractional partial differential equations with a fractional time derivative of order α (0 < α ≤ 1). The DHAM contains the auxiliary parameter ℏ, which provides a simple way to control the convergence region of the solution series. The efficiency and accuracy of the proposed method are demonstrated on test problems with initial conditions. The results obtained are compared with the exact solutions when α = 1; it is shown that they are in good agreement. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Principal Curves for Statistical Divergences and an Application to Finance
Entropy 2018, 20(5), 333; https://doi.org/10.3390/e20050333
Received: 22 February 2018 / Revised: 3 April 2018 / Accepted: 4 April 2018 / Published: 2 May 2018
PDF Full-text (286 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a method for the beta pricing model under the consideration of non-Gaussian returns, by means of a generalization of the mean-variance model and the use of principal curves to define a divergence model for the optimization of the pricing model. We rely on the q-exponential model and consider the properties of the divergences that are used to describe the statistical model and fully characterize the behavior of the assets. We derive the minimum divergence portfolio, which generalizes Markowitz’s (mean-divergence) approach; then, relying on the information-geometrical aspects of the distributions that model the data, the Capital Asset Pricing Model (CAPM) is derived under this geometric characterization, all within the principal curves approach. We discuss the possibility of integrating our model into an adaptive procedure for the search of optimum points in finance applications. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Effective Boundary Slip Induced by Surface Roughness and Their Coupled Effect on Convective Heat Transfer of Liquid Flow
Entropy 2018, 20(5), 334; https://doi.org/10.3390/e20050334
Received: 29 March 2018 / Revised: 22 April 2018 / Accepted: 25 April 2018 / Published: 2 May 2018
PDF Full-text (1517 KB) | HTML Full-text | XML Full-text
Abstract
As a significant interfacial property of micro/nano fluidic systems, the effective boundary slip can be induced by surface roughness. However, the effect of surface roughness on the effective slip is still not clear; both increased and decreased effective boundary slip have been found with increased roughness. The present work develops a simplified model to study the effect of surface roughness on the effective boundary slip. In the created rough models, the reference position of the rough surfaces used to determine the effective boundary slip was set based on the ISO/ASME standard, and the surface roughness parameters, including Ra (arithmetical mean deviation of the assessed profile), Rsm (mean width of the assessed profile elements) and the shape of the texture, were varied to form different surface roughnesses. Then, the effective boundary slip of fluid flow over the rough surface was analyzed using COMSOL 5.3. The results show that the effective boundary slip induced by the surface roughness of a fully wetted rough surface remains negative and decreases further with increasing Ra or decreasing Rsm. Different shapes of roughness texture also result in different effective slip. A simplified correction method for the measured effective boundary slip was developed and proved to be efficient when Rsm is no larger than 200 nm. Another important finding of the present work is that the convective heat transfer first increases and then changes little with increasing Ra, while the effective boundary slip keeps decreasing. It is believed that increasing Ra enlarges the solid-liquid interface area available for convective heat transfer; however, when Ra is large enough, the decreasing roughness-induced effective boundary slip counteracts the enhancing effect of the roughness itself on the convective heat transfer. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
Open AccessArticle What Constitutes Emergent Quantum Reality? A Complex System Exploration from Entropic Gravity and the Universal Constants
Entropy 2018, 20(5), 335; https://doi.org/10.3390/e20050335
Received: 29 March 2018 / Revised: 26 April 2018 / Accepted: 30 April 2018 / Published: 2 May 2018
PDF Full-text (256 KB) | HTML Full-text | XML Full-text
Abstract
In this work, it is acknowledged that important attempts to devise an emergent quantum (gravity) theory require space-time to be discretized at the Planck scale. It is therefore conjectured that reality is identical to a sub-quantum dynamics of ontological micro-constituents that are connected by a single interaction law. To arrive at a complex system-based toy-model identification of these micro-constituents, two strategies are combined. First, by seeing gravity as an entropic phenomenon and generalizing the dimensional reduction of the associated holographic principle, the universal constants of free space are related to assumed attributes of the micro-constituents. Second, as the effective field dynamics of the micro-constituents must eventually obey Einstein’s field equations, a sub-quantum interaction law is derived from a solution of these equations. A Planck-scale origin for thermodynamic black hole characteristics and novel views on entropic gravity theory result from this approach, which eventually provides a different view on quantum gravity and its unification with the fundamental forces. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Entropy Generation Analysis and Natural Convection in a Nanofluid-Filled Square Cavity with a Concentric Solid Insert and Different Temperature Distributions
Entropy 2018, 20(5), 336; https://doi.org/10.3390/e20050336
Received: 1 April 2018 / Revised: 29 April 2018 / Accepted: 30 April 2018 / Published: 3 May 2018
PDF Full-text (2678 KB) | HTML Full-text | XML Full-text
Abstract
The problem of entropy generation analysis and natural convection in a nanofluid square cavity with a concentric solid insert and different temperature distributions is studied numerically by the finite difference method. An isothermal heater is placed on the bottom wall, while isothermal cold sources are distributed along the top and side walls of the square cavity; the remainder of these walls is kept adiabatic. Water-based nanofluids with Al2O3 nanoparticles are chosen for the investigation. The governing dimensionless parameters of this study are the nanoparticle volume fraction (0 ≤ ϕ ≤ 0.09), the Rayleigh number (10³ ≤ Ra ≤ 10⁶), the thermal conductivity ratio (0.44 ≤ Kr ≤ 23.8) and the length of the inner solid (0 ≤ D ≤ 0.7). Comparisons with previously published experimental and numerical works verify very good agreement with the proposed numerical method. Numerical results are presented graphically in the form of streamlines, isotherms and local entropy generation, as well as the local and average Nusselt numbers. The obtained results indicate that the thermal conductivity ratio and the inner solid size are excellent control parameters for the optimization of heat transfer and the Bejan number within the fully heated and partially cooled square cavity. Full article
(This article belongs to the Section Thermodynamics)
Open AccessFeature PaperArticle Lyapunov Exponents of a Discontinuous 4D Hyperchaotic System of Integer or Fractional Order
Entropy 2018, 20(5), 337; https://doi.org/10.3390/e20050337
Received: 18 April 2018 / Revised: 30 April 2018 / Accepted: 30 April 2018 / Published: 3 May 2018
PDF Full-text (8814 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the dynamics of the local finite-time Lyapunov exponents of a 4D hyperchaotic system of integer or fractional order, with a discontinuous right-hand side and posed as an initial value problem, are investigated graphically. It is shown that a discontinuous system of integer or fractional order cannot be numerically integrated using methods designed for continuous differential equations. A possible approach for discontinuous systems is presented. To integrate the initial value problem of fractional or integer order, the discontinuous system is continuously approximated via Filippov’s regularization and Cellina’s theorem. The Lyapunov exponents of the approximated system are represented as a function of two variables: two parameters in the integer-order case, or the fractional order and one parameter in the fractional-order case. The obtained three-dimensional representation leads to comprehensive conclusions regarding the nature, differences and sign of the Lyapunov exponents in both the integer-order and fractional-order cases. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
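The regularization step can be illustrated in one dimension: a discontinuous relay term sign(x) is replaced by a steep continuous surrogate such as tanh(x/ε), after which standard integration applies. This is a minimal illustration of the idea only; the paper's 4D system and its Lyapunov-exponent computation are not reproduced.

```python
import math

def regularized_sign(x, eps=1e-3):
    """Continuous surrogate for sign(x); converges pointwise to
    sign(x) as eps -> 0, in the spirit of Filippov regularization."""
    return math.tanh(x / eps)

def integrate_relay(x0, dt=1e-3, steps=2000, eps=1e-3):
    """Euler integration of dx/dt = -sign(x) with the smooth surrogate.
    Trajectories reach the sliding surface x = 0 and stay near it,
    whereas a naive solver would chatter on the discontinuity."""
    x = x0
    for _ in range(steps):
        x -= dt * regularized_sign(x, eps)
    return x
```

With the surrogate in place, standard finite-time Lyapunov-exponent estimators for continuous systems become applicable.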
Open AccessArticle Pressure-Volume Work for Metastable Liquid and Solid at Zero Pressure
Entropy 2018, 20(5), 338; https://doi.org/10.3390/e20050338
Received: 15 February 2018 / Revised: 14 March 2018 / Accepted: 20 April 2018 / Published: 3 May 2018
PDF Full-text (1448 KB) | HTML Full-text | XML Full-text
Abstract
Unlike for gases, for liquids and solids the pressure of a system can be not only positive, but also negative, or even zero. Upon isobaric heat exchange (heating or cooling) at p = 0, the volume work (p-V) should be zero, assuming the general validity of the traditional δW = dWp = −pdV equality. This means that at zero pressure, a special process can be realized: a macroscopic change of volume achieved by isobaric heating/cooling without any work done by the system on its surroundings or by the surroundings on the system. A neologism is proposed for these dWp = 0 (and in general, also for non-trivial δW = 0 and W = 0) processes: “aergiatic” (from Greek: Ἀεργία, “inactivity”). In this way, two phenomenologically similar processes—adiabatic without any heat exchange, and aergiatic without any work—would have matching, but well-distinguishable terms. Full article
Open AccessArticle Secure and Reliable Key Agreement with Physical Unclonable Functions
Entropy 2018, 20(5), 340; https://doi.org/10.3390/e20050340
Received: 25 March 2018 / Revised: 17 April 2018 / Accepted: 27 April 2018 / Published: 3 May 2018
PDF Full-text (638 KB) | HTML Full-text | XML Full-text
Abstract
Different transforms used in binding a secret key to correlated physical-identifier outputs are compared. Decorrelation efficiency is the metric used to determine transforms that give highly-uncorrelated outputs. Scalar quantizers are applied to transform outputs to extract uniformly distributed bit sequences to which secret keys are bound. A set of transforms that perform well in terms of the decorrelation efficiency is applied to ring oscillator (RO) outputs to improve the uniqueness and reliability of extracted bit sequences, to reduce the hardware area and information leakage about the key and RO outputs, and to maximize the secret-key length. Low-complexity error-correction codes are proposed to illustrate two complete key-binding systems with perfect secrecy, and better secret-key and privacy-leakage rates than existing methods. A reference hardware implementation is also provided to demonstrate that the transform-coding approach occupies a small hardware area. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
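The transform-plus-quantizer front end can be sketched generically. The paper compares several decorrelating transforms; the orthonormal DCT-II below is one standard choice, shown here as a hypothetical stand-in with a plain sign quantizer, and the error-correction and key-binding stages are not reproduced.

```python
import math

def dct2(x):
    """Orthonormal DCT-II, a standard decorrelating transform
    applied to a vector of ring-oscillator outputs."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(xi * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, xi in enumerate(x))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def quantize_bits(coeffs):
    """1-bit scalar quantizer: keep the sign of each coefficient."""
    return [1 if c >= 0 else 0 for c in coeffs]
```

In a key-binding (fuzzy commitment) scheme, the bit sequence `quantize_bits(dct2(ro_counts))` would then be combined with an error-correcting codeword to bind the secret key.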
Open AccessArticle M-SAC-VLADNet: A Multi-Path Deep Feature Coding Model for Visual Classification
Entropy 2018, 20(5), 341; https://doi.org/10.3390/e20050341
Received: 19 March 2018 / Revised: 21 April 2018 / Accepted: 26 April 2018 / Published: 4 May 2018
PDF Full-text (929 KB) | HTML Full-text | XML Full-text
Abstract
Vector of locally aggregated descriptors (VLAD) coding has become an efficient feature coding model for retrieval and classification. In some recent works, the VLAD coding method has been extended to a deep feature coding model called NetVLAD, which improves significantly over the original VLAD method. Although the NetVLAD model has shown its potential for retrieval and classification, its discriminative ability has not been fully investigated. In this paper, we propose a new end-to-end feature coding network that is more discriminative than the NetVLAD model. First, we propose a sparsely-adaptive and covariance VLAD model. Next, we derive the back-propagation models of all the proposed layers and extend the proposed feature coding model to an end-to-end neural network. Finally, we construct a multi-path feature coding network that aggregates multiple newly-designed feature coding networks for visual classification. Experimental results show that our feature coding network is very effective for visual classification. Full article
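For context, the classic (non-deep) VLAD aggregation that NetVLAD softens, and that this paper extends further, can be sketched as follows; the sparsely-adaptive and covariance terms of the proposed model are not reproduced.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Classic VLAD: hard-assign each local descriptor to its nearest
    codeword, accumulate residuals per codeword, flatten, L2-normalize.
    descriptors: (n, d) array; codebook: (k, d) array."""
    # squared distances (n, k) and hard assignment per descriptor
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    k, d = codebook.shape
    v = np.zeros((k, d))
    for i, a in enumerate(assign):
        v[a] += descriptors[i] - codebook[a]   # accumulate residual
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

NetVLAD replaces the hard `argmin` assignment with a learnable softmax assignment so the whole encoding becomes differentiable end-to-end.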
Open AccessArticle The Agent-Based Model and Simulation of Sexual Selection and Pair Formation Mechanisms
Entropy 2018, 20(5), 342; https://doi.org/10.3390/e20050342
Received: 11 April 2018 / Revised: 1 May 2018 / Accepted: 2 May 2018 / Published: 4 May 2018
PDF Full-text (3093 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an agent-based simulation model of sexual selection and pair formation mechanisms is proposed. Sexual selection is a mechanism that occurs when the numbers of individuals of both sexes are almost identical, while the reproduction costs for one of the sexes are much higher. The mechanism of creating pairs allows individuals to form stable, reproducing pairs. Simulation experiments, carried out using the proposed agent-based model and several fitness landscapes, were aimed at verifying whether sexual selection and the mechanism of pair formation can trigger sympatric speciation and whether they can promote and maintain population diversity. The experiments were mainly focused on the mechanism of pair formation and its impact on speciation and population diversity. The results of the conducted experiments show that sexual selection can start speciation processes and maintain population diversity. The mechanism of creating pairs, when it occurs along with sexual selection, has a significant impact on the course of speciation and the maintenance of population diversity. Full article
(This article belongs to the Section Complexity)
Open AccessArticle Topological Structures on DMC Spaces
Entropy 2018, 20(5), 343; https://doi.org/10.3390/e20050343
Received: 25 March 2018 / Revised: 19 April 2018 / Accepted: 27 April 2018 / Published: 4 May 2018
PDF Full-text (469 KB) | HTML Full-text | XML Full-text
Abstract
Two channels are said to be equivalent if they are degraded from each other. The space of equivalent channels with input alphabet X and output alphabet Y can be naturally endowed with the quotient of the Euclidean topology by the equivalence relation. A topology on the space of equivalent channels with fixed input alphabet X and arbitrary but finite output alphabet is said to be natural if and only if it induces the quotient topology on the subspaces of equivalent channels sharing the same output alphabet. We show that every natural topology is σ-compact, separable and path-connected. The finest natural topology, which we call the strong topology, is shown to be compactly generated, sequential and T₄. On the other hand, the strong topology is not first-countable anywhere, hence it is not metrizable. We introduce a metric distance on the space of equivalent channels which compares the noise levels between channels. The induced metric topology, which we call the noisiness topology, is shown to be natural. We also study topologies that are inherited from the space of meta-probability measures by identifying channels with their Blackwell measures. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle A New Local Fractional Entropy-Based Model for Kidney MRI Image Enhancement
Entropy 2018, 20(5), 344; https://doi.org/10.3390/e20050344
Received: 12 April 2018 / Revised: 2 May 2018 / Accepted: 3 May 2018 / Published: 5 May 2018
PDF Full-text (3500 KB) | HTML Full-text | XML Full-text
Abstract
Kidney image enhancement is challenging due to the unpredictable quality of MRI images, as well as the nature of kidney diseases. The focus of this work is on kidney image enhancement, achieved by proposing a new Local Fractional Entropy (LFE)-based model. The proposed model estimates the probability of pixels that represent edges based on the entropy of the neighboring pixels, which results in local fractional entropy. When there is a small change in the intensity values (indicating the presence of an edge in the image), the local fractional entropy gives fine image details. Similarly, when no change in intensity values is present (indicating smooth texture), the LFE does not provide fine details, based on the fact that there is no edge information. Tests were conducted on a large dataset of different, poor-quality kidney images to show that the proposed model is useful and effective. A comparative study with the classical methods, coupled with the latest enhancement methods, shows that the proposed model outperforms the existing methods. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Remote Sensing Extraction Method of Tailings Ponds in Ultra-Low-Grade Iron Mining Area Based on Spectral Characteristics and Texture Entropy
Entropy 2018, 20(5), 345; https://doi.org/10.3390/e20050345
Received: 31 January 2018 / Revised: 19 March 2018 / Accepted: 5 May 2018 / Published: 6 May 2018
PDF Full-text (3839 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid development of the steel and iron industry, ultra-low-grade iron ore has been developed extensively since the beginning of this century in China. Due to the high concentration ratio of the iron ore, a large amount of tailings was produced and many tailings ponds were established in the mining area. This poses a great threat to regional safety and the environment because of dam breaks and metal pollution. The spatial distribution is the basic information for monitoring the status of tailings ponds. Taking Changhe Mining Area as an example, tailings ponds were extracted by using Landsat 8 OLI images based on both spectral and texture characteristics. Firstly, ultra-low-grade iron-related objects (i.e., tailings and iron ore) were extracted by the Ultra-low-grade Iron-related Objects Index (ULIOI) with a threshold. Secondly, the tailings pond was distinguished from the stope due to their entropy difference in the panchromatic image at a 7 × 7 window size. This remote sensing method could be beneficial to safety and environmental management in the mining area. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
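The texture step, Shannon entropy of the grey levels in a 7 × 7 window of the panchromatic band, can be sketched generically (pure Python on a list-of-rows image; not the authors' implementation):

```python
import math
from collections import Counter

def window_entropy(window):
    """Shannon entropy (bits) of the grey levels in one window."""
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in Counter(window).values())

def local_entropy(image, size=7):
    """Entropy of each size x size window (valid positions only).
    `image` is a list of rows of integer grey levels."""
    h, w, r = len(image), len(image[0]), size // 2
    out = []
    for y in range(r, h - r):
        row = []
        for x in range(r, w - r):
            win = [image[yy][xx] for yy in range(y - r, y + r + 1)
                                 for xx in range(x - r, x + r + 1)]
            row.append(window_entropy(win))
        out.append(row)
    return out
```

Smooth water surfaces of tailings ponds yield low window entropy, while the heterogeneous texture of a stope yields higher values, which is the contrast the classification exploits.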
Open AccessArticle Time-Fractional Diffusion with Mass Absorption in a Half-Line Domain due to Boundary Value of Concentration Varying Harmonically in Time
Entropy 2018, 20(5), 346; https://doi.org/10.3390/e20050346
Received: 23 April 2018 / Revised: 3 May 2018 / Accepted: 3 May 2018 / Published: 6 May 2018
PDF Full-text (378 KB) | HTML Full-text | XML Full-text
Abstract
The time-fractional diffusion equation with mass absorption is studied in a half-line domain under the Dirichlet boundary condition varying harmonically in time. The Caputo derivative is employed. The solution is obtained using the Laplace transform with respect to time and the sin-Fourier transform with respect to the spatial coordinate. The results of numerical calculations are illustrated graphically. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
Open AccessArticle A Generalized Relative (α, β)-Entropy: Geometric Properties and Applications to Robust Statistical Inference
Entropy 2018, 20(5), 347; https://doi.org/10.3390/e20050347
Received: 30 March 2018 / Revised: 22 April 2018 / Accepted: 1 May 2018 / Published: 6 May 2018
PDF Full-text (449 KB) | HTML Full-text | XML Full-text
Abstract
Entropy and relative entropy measures play a crucial role in mathematical information theory. The relative entropies are also widely used in statistics under the name of divergence measures which link these two fields of science through the minimum divergence principle. Divergence measures are
[...] Read more.
Entropy and relative entropy measures play a crucial role in mathematical information theory. Relative entropies are also widely used in statistics under the name of divergence measures, linking the two fields through the minimum divergence principle. Divergence measures are popular among statisticians because many of the corresponding minimum divergence methods lead to robust inference in the presence of outliers in the observed data; examples include the ϕ-divergence, the density power divergence, the logarithmic density power divergence and the recently developed family of logarithmic super divergence (LSD). In this paper, we present an alternative information theoretic formulation of the LSD measures as a two-parameter generalization of the relative α-entropy, which we refer to as the general (α, β)-entropy. We explore its relation with various other entropies and divergences, which also generates a two-parameter extension of the Rényi entropy measure as a by-product. This paper is primarily focused on the geometric properties of the relative (α, β)-entropy or the LSD measures; we prove their continuity and convexity in both arguments, along with an extended Pythagorean relation under a power-transformation of the domain space. We also derive a set of sufficient conditions under which the forward and reverse projections of the relative (α, β)-entropy exist and are unique. Finally, we briefly discuss potential applications of the relative (α, β)-entropy or the LSD measures in statistical inference, in particular for robust parameter estimation and hypothesis testing. Our results on the reverse projection of the relative (α, β)-entropy establish, for the first time, the existence and uniqueness of the minimum LSD estimators. Numerical illustrations are provided for the problem of estimating the binomial parameter. Full article
Open AccessArticle Robot Evaluation and Selection with Entropy-Based Combination Weighting and Cloud TODIM Approach
Entropy 2018, 20(5), 349; https://doi.org/10.3390/e20050349
Received: 13 April 2018 / Revised: 2 May 2018 / Accepted: 7 May 2018 / Published: 7 May 2018
PDF Full-text (1078 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays robots have been commonly adopted in various manufacturing industries to improve product quality and productivity. The selection of the best robot to suit a specific production setting is a difficult decision making task for manufacturers because of the increase in complexity and
[...] Read more.
Nowadays, robots are commonly adopted in various manufacturing industries to improve product quality and productivity. Selecting the best robot for a specific production setting is a difficult decision-making task for manufacturers because of the increasing complexity and number of robot systems. In this paper, we explore two key issues of robot evaluation and selection: the representation of decision makers’ diversified assessments and the determination of the ranking of available robots. Specifically, a decision support model that utilizes the cloud model and the TODIM (an acronym in Portuguese for interactive and multiple criteria decision making) method is developed for handling robot selection problems with hesitant linguistic information. In addition, an entropy-based combination weighting technique is used to estimate the weights of the evaluation criteria. Finally, we illustrate the proposed cloud TODIM approach with a robot selection example for an automobile manufacturer, and further validate its effectiveness and benefits via a comparative analysis. The results show that the proposed robot selection model has unique advantages and is more realistic and flexible for robot selection in a complex and uncertain environment. Full article
(This article belongs to the Section Information Theory)
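The entropy-based objective weighting mentioned above is, in its standard textbook form, computed from the dispersion of criterion scores; a minimal sketch follows. The paper's combination with subjective weights and the cloud TODIM ranking are not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weight method: criteria whose scores are more
    dispersed across alternatives receive larger objective weights.
    X is an (alternatives x criteria) matrix of positive scores.
    Sketch of the objective-weighting step only."""
    P = X / X.sum(axis=0)                  # column-normalise to proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -k * (P * logP).sum(axis=0)        # entropy of each criterion
    d = 1.0 - e                            # degree of divergence
    return d / d.sum()
```

A criterion on which all robots score identically carries no information and gets (near-)zero weight.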
Open AccessArticle A No-Go Theorem for Observer-Independent Facts
Entropy 2018, 20(5), 350; https://doi.org/10.3390/e20050350
Received: 5 April 2018 / Revised: 29 April 2018 / Accepted: 2 May 2018 / Published: 8 May 2018
PDF Full-text (449 KB) | HTML Full-text | XML Full-text
Abstract
In his famous thought experiment, Wigner assigns an entangled state to the composite quantum system made up of Wigner’s friend and her observed system. While the two of them have different accounts of the process, each of Wigner and his friend can in principle
[...] Read more.
In his famous thought experiment, Wigner assigns an entangled state to the composite quantum system made up of Wigner’s friend and her observed system. While the two of them have different accounts of the process, each of Wigner and his friend can in principle verify his/her respective state assignment by performing an appropriate measurement. As manifested through a click in a detector or a specific position of the pointer, the outcomes of these measurements can be regarded as reflecting directly observable “facts”. Reviewing arXiv:1507.05255, I will derive a no-go theorem for observer-independent facts, which would be common to both Wigner and the friend. I will then analyze this result in the context of a newly derived theorem (arXiv:1604.07422) in which Frauchiger and Renner prove that “single-world interpretations of quantum theory cannot be self-consistent”. It is argued that “self-consistency” has the same implications as the assumption that the observational statements of different observers can be compared within a single (and hence observer-independent) theoretical framework. The latter, however, may not be possible if the statements are to be understood as relational, in the sense that their determinacy is relative to an observer. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Numerical Study on Entropy Generation in Thermal Convection with Differentially Discrete Heat Boundary Conditions
Entropy 2018, 20(5), 351; https://doi.org/10.3390/e20050351
Received: 3 March 2018 / Revised: 17 April 2018 / Accepted: 1 May 2018 / Published: 8 May 2018
PDF Full-text (6284 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation in thermal convection with differentially discrete heat boundary conditions at various Rayleigh numbers (Ra) is numerically investigated using the lattice Boltzmann method. We mainly focused on the effects of Ra and discrete heat boundary conditions on entropy generation in
[...] Read more.
Entropy generation in thermal convection with differentially discrete heat boundary conditions at various Rayleigh numbers (Ra) is numerically investigated using the lattice Boltzmann method. We mainly focused on the effects of Ra and of the discrete heat boundary conditions on entropy generation in thermal convection, according to the minimal entropy generation principle. The results showed that the presence of a discrete heat source at the bottom boundary promotes the transition to substantial convection, and that the viscous entropy generation rate (Su) generally increases in magnitude in the central region of the channel with increasing Ra. The total entropy generation rate (S) and the thermal entropy generation rate (Sθ) are largest in magnitude in the region of the channel with the largest temperature gradient. Our results also indicated that the thermal, viscous, and total entropy generation increase exponentially with increasing Rayleigh number. Notably, a lower percentage of single heat source area in the bottom boundary increases the intensities of the viscous, thermal, and total entropy generation. Compared with classical homogeneous thermal convection, the thermal, viscous, and total entropy generation are enhanced by the presence of discrete heat sources at the bottom boundary. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open AccessArticle Exponential Strong Converse for Source Coding with Side Information at the Decoder
Entropy 2018, 20(5), 352; https://doi.org/10.3390/e20050352
Received: 31 January 2018 / Revised: 18 April 2018 / Accepted: 20 April 2018 / Published: 8 May 2018
PDF Full-text (361 KB) | HTML Full-text | XML Full-text
Abstract
We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. Using side information and encoded original data, the decoder must reconstruct the original data with an arbitrary prescribed distortion level. The rate distortion region
[...] Read more.
We consider the rate distortion problem with side information at the decoder, posed and investigated by Wyner and Ziv. Using the side information and the encoded original data, the decoder must reconstruct the original data to within an arbitrary prescribed distortion level. The rate distortion region, indicating the trade-off between the data compression rate R and the prescribed distortion level Δ, was determined by Wyner and Ziv. In this paper, we study the error probability of decoding for pairs (R, Δ) outside the rate distortion region. We evaluate the probability that the decoder's estimate of the source outputs has a distortion not exceeding the prescribed level Δ. We prove that, when (R, Δ) is outside the rate distortion region, this probability goes to zero exponentially, and we derive an explicit lower bound on the exponent. The strong converse coding theorem had not previously been established for the Wyner–Ziv source coding problem; we prove it as a simple corollary of our result. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
Open AccessArticle Quantum Trajectories: Real or Surreal?
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353
Received: 8 April 2018 / Revised: 27 April 2018 / Accepted: 2 May 2018 / Published: 8 May 2018
PDF Full-text (8649 KB) | HTML Full-text | XML Full-text
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context
[...] Read more.
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We review the arguments that have been assumed to establish that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong, as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment, which clearly shows the role of spin within the Bohm formalism, and discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Entropic Uncertainty Relations for Successive Measurements in the Presence of a Minimal Length
Entropy 2018, 20(5), 354; https://doi.org/10.3390/e20050354
Received: 30 March 2018 / Revised: 1 May 2018 / Accepted: 7 May 2018 / Published: 9 May 2018
PDF Full-text (307 KB) | HTML Full-text | XML Full-text
Abstract
We address the generalized uncertainty principle in scenarios of successive measurements. Uncertainties are characterized by means of generalized entropies of both the Rényi and Tsallis types. Here, specific features of measurements of observables with continuous spectra should be taken into account. First, we
[...] Read more.
We address the generalized uncertainty principle in scenarios of successive measurements. Uncertainties are characterized by means of generalized entropies of both the Rényi and Tsallis types, where specific features of measurements of observables with continuous spectra must be taken into account. First, we formulate uncertainty relations in terms of Shannon entropies. Since such relations involve a state-dependent correction term, they generally differ from preparation uncertainty relations. This difference is revealed when the position is measured first. In contrast, state-independent uncertainty relations in terms of Rényi and Tsallis entropies are obtained with the same lower bounds as in the preparation scenario. These bounds depend explicitly on the acceptance function of the apparatus in momentum measurements. Entropic uncertainty relations with binning are discussed as well. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
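For reference, the standard discrete forms of the Rényi and Tsallis entropies used above to characterize uncertainties can be computed as follows; the paper itself works with continuous spectra and apparatus acceptance functions, which this sketch does not capture.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_a = ln(sum p_i^a) / (1 - a) of a discrete
    distribution p, for a > 0, a != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), q != 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - (p ** q).sum()) / (q - 1.0)
```

Both reduce to the Shannon entropy in the limit α → 1 (respectively q → 1), and for a uniform distribution over n outcomes the Rényi entropy equals ln n for every α.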
Open AccessArticle Al-Ti-Containing Lightweight High-Entropy Alloys for Intermediate Temperature Applications
Entropy 2018, 20(5), 355; https://doi.org/10.3390/e20050355
Received: 19 April 2018 / Revised: 8 May 2018 / Accepted: 8 May 2018 / Published: 9 May 2018
PDF Full-text (2114 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In this study, new high-entropy alloys (HEAs), which contain lightweight elements, namely Al and Ti, have been designed for intermediate temperature applications. Cr, Mo, and V were selected as the elements for the Al-Ti-containing HEAs by elemental screening using their binary phase diagrams.
[...] Read more.
In this study, new high-entropy alloys (HEAs) containing the lightweight elements Al and Ti have been designed for intermediate temperature applications. Cr, Mo, and V were selected as the other elements for the Al-Ti-containing HEAs by elemental screening using their binary phase diagrams. The AlCrMoTi and AlCrMoTiV HEAs are confirmed to be solid solutions with minor ordered B2 phases and have superb specific hardness compared to commercial alloys. The present work demonstrates the promising possibility of substituting Al-Ti-containing lightweight HEAs for traditional materials applied at intermediate temperatures. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle Experimental Non-Violation of the Bell Inequality
Entropy 2018, 20(5), 356; https://doi.org/10.3390/e20050356
Received: 7 April 2018 / Revised: 24 April 2018 / Accepted: 2 May 2018 / Published: 10 May 2018
PDF Full-text (646 KB) | HTML Full-text | XML Full-text
Abstract
A finite non-classical framework for qubit physics is described that challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the primacy of a fractal-like ‘invariant set’ geometry I_U in cosmological state
[...] Read more.
A finite non-classical framework for qubit physics is described that challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the primacy of a fractal-like ‘invariant set’ geometry I_U in cosmological state space, on which the universe evolves deterministically and causally, and from which space-time and the laws of physics in space-time are emergent. Consistent with the assumed primacy of I_U, a non-Euclidean (and hence non-classical) metric g_p is defined in cosmological state space. Here, p is a large but finite integer (whose inverse may reflect the weakness of gravity). Points that do not lie on I_U are necessarily g_p-distant from points that do. g_p is related to the p-adic metric of number theory. Using number-theoretic properties of spherical triangles, the Clauser-Horne-Shimony-Holt (CHSH) inequality, whose violation would rule out local realism, is shown to be undefined in this framework. Moreover, the CHSH-like inequalities violated experimentally are shown to be g_p-distant from the CHSH inequality. This result fails in the singular limit p = ∞, at which g_p is Euclidean and the corresponding model classical. Although Invariant Set Theory is deterministic and locally causal, it is not conspiratorial and does not compromise experimenter free will. The relationship between Invariant Set Theory, Bohmian Theory, the Cellular Automaton Interpretation of Quantum Theory and p-adic Quantum Theory is discussed. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle Exponential Entropy for Simplified Neutrosophic Sets and Its Application in Decision Making
Entropy 2018, 20(5), 357; https://doi.org/10.3390/e20050357
Received: 19 April 2018 / Revised: 3 May 2018 / Accepted: 10 May 2018 / Published: 10 May 2018
PDF Full-text (293 KB) | HTML Full-text | XML Full-text
Abstract
Entropy is one of many important mathematical tools for measuring uncertain/fuzzy information. As a subclass of neutrosophic sets (NSs), simplified NSs (including single-valued and interval-valued NSs) can describe incomplete, indeterminate, and inconsistent information. Based on the concept of fuzzy exponential entropy for fuzzy
[...] Read more.
Entropy is one of many important mathematical tools for measuring uncertain/fuzzy information. As a subclass of neutrosophic sets (NSs), simplified NSs (including single-valued and interval-valued NSs) can describe incomplete, indeterminate, and inconsistent information. Based on the concept of fuzzy exponential entropy for fuzzy sets, this work proposes exponential entropy measures of simplified NSs (named simplified neutrosophic exponential entropy (SNEE) measures), including single-valued and interval-valued neutrosophic exponential entropy measures, and investigates their properties. The proposed exponential entropy measures of simplified NSs are then compared with existing related entropy measures of interval-valued NSs through a numerical example to illustrate their rationality and effectiveness. Finally, the developed exponential entropy measures for simplified NSs are applied to a multi-attribute decision-making example in an interval-valued NS setting to demonstrate their application. The SNEE measures not only enrich the theory of simplified neutrosophic entropy, but also provide a novel way of measuring uncertain information in a simplified NS setting. Full article
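The underlying notion of fuzzy exponential entropy (in the classical Pal-Pal form) on which the SNEE measures build can be sketched for an ordinary fuzzy set; extending it to the truth, indeterminacy and falsity functions of a simplified NS, as the paper does, is not shown here.

```python
import math

def fuzzy_exponential_entropy(mu):
    """Pal-Pal exponential entropy of a fuzzy set with membership degrees
    mu_i in [0, 1]: 0 for crisp sets and 1 when every mu_i = 0.5. Sketch
    of the fuzzy-set case only; the paper's neutrosophic extension applies
    the same idea to three membership functions."""
    n = len(mu)
    s = sum(m * math.exp(1 - m) + (1 - m) * math.exp(m) - 1 for m in mu)
    return s / (n * (math.sqrt(math.e) - 1))
```

The normalizing constant √e − 1 is the per-element value at μ = 0.5, so the measure is scaled to the unit interval.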
Open AccessArticle Agents, Subsystems, and the Conservation of Information
Entropy 2018, 20(5), 358; https://doi.org/10.3390/e20050358
Received: 12 March 2018 / Revised: 26 April 2018 / Accepted: 5 May 2018 / Published: 10 May 2018
PDF Full-text (580 KB) | HTML Full-text | XML Full-text
Abstract
Dividing the world into subsystems is an important component of the scientific method. The choice of subsystems, however, is not defined a priori. Typically, it is dictated by experimental capabilities, which may be different for different agents. Here, we propose a way
[...] Read more.
Dividing the world into subsystems is an important component of the scientific method. The choice of subsystems, however, is not defined a priori. Typically, it is dictated by experimental capabilities, which may be different for different agents. Here, we propose a way to define subsystems in general physical theories, including theories beyond quantum and classical mechanics. Our construction associates every agent A with a subsystem S_A, equipped with its set of states and its set of transformations. In quantum theory, this construction accommodates the notion of subsystems as factors of a tensor product, as well as the notion of subsystems associated with a subalgebra of operators. Classical systems can be interpreted as subsystems of quantum systems in different ways, by applying our construction to agents who have access to different sets of operations, including multiphase covariant channels and certain sets of free operations arising in the resource theory of quantum coherence. After illustrating the basic definitions, we restrict our attention to closed systems, that is, systems where all physical transformations act invertibly and where all states can be generated from a fixed initial state. For closed systems, we show that all the states of all subsystems admit a canonical purification. This result extends the purification principle to a broader setting, in which coherent superpositions can be interpreted as purifications of incoherent mixtures. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Water Resources Carrying Capacity Evaluation and Diagnosis Based on Set Pair Analysis and an Improved Entropy Weight Method
Entropy 2018, 20(5), 359; https://doi.org/10.3390/e20050359
Received: 10 April 2018 / Revised: 7 May 2018 / Accepted: 9 May 2018 / Published: 11 May 2018
PDF Full-text (1785 KB) | HTML Full-text | XML Full-text
Abstract
To quantitatively evaluate and diagnose the carrying capacity of regional water resources under uncertain conditions, an index system and corresponding grade criteria were constructed from the perspective of carrying subsystem. Meanwhile, an improved entropy weight method was used to determine the objective weight
[...] Read more.
To quantitatively evaluate and diagnose the carrying capacity of regional water resources under uncertain conditions, an index system and corresponding grade criteria were constructed from the perspective of carrying subsystems. An improved entropy weight method was used to determine the objective weights of the indices. An evaluation model was then built by applying set pair analysis, and a set pair potential based on subtraction was proposed to identify the carrying vulnerability factors. Finally, an empirical study was carried out in Anhui Province. The results showed that consistency among the objective weights of the indices was accounted for, and that the uncertainty between the indices and grade criteria was reasonably handled. Furthermore, although the carrying situation in Anhui was severe, the trend was toward improvement. The status in Southern Anhui was superior to that in the middle area, while that in the northern part was relatively grim. For Northern Anhui, scarce water resources were the chief cause of its long-term overloaded status. The improvement of capacity in the middle area was mainly hindered by deficient ecological water consumption and limited water-saving irrigation area. The long-term loadable condition in the southern part was due largely to its relatively abundant water resources and small population. This evaluation and diagnosis method can be widely applied to carrying capacity issues in other resource and environmental fields. Full article
(This article belongs to the Special Issue Applications of Information Theory in the Geosciences II)
Open AccessArticle Multiscale Distribution Entropy and t-Distributed Stochastic Neighbor Embedding-Based Fault Diagnosis of Rolling Bearings
Entropy 2018, 20(5), 360; https://doi.org/10.3390/e20050360
Received: 6 April 2018 / Revised: 24 April 2018 / Accepted: 24 April 2018 / Published: 11 May 2018
PDF Full-text (3555 KB) | HTML Full-text | XML Full-text
Abstract
As a nonlinear dynamic method for complexity measurement of time series, multiscale entropy (MSE) has been successfully applied to fault diagnosis of rolling bearings. However, the MSE algorithm is sensitive to the predetermined parameters and depends heavily on the length of the time
[...] Read more.
As a nonlinear dynamic method for measuring the complexity of time series, multiscale entropy (MSE) has been successfully applied to the fault diagnosis of rolling bearings. However, the MSE algorithm is sensitive to its predetermined parameters and depends heavily on the length of the time series; it may yield inaccurate or undefined entropy estimates when the series is too short. To improve the robustness of complexity measurement for short time series, a novel nonlinear parameter named multiscale distribution entropy (MDE) is proposed in this paper and employed to extract nonlinear complexity features from vibration signals of rolling bearings. Combined with t-distributed stochastic neighbor embedding (t-SNE) for feature dimension reduction and Kriging-variable predictive models based class discrimination (KVPMCD) for automatic identification, a new intelligent fault diagnosis method for rolling bearings is proposed. Finally, the proposed approach was applied to experimental data of rolling bearings, and the results indicated that it could effectively distinguish the different fault categories of rolling bearings. Full article
(This article belongs to the Section Complexity)
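The core estimator, distribution entropy computed on coarse-grained series, can be sketched as follows. Parameter choices here (m = 2, 64 histogram bins) are common defaults from the distribution entropy literature, not necessarily the paper's.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averaging used for the multiscale step."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def dist_en(x, m=2, bins=64):
    """Distribution entropy (DistEn): normalised Shannon entropy of the
    histogram of all pairwise Chebyshev distances between m-dimensional
    embedding vectors. Sketch of the estimator only; the paper's full
    pipeline adds t-SNE and KVPMCD on top of these features."""
    X = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    D = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
    d = D[np.triu_indices(len(X), k=1)]   # upper triangle, no self-distances
    p, _ = np.histogram(d, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum() / np.log2(bins)
```

MDE at scale s is then `dist_en(coarse_grain(x, s))`, giving one feature per scale.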
Open AccessArticle A Hybrid De-Noising Algorithm for the Gear Transmission System Based on CEEMDAN-PE-TFPF
Entropy 2018, 20(5), 361; https://doi.org/10.3390/e20050361
Received: 6 April 2018 / Revised: 10 May 2018 / Accepted: 10 May 2018 / Published: 11 May 2018
PDF Full-text (8044 KB) | HTML Full-text | XML Full-text
Abstract
In order to remove noise and preserve the important features of a signal, a hybrid de-noising algorithm based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Permutation Entropy (PE), and Time-Frequency Peak Filtering (TFPF) is proposed. In view of the limitations
[...] Read more.
In order to remove noise while preserving the important features of a signal, a hybrid de-noising algorithm based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Permutation Entropy (PE), and Time-Frequency Peak Filtering (TFPF) is proposed. To overcome the fixed-window-length limitation of the conventional TFPF method, CEEMDAN and PE are applied so that the signal is balanced with respect to both noise suppression and signal fidelity. First, the Intrinsic Mode Functions (IMFs) of the original signal are obtained using the CEEMDAN algorithm, and the PE value of each IMF is calculated to decide whether that IMF requires filtering. Then, different window lengths are selected to filter the different IMFs using TFPF. Finally, the signal is reconstructed as the sum of the filtered and residual IMFs. The filtering results for a simulated signal and an actual gearbox vibration signal verify that the de-noising performance of CEEMDAN-PE-TFPF surpasses that of other signal de-noising methods, and that the proposed method can effectively reveal fault characteristic information. Full article
(This article belongs to the Section Complexity)
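The PE screening step above can be sketched with the standard Bandt-Pompe estimator: IMFs with normalized PE near 1 behave like noise and are sent to the filter. CEEMDAN and TFPF themselves are not reproduced in this sketch.

```python
import math
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalised permutation entropy (Bandt-Pompe): Shannon entropy of
    the distribution of ordinal patterns of length m, divided by its
    maximum log(m!). Returns a value in [0, 1]; values near 1 indicate
    noise-like behaviour."""
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return -(p * np.log(p)).sum() / math.log(math.factorial(m))
```

A purely monotone series has a single ordinal pattern and PE 0, while white noise spreads the patterns nearly uniformly and scores close to 1.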
Open AccessArticle Thermodynamics at Solid–Liquid Interfaces
Entropy 2018, 20(5), 362; https://doi.org/10.3390/e20050362
Received: 3 April 2018 / Revised: 26 April 2018 / Accepted: 9 May 2018 / Published: 12 May 2018
PDF Full-text (422 KB) | HTML Full-text | XML Full-text
Abstract
The variation of the liquid properties in the vicinity of a solid surface complicates the description of heat transfer along solid–liquid interfaces. Using Molecular Dynamics simulations, this investigation aims to understand how the material properties, particularly the strength of the solid–liquid interaction, affect
[...] Read more.
The variation of the liquid properties in the vicinity of a solid surface complicates the description of heat transfer along solid–liquid interfaces. Using Molecular Dynamics simulations, this investigation aims to understand how the material properties, particularly the strength of the solid–liquid interaction, affect the thermal conductivity of the liquid at the interface. The molecular model consists of liquid argon confined by two parallel, smooth, solid walls, separated by a distance of 6.58 σ. We find that the component of the thermal conductivity parallel to the surface increases with the affinity of the solid and liquid. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open AccessArticle Quantifying the Effects of Topology and Weight for Link Prediction in Weighted Complex Networks
Entropy 2018, 20(5), 363; https://doi.org/10.3390/e20050363
Received: 6 April 2018 / Revised: 10 May 2018 / Accepted: 10 May 2018 / Published: 13 May 2018
PDF Full-text (447 KB) | HTML Full-text | XML Full-text
Abstract
In weighted networks, both link weight and topological structure are significant characteristics for link prediction. In this study, a general framework combining null models is proposed to quantify the impact of the topology, weight correlation and statistics on link prediction in weighted networks.
[...] Read more.
In weighted networks, both link weight and topological structure are significant characteristics for link prediction. In this study, a general framework combining null models is proposed to quantify the impact of topology, weight correlation, and weight statistics on link prediction in weighted networks. Three null models for the topology and weight distribution of weighted networks are presented. All the links of the original network can be divided into strong and weak ties, and the null models can be used to verify the strong effect of weak or strong ties. For two important statistics, we construct two null models to measure their impacts on link prediction. In our experiments, the proposed method is applied to seven empirical networks, demonstrating that the model is universal and that the impact of the topology and weight distribution of these networks on link prediction can be quantified by it. We find that in the USAir, the Celegans, the Gemo, the Lesmis and the CatCortex, the strong ties are easier to predict, but a few networks, such as the Netscience and the CScientists, have weak edges that can be predicted more easily. It is also found that the weak ties contribute more to link prediction in the USAir, the NetScience and the CScientists; that is, the strong effect of weak ties exists in these networks. The proposed framework is versatile: it applies not only to link prediction but also to other problems in complex networks. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessArticle Fault Diagnosis of Gearboxes Using Nonlinearity and Determinism by Generalized Hurst Exponents of Shuffle and Surrogate Data
Entropy 2018, 20(5), 364; https://doi.org/10.3390/e20050364
Received: 14 April 2018 / Revised: 10 May 2018 / Accepted: 11 May 2018 / Published: 14 May 2018
PDF Full-text (10219 KB) | HTML Full-text | XML Full-text
Abstract
Vibrations of defective gearboxes show great complexity, and the dynamics and noise levels of gearbox vibrations vary with operating conditions. As a result, the nonlinearity and determinism of the data can serve to describe the running condition of a gearbox. However, measuring the nonlinearity and determinism of data is challenging. This paper defines a two-dimensional measure that simultaneously quantifies the nonlinearity and determinism of data by comparing the generalized Hurst exponents of the original, shuffled and surrogate data. This paper then proposes a novel method for fault diagnosis of gearboxes using the two-dimensional measure. The robustness of the proposed method was validated numerically by analyzing simulated signals with different noise levels. Moreover, the performance of the proposed method was benchmarked against Approximate Entropy, Sample Entropy, Permutation Entropy and Delay Vector Variance in two independent gearbox experiments. The results show that the proposed method outperforms the others in fault diagnosis of gearboxes. Full article
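The shuffle and surrogate data used in the comparison can be generated with the standard constructions (random permutation and phase randomization). The sketch below, using NumPy, shows the generic versions of these constructions; it is not the authors' code.

```python
import numpy as np

def shuffled(x, seed=0):
    # Random permutation: keeps the amplitude distribution,
    # destroys all temporal correlations (probes determinism).
    rng = np.random.default_rng(seed)
    return rng.permutation(x)

def surrogate(x, seed=0):
    # Phase randomization: keeps the power spectrum (linear
    # correlations), destroys nonlinear structure (probes nonlinearity).
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    phases[0] = 0.0                  # keep the DC component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist component real
    randomized = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(randomized, n=len(x))
```

Comparing the generalized Hurst exponents of `x`, `shuffled(x)` and `surrogate(x)` then yields the kind of two-dimensional (nonlinearity, determinism) measure the paper defines.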

Open AccessArticle Analysis of Chaotic Behavior in a Novel Extended Love Model Considering Positive and Negative External Environment
Entropy 2018, 20(5), 365; https://doi.org/10.3390/e20050365
Received: 27 March 2018 / Revised: 8 May 2018 / Accepted: 12 May 2018 / Published: 14 May 2018
PDF Full-text (15460 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this study was to describe a novel extended dynamical love model, based on the love story of Romeo and Juliet, with external environments. We used a sinusoidal function as the external environment because it can represent the positive and negative characteristics of humans, and we considered positive and negative advice from a third person. First, we applied equal amounts of positive and negative advice. Second, the amount of positive advice was greater than that of negative advice. Third, the amount of positive advice was smaller than that of negative advice. To verify the chaotic phenomena in the proposed extended dynamic love affair with external environments, we used time series, phase portraits, power spectra, Poincaré maps, bifurcation diagrams, and the maximal Lyapunov exponent. By varying the parameter "a", we found that the novel extended dynamic love affair exhibits chaotic behavior in all three external-environment situations. We observed period-1, period-2 and period-4 motion, a Rössler-type attractor, and a chaotic attractor as parameter "a" varied under the following conditions: the amount of positive advice equal to, greater than, and smaller than the amount of negative advice. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
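A forced two-variable love model of this kind can be simulated with a few lines of code. In the sketch below, the coupling, the coefficients and the way the sinusoidal environment enters are illustrative assumptions of ours, not the paper's exact equations.

```python
import numpy as np

def love_model(t, state, a, amp, omega):
    # Hypothetical Romeo-and-Juliet-style system with a sinusoidal
    # external environment (third-person advice); illustrative only.
    R, J = state
    env = amp * np.sin(omega * t)
    dR = a * R + J + env           # Romeo reacts to Juliet and to the advice
    dJ = -R + a * J - env          # Juliet reacts with opposite sign
    return np.array([dR, dJ])

def rk4(f, state, t0, t1, dt, *args):
    # Fixed-step fourth-order Runge-Kutta integrator.
    state = np.asarray(state, dtype=float)
    t, traj = t0, [state]
    while t < t1 - 1e-12:
        k1 = f(t, state, *args)
        k2 = f(t + dt / 2, state + dt / 2 * k1, *args)
        k3 = f(t + dt / 2, state + dt / 2 * k2, *args)
        k4 = f(t + dt, state + dt * k3, *args)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        traj.append(state)
    return np.array(traj)
```

Sweeping the parameter `a` and inspecting the resulting trajectories (time series, phase portraits, Lyapunov exponents) reproduces the style of analysis the paper performs.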

Open AccessArticle Thermoelectric Efficiency of a Topological Nano-Junction
Entropy 2018, 20(5), 366; https://doi.org/10.3390/e20050366
Received: 24 March 2018 / Revised: 10 May 2018 / Accepted: 11 May 2018 / Published: 14 May 2018
PDF Full-text (1403 KB) | HTML Full-text | XML Full-text
Abstract
We studied the non-equilibrium current, transport coefficients and thermoelectric performance of a nano-junction composed of a quantum dot connected to a normal-superconductor lead and a topological-superconductor lead. We considered a one-dimensional topological superconductor, which hosts two Majorana fermion states at its edges. Our results show that the electric and thermal currents across the junction are strongly mediated by multiple Andreev reflections between the quantum dot and the leads, leading to a strongly nonlinear dependence of the current on the applied bias voltage. Remarkably, we find that the system reaches a sharp maximum of its thermoelectric efficiency at a finite bias when an external magnetic field is imposed upon the junction. We propose that this feature can be used for accurate temperature sensing at the nanoscale. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)

Open AccessArticle Feynman Paths and Weak Values
Entropy 2018, 20(5), 367; https://doi.org/10.3390/e20050367
Received: 16 April 2018 / Revised: 4 May 2018 / Accepted: 9 May 2018 / Published: 14 May 2018
PDF Full-text (521 KB) | HTML Full-text | XML Full-text
Abstract
There has been a recent revival of interest in the notion of a ‘trajectory’ of a quantum particle. In this paper, we detail the relationship between Dirac’s ideas, Feynman paths and the Bohm approach. The key to the relationship is the weak value of the momentum which Feynman calls a transition probability amplitude. With this identification we are able to conclude that a Bohm ‘trajectory’ is the average of an ensemble of actual individual stochastic Feynman paths. This implies that they can be interpreted as the mean momentum flow of a set of individual quantum processes and not the path of an individual particle. This enables us to give a clearer account of the experimental two-slit results of Kocsis et al. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
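For reference, the weak value of momentum that underlies this identification has a standard textbook form (the notation below is ours, not quoted from the paper): for a particle prepared in the state ψ and postselected at position x,

```latex
P_w(x) \;=\; \frac{\langle x\,|\,\hat{P}\,|\,\psi\rangle}{\langle x\,|\,\psi\rangle},
\qquad
\operatorname{Re}\,P_w(x) \;=\; \partial_x S(x)
\quad\text{for } \psi(x) = R(x)\,e^{iS(x)/\hbar},
```

so the real part of the weak momentum value coincides with the Bohm momentum, which is what allows a Bohm "trajectory" to be read as an average over an ensemble of individual stochastic paths.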

Open AccessArticle Writing, Proofreading and Editing in Information Theory
Entropy 2018, 20(5), 368; https://doi.org/10.3390/e20050368
Received: 5 April 2018 / Revised: 4 May 2018 / Accepted: 12 May 2018 / Published: 15 May 2018
PDF Full-text (502 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Information is a physical entity amenable to description by an abstract theory. The concepts associated with the creation and post-processing of information have not, however, been mathematically established, despite being broadly used in many fields of knowledge. Here, inspired by how information is managed in biomolecular systems, we introduce writing, entailing any generation of a bit string, and revision, comprising proofreading and editing, in information chains. Our formalism extends the thermodynamic analysis of stochastic chains made up of material subunits to abstract strings of symbols. We introduce a non-Markovian treatment of operational rules over the symbols of the chain that parallels the physical interactions responsible for memory effects in material chains. Our theory underlies any communication system, from human languages and computer science to gene evolution. Full article
(This article belongs to the Special Issue Thermodynamics of Information Processing)

Open AccessArticle The Role of Entropy in Estimating Financial Network Default Impact
Entropy 2018, 20(5), 369; https://doi.org/10.3390/e20050369
Received: 25 April 2018 / Revised: 10 May 2018 / Accepted: 15 May 2018 / Published: 16 May 2018
PDF Full-text (2326 KB) | HTML Full-text | XML Full-text
Abstract
Agents in financial networks can simultaneously be both creditors and debtors, creating the possibility that a default may cause a subsequent default cascade. Resolution of unpayable debts in these situations will have a distributional impact. Using a relative entropy-based measure of the distributional impact of the subsequent default resolution process, it is argued that minimum mutual information estimation of unknown cells in the matrix of funds originally owed by the network participants to each other does not introduce systematic biases when estimating that impact. Full article
(This article belongs to the Section Information Theory)
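The relative entropy (Kullback-Leibler divergence) on which the impact measure is built is a standard quantity. A minimal Python sketch, assuming the two distributions are given as aligned probability vectors (the function name is ours):

```python
import math

def relative_entropy(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i).

    Illustrates the kind of distributional-impact measure the paper
    builds on; p and q must each sum to one, and q must be positive
    wherever p is.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The divergence is zero exactly when the two distributions coincide, making it a natural yardstick for how far a default-resolution process moves the network away from its original distribution of obligations.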

Open AccessArticle A Survey of Viewpoint Selection Methods for Polygonal Models
Entropy 2018, 20(5), 370; https://doi.org/10.3390/e20050370
Received: 24 March 2018 / Revised: 11 May 2018 / Accepted: 11 May 2018 / Published: 16 May 2018
PDF Full-text (30998 KB) | HTML Full-text | XML Full-text
Abstract
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now maturing, with applications in fields such as scene navigation, scientific visualization, object recognition, mesh simplification, and camera placement. In this survey, we review and compare twenty-two measures for selecting good views of a polygonal 3D model, classify them using an extension of the categories defined by Secord et al., and evaluate them against the Dutagaci et al. benchmark. Eleven of these measures have not been reviewed in previous surveys. Three of the five short-listed best viewpoint measures are directly related to information. We also indicate the fields in which the different viewpoint measures have been applied. Finally, we provide a publicly available framework in which all the viewpoint selection measures are implemented and can be compared against each other. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)

Open AccessArticle Normal Laws for Two Entropy Estimators on Infinite Alphabets
Entropy 2018, 20(5), 371; https://doi.org/10.3390/e20050371
Received: 3 April 2018 / Revised: 9 May 2018 / Accepted: 10 May 2018 / Published: 17 May 2018
PDF Full-text (687 KB) | HTML Full-text | XML Full-text
Abstract
This paper offers sufficient conditions for the Miller–Madow estimator and the jackknife estimator of entropy to have respective asymptotic normalities on countably infinite alphabets. Full article
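Both estimators have simple closed forms. A sketch in Python using the standard definitions: the plug-in entropy plus the (K-1)/(2n) bias correction for Miller-Madow (K being the number of distinct symbols observed), and the leave-one-out form for the jackknife.

```python
import math
from collections import Counter

def plugin_entropy(sample):
    # Naive maximum-likelihood ("plug-in") entropy estimate.
    n = len(sample)
    return -sum((c / n) * math.log(c / n) for c in Counter(sample).values())

def miller_madow(sample):
    # Plug-in estimate plus the classical (K - 1)/(2n) bias correction.
    n = len(sample)
    k = len(set(sample))
    return plugin_entropy(sample) + (k - 1) / (2 * n)

def jackknife_entropy(sample):
    # H_JK = n*H - (n - 1) * (mean of the leave-one-out estimates).
    n = len(sample)
    h = plugin_entropy(sample)
    loo = [plugin_entropy(sample[:i] + sample[i + 1:]) for i in range(n)]
    return n * h - (n - 1) * sum(loo) / n
```

Both corrections reduce the well-known downward bias of the plug-in estimate; the paper's contribution is the conditions under which these estimators are asymptotically normal when the alphabet is countably infinite.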

Open AccessArticle Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies
Entropy 2018, 20(5), 372; https://doi.org/10.3390/e20050372
Received: 27 January 2018 / Revised: 22 April 2018 / Accepted: 1 May 2018 / Published: 17 May 2018
PDF Full-text (2304 KB) | HTML Full-text | XML Full-text
Abstract
The current era demands high-quality software within a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can generate bad smells in software that deteriorate its quality and reliability. The source code of open-source software is easily accessible to any developer and is thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined in information theory. The open-source software Apache Abdera is considered for calculating the bad smells. Bad smells are collected from subcomponents of the Apache Abdera project using a detection tool, and different measures of entropy (Shannon, Rényi and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that may arise in future versions of the software are predicted from the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model may be helpful for software development industries and future researchers. Full article
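The three entropy measures named in the title have standard definitions on a probability vector. A minimal sketch with natural logarithms (the function names are ours):

```python
import math

def shannon(p):
    # H = -sum p_i * log p_i
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    # H_alpha = log(sum p_i^alpha) / (1 - alpha), alpha > 0, alpha != 1;
    # tends to the Shannon entropy as alpha -> 1.
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

def tsallis(p, q):
    # S_q = (1 - sum p_i^q) / (q - 1), q != 1; also recovers the
    # Shannon entropy in the limit q -> 1.
    return (1 - sum(pi ** q for pi in p)) / (q - 1)
```

In a study like this one, each entropy is computed over the distribution of changes across subcomponents, and the resulting values feed the regression model as predictors.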

Open AccessArticle An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques
Entropy 2018, 20(5), 373; https://doi.org/10.3390/e20050373
Received: 21 April 2018 / Revised: 12 May 2018 / Accepted: 15 May 2018 / Published: 17 May 2018
PDF Full-text (1525 KB) | HTML Full-text | XML Full-text
Abstract
The topic of big data has attracted increasing interest in recent years. The emergence of big data leads to new difficulties for the protection models used for data privacy, which are necessary for sharing and processing data. Protecting individuals' sensitive information while maintaining the usability of the published data set is the most important challenge in privacy preserving. In this regard, data anonymization methods are utilized to protect data against identity disclosure and linking attacks. In this study, a novel data anonymization algorithm based on chaos and perturbation is proposed for privacy and utility preserving in big data. The performance of the proposed algorithm is evaluated in terms of Kullback–Leibler divergence, probabilistic anonymity, classification accuracy, F-measure and execution time. The experimental results show that the proposed algorithm is efficient and performs better in terms of Kullback–Leibler divergence, classification accuracy and F-measure than most of the existing algorithms on the same data set. Because it applies chaos to perturb the data, the algorithm is promising for privacy-preserving data mining and data publishing. Full article
(This article belongs to the Section Information Theory)
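The core idea of driving a small additive perturbation with a chaotic map can be sketched as follows. The choice of the logistic map, its parameters and the scaling are our illustrative assumptions, not the paper's exact algorithm.

```python
def logistic_sequence(x0, n, r=3.99):
    # Logistic map x -> r*x*(1-x) in its chaotic regime; the sequence
    # is fully deterministic in x0 but looks noise-like.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_perturb(values, x0=0.37, scale=0.05):
    # Add a bounded, chaos-driven perturbation to each numeric value.
    noise = logistic_sequence(x0, len(values))
    return [v + scale * (z - 0.5) for v, z in zip(values, noise)]
```

Because the map is deterministic in the seed `x0`, the same seed reproduces the same perturbation, while the bounded `scale` limits the utility loss measured by metrics such as Kullback-Leibler divergence.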

Open AccessArticle Robust Estimation for the Single Index Model Using Pseudodistances
Entropy 2018, 20(5), 374; https://doi.org/10.3390/e20050374
Received: 31 March 2018 / Revised: 11 May 2018 / Accepted: 14 May 2018 / Published: 17 May 2018
PDF Full-text (368 KB) | HTML Full-text | XML Full-text
Abstract
For portfolios with a large number of assets, the single index model allows the large number of covariances between individual asset returns to be expressed through a significantly smaller number of parameters. This avoids the constraint of needing very large samples to estimate the mean and the covariance matrix of the asset returns, which would be unrealistic in practice given the dynamics of market conditions. The traditional way to estimate the regression parameters in the single index model is the maximum likelihood method. Although the maximum likelihood estimators have desirable theoretical properties when the model is exactly satisfied, they may give completely erroneous results when outliers are present in the data set. In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We prove theoretical properties of the estimators, such as consistency, asymptotic normality, equivariance and robustness, and illustrate the benefits of the new portfolio optimization method on real financial data. Full article

Open AccessArticle Transition of Transient Channel Flow with High Reynolds Number Ratios
Entropy 2018, 20(5), 375; https://doi.org/10.3390/e20050375
Received: 24 March 2018 / Revised: 14 May 2018 / Accepted: 15 May 2018 / Published: 17 May 2018
PDF Full-text (9291 KB) | HTML Full-text | XML Full-text
Abstract
Large-eddy simulations of turbulent channel flow subjected to a step-like acceleration have been performed to investigate the effect of high Reynolds number ratios on the transient behaviour of turbulence. It is shown that the response of the flow exhibits the same fundamental characteristics described in He & Seddighi (J. Fluid Mech., vol. 715, 2013, pp. 60–102 and vol. 764, 2015, pp. 395–427)—a three-stage response resembling that of the bypass transition of boundary layer flows. The features of transition are seen to become more striking as the Re-ratio increases—the elongated streaks become stronger and longer, and the initial turbulent spot sites at the onset of transition become increasingly sparse. The critical Reynolds number of transition and the transition period Reynolds number for those cases are shown to deviate from the trends of He & Seddighi (2015). The high Re-ratio cases show double peaks in the transient response of streamwise fluctuation profiles shortly after the onset of transition. Conditionally-averaged turbulent statistics based on a λ_2-criterion are used to show that the two peaks in the fluctuation profiles are due to separate contributions of the active and inactive regions of turbulence generation. The peak closer to the wall is attributed to the generation of “new” turbulence in the active region, whereas the peak farther away from the wall is attributed to the elongated streaks in the inactive region. In the low Re-ratio cases, the peaks of these two regions are close to each other during the entire transient, resulting in a single peak in the domain-averaged profile. Full article