Entropy, Volume 20, Issue 5 (May 2018) – 84 articles

Cover Story: The so-called quantum trajectories arising from the Bohm theory should not be called "surreal", as they are averages of the Feynman paths used in the standard approach. A numerical investigation of a double Stern–Gerlach experiment shows the coupling between the spin and the center-of-mass motion and the role of the quantum potential within the Bohm formalism. The results reveal that the spin evolves smoothly between the magnets and the screen, contrary to the prediction of standard quantum theory.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.

Editorial

Jump to: Research, Review, Other

4 pages, 147 KiB  
Editorial
Editorial: Entropy in Landscape Ecology
by Samuel A. Cushman
Entropy 2018, 20(5), 314; https://doi.org/10.3390/e20050314 - 25 Apr 2018
Cited by 12 | Viewed by 3267
Abstract
Entropy and the second law of thermodynamics are the central organizing principles of nature, but the ideas and implications of the second law are poorly developed in landscape ecology. The purpose of this Special Issue of Entropy, “Entropy in Landscape Ecology”, is to bring together current research on applications of thermodynamics in landscape ecology, to consolidate current knowledge and to identify key areas for future research. The Special Issue contains six articles, which cover a broad range of topics, including relationships between entropy and evolution, connections between fractal geometry and entropy, new approaches to calculating the configurational entropy of landscapes, example analyses of computing the entropy of landscapes, and the use of entropy in the context of optimal landscape planning. Collectively, these papers provide a broad range of contributions to the nascent field of ecological thermodynamics. Formalizing the connections between entropy and ecology is at a very early stage; this Special Issue contains papers that address several centrally important ideas and provides seminal work that will be a foundation for the future development of ecological and evolutionary thermodynamics. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
4 pages, 342 KiB  
Editorial
Entropy in Nanofluids
by Giulio Lorenzini and Omid Mahian
Entropy 2018, 20(5), 339; https://doi.org/10.3390/e20050339 - 3 May 2018
Cited by 5 | Viewed by 2227
(This article belongs to the Section Thermodynamics)

3 pages, 188 KiB  
Editorial
Molecular Dynamics vs. Stochastic Processes: Are We Heading Anywhere?
by Giovanni Ciccotti, Mauro Ferrario and Christof Schütte
Entropy 2018, 20(5), 348; https://doi.org/10.3390/e20050348 - 7 May 2018
Cited by 3 | Viewed by 3777
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

Research

Jump to: Editorial, Review, Other

22 pages, 23524 KiB  
Article
Studies on the Exergy Transfer Law for the Irreversible Process in the Waxy Crude Oil Pipeline Transportation
by Qinglin Cheng, Anbo Zheng, Shuang Song, Hao Wu, Lili Lv and Yang Liu
Entropy 2018, 20(5), 309; https://doi.org/10.3390/e20050309 - 24 Apr 2018
Cited by 3 | Viewed by 2978
Abstract
With the increasing demand for oil products in China, the energy consumption of pipeline operation will continue to rise greatly, as will the cost of oil transportation. In practical engineering, saving energy, reducing energy consumption and adapting to the international oil situation are the development trends, and they present difficult problems. Based on the basic principles of non-equilibrium thermodynamics, this paper derives the field equilibrium equations of the non-equilibrium thermodynamic process of pipeline transportation. To obtain the bilinear “force”–“flow” form of the entropy generation rate in non-equilibrium thermodynamics, the exergy balance equation of the oil pipeline and the dynamic equation of irreversible exergy transfer were established. The exergy balance equation was incorporated into the energy balance evaluation system, making that system more complete. The exergy flow transfer law of the waxy crude oil pipeline was explored in depth along the directions of dynamic exergy, pressure exergy, thermal exergy and diffusion exergy. Taking an oil pipeline as an example, the factors influencing the exergy transfer coefficient and the exergy flow density were analyzed separately. Full article
(This article belongs to the Section Thermodynamics)

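The exergy components named in the abstract above (thermal, pressure and dynamic/kinetic exergy of a flowing stream) can be illustrated with a minimal numerical sketch. This is the generic textbook formula for the specific flow exergy of an ideal-gas-like stream relative to a dead state (T0, p0), not the pipeline-specific equations derived in the paper; the property values cp and R are assumed purely for illustration.

```python
import math

def flow_exergy(T, p, v, T0=293.15, p0=101325.0, cp=2000.0, R=287.0):
    """Specific flow exergy (J/kg): thermal, pressure and kinetic (dynamic)
    contributions, measured relative to a dead state (T0 in K, p0 in Pa)."""
    thermal = cp * (T - T0) - T0 * cp * math.log(T / T0)
    pressure = R * T0 * math.log(p / p0)
    kinetic = 0.5 * v ** 2
    return thermal + pressure + kinetic
```

At the dead state every term vanishes, so the function returns zero there; any departure in temperature, pressure or velocity yields positive available work.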
9 pages, 2503 KiB  
Article
Exergy Analyses of Onion Drying by Convection: Influence of Dryer Parameters on Performance
by María Castro, Celia Román, Marcelo Echegaray, Germán Mazza and Rosa Rodriguez
Entropy 2018, 20(5), 310; https://doi.org/10.3390/e20050310 - 25 Apr 2018
Cited by 27 | Viewed by 4753
Abstract
This research work is concerned with the exergy analysis of the continuous-convection drying of onion. The influence of temperature and air velocity was studied in terms of exergy parameters. The energy and exergy balances were carried out over the onion drying chamber. Its behavior was analyzed based on exergy efficiency, exergy loss rate, exergetic improvement potential rate, and sustainability index. The exergy loss rate increases as the temperature and air velocity increase, because the overall heat transfer coefficient varies with these operating conditions. On the other hand, the exergy efficiency increases with air velocity: energy utilization improves because most of the supplied energy is used for moisture evaporation. However, the exergy efficiency decreases as the temperature rises, because less free moisture is available and the moisture must diffuse from the internal structure to the surface. The exergetic improvement potential rate values show that the exergy efficiency of the onion drying process can be improved. The sustainability index of the drying chamber varied from 1.9 to 5.1. To reduce the environmental impact of the process, the operating parameters must be adjusted to improve its exergy efficiency. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

15 pages, 18706 KiB  
Article
Network Entropy for the Sequence Analysis of Functional Connectivity Graphs of the Brain
by Chi Zhang, Fengyu Cong, Tuomo Kujala, Wenya Liu, Jia Liu, Tiina Parviainen and Tapani Ristaniemi
Entropy 2018, 20(5), 311; https://doi.org/10.3390/e20050311 - 25 Apr 2018
Cited by 19 | Viewed by 5614
Abstract
Dynamic representation of functional brain networks involved in the sequence analysis of functional connectivity graphs of the brain (FCGB) gains advances in uncovering evolved interaction mechanisms. However, most of the networks, even the event-related ones, are highly heterogeneous due to spurious interactions, which brings challenges to revealing the change patterns of interactive information in the complex dynamic process. In this paper, we propose a network entropy (NE) method to measure the connectivity uncertainty of FCGB sequences to alleviate the spurious interaction problem in dynamic network analysis and to realize associations with different events during a complex cognitive task. The proposed dynamic analysis approach calculates the adjacency matrices from ongoing electroencephalogram (EEG) signals in a sliding time-window to form the FCGB sequences. The probability distribution in the Shannon entropy is replaced by the connection sequence distribution to measure the uncertainty of the FCGB, constituting the NE. Without averaging, we used the time-frequency transform of the NE of FCGB sequences to analyze the event-related changes in oscillatory activity in single-trial traces during the complex cognitive process of driving. Finally, the results of a verification experiment showed that the NE of the FCGB sequences has a certain time-locked performance for different events related to driver fatigue in a prolonged driving task. The time errors between the extracted time of high-power NE and the recorded time of event occurrence were distributed within the range [−30 s, 30 s] and 90.1% of the time errors were distributed within the range [−10 s, 10 s]. The high correlation (r = 0.99997, p < 0.001) between the timing characteristics of the two types of signals indicates that the NE can reflect the actual dynamic interaction states of the brain. Thus, the method may have potential implications for cognitive studies and for the detection of physiological states. Full article
(This article belongs to the Special Issue Graph and Network Entropies)

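The core idea of the NE measure — replacing the usual probability distribution in Shannon entropy with the distribution of edge occurrences across a graph sequence — can be sketched in a few lines. This is a simplified illustrative reading, not the authors' exact pipeline (which builds the graphs from sliding-window EEG and applies a time-frequency transform afterwards).

```python
import math

def network_entropy(adj_seq):
    """Shannon entropy (bits) of the edge-occurrence distribution over a
    sequence of symmetric binary adjacency matrices."""
    n = len(adj_seq[0])
    counts = []
    for i in range(n):
        for j in range(i + 1, n):          # upper triangle: each edge once
            c = sum(a[i][j] for a in adj_seq)
            if c:
                counts.append(c)
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts)
```

For a sequence in which every edge appears equally often, the entropy reduces to log2 of the number of edges, the maximum-uncertainty case.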
12 pages, 791 KiB  
Article
Password Security as a Game of Entropies
by Stefan Rass and Sandra König
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312 - 25 Apr 2018
Cited by 15 | Viewed by 5907
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing matters of memorability of the password (measured by Shannon entropy), opposed to the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive efforts of player 1 tied to changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The model and contribution are thus twofold: (i) it applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process under different angles (and not a given password itself, since this cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely we analyze the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
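The three entropy notions assigned to the three objectives in the abstract can be computed directly for a toy password-choice distribution. The distribution below is invented for illustration: memorability is scored by Shannon entropy, guessing difficulty by min-entropy, and the cost of changing policy by KL divergence, following the abstract's assignments.

```python
import math

def shannon(p):
    """Memorability proxy: Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    """Guessing difficulty: min-entropy, driven by the most likely choice."""
    return -math.log2(max(p))

def kl(p, q):
    """Cost of moving from policy p to policy q: Kullback–Leibler divergence."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

skewed = [0.5, 0.25, 0.125, 0.125]    # convenient but guessable policy
uniform = [0.25, 0.25, 0.25, 0.25]    # hardest-to-guess policy
```

Min-entropy never exceeds Shannon entropy, and the uniform policy maximizes both — which is exactly the tension the game model trades off against memorability and switching cost.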
18 pages, 4377 KiB  
Article
Thermodynamic and Economic Analysis of an Integrated Solar Combined Cycle System
by Shucheng Wang, Zhongguang Fu, Sajid Sajid, Tianqing Zhang and Gaoqiang Zhang
Entropy 2018, 20(5), 313; https://doi.org/10.3390/e20050313 - 25 Apr 2018
Cited by 10 | Viewed by 4046
Abstract
Integrating solar thermal energy into the conventional Combined Cycle Power Plant (CCPP) has been proved to be an efficient way to use solar energy and improve the generation efficiency of CCPP. In this paper, the energy, exergy, and economic (3E) methods were applied to the models of the Integrated Solar Combined Cycle System (ISCCS). The performances of the proposed system were not only assessed by energy and exergy efficiency, as well as exergy destruction, but also through varied thermodynamic parameters such as DNI and Ta. Besides, to better understand the real potentials for improving the components, exergy destruction was split into endogenous/exogenous and avoidable/unavoidable parts. Results indicate that the combustion chamber of the gas turbine has the largest endogenous and unavoidable exergy destruction values of 202.23 MW and 197.63 MW, and the values of the parabolic trough solar collector are 51.77 MW and 50.01 MW. For the overall power plant, the exogenous and avoidable exergy destruction rates resulted in 17.61% and 17.78%, respectively. In addition, the proposed system can save a fuel cost of 1.86 $/MW·h per year accompanied by reducing CO2 emissions of about 88.40 kg/MW·h, further highlighting the great potential of ISCCS. Full article
(This article belongs to the Section Thermodynamics)

16 pages, 17206 KiB  
Article
Virtual Network Embedding Based on Graph Entropy
by Jingjing Zhang, Chenggui Zhao, Honggang Wu, Minghui Lin and Ren Duan
Entropy 2018, 20(5), 315; https://doi.org/10.3390/e20050315 - 25 Apr 2018
Cited by 4 | Viewed by 3502
Abstract
For embedding virtual networks into a large-scale substrate network, a massive amount of time is needed to search the resource space, even if the scale of the virtual network is small. The complexity of searching for candidate resources can be reduced if candidates in the substrate network can be located in a group of particularly matched areas, in which the resource distribution and communication structure of the substrate network exhibit maximal similarity with the objective virtual network. This work proposes to discover the most suitable resources in a substrate network for the objective virtual network by comparing their graph entropies. To this end, the substrate network is divided into substructures according to the importance of the nodes in it, and the entropies of these substructures are calculated. The virtual network is embedded preferentially into the substructure with the closest entropy, provided the substrate resources satisfy the demands of the virtual network. The experimental results validate that the efficiency of virtual network embedding can be improved through our proposal, while the quality of the embedding is maintained without significant degradation. Full article
(This article belongs to the Special Issue Graph and Network Entropies)

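One concrete notion of graph entropy that could drive such a structural matching is the entropy of the normalized degree sequence. The sketch below is a generic illustration of comparing substructures by entropy; it is not the specific entropy or node-importance ranking defined in the paper.

```python
import math

def degree_entropy(edges, n):
    """Shannon entropy (bits) of the normalized degree sequence of a graph
    with n nodes — one common scalar 'graph entropy' for structure matching."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    total = sum(deg)
    return -sum((d / total) * math.log2(d / total) for d in deg if d)
```

A regular substructure (all degrees equal) attains the maximum entropy for its size, while a hub-and-spoke substructure scores differently — so comparing entropies gives a cheap first filter before detailed resource checks.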
15 pages, 328 KiB  
Article
Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio
by Yuejiao Fu, Hangjing Wang and Augustine Wong
Entropy 2018, 20(5), 316; https://doi.org/10.3390/e20050316 - 25 Apr 2018
Cited by 1 | Viewed by 3153
Abstract
The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically distributed as the chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie’s method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method. Full article
(This article belongs to the Special Issue Foundations of Statistics)

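For reference, the quantity under study is the sample Sharpe ratio: mean excess return over sample standard deviation. The minimal computation below is only the point estimate, not the paper's adjusted empirical likelihood machinery, which concerns inference (confidence statements) about this ratio.

```python
import math

def sharpe_ratio(returns, rf=0.0):
    """Sample Sharpe ratio: (mean return - risk-free rate) / sample std dev."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return (mean - rf) / math.sqrt(var)
```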
10 pages, 995 KiB  
Article
Divergence from, and Convergence to, Uniformity of Probability Density Quantiles
by Robert G. Staudte and Aihua Xia
Entropy 2018, 20(5), 317; https://doi.org/10.3390/e20050317 - 25 Apr 2018
Cited by 4 | Viewed by 3527
Abstract
We demonstrate that questions of convergence and divergence regarding shapes of distributions can be carried out in a location- and scale-free environment. This environment is the class of probability density quantiles (pdQs), obtained by normalizing the composition of the density with the associated quantile function. It has earlier been shown that the pdQ is representative of a location-scale family and carries essential information regarding the shape and tail behavior of the family. The class of pdQs consists of densities of continuous distributions with a common domain, the unit interval, facilitating metric and semi-metric comparisons. The Kullback–Leibler divergences from uniformity of these pdQs are mapped to illustrate their relative positions with respect to uniformity. To gain more insight into the information that is conserved under the pdQ mapping, we repeatedly apply the pdQ mapping and find that further applications of it are quite generally entropy increasing, so convergence to the uniform distribution is investigated. New fixed point theorems are established with elementary probabilistic arguments and illustrated by examples. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

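Since a pdQ lives on the unit interval, its KL divergence from uniformity reduces to the integral of f ln f. The numerical sketch below uses an invented triangular density f(u) = 2u as a stand-in for a pdQ; its exact divergence is ln 2 − 1/2.

```python
import math

def kl_from_uniform(f, n=200000):
    """KL divergence D(f || U[0,1]) = integral of f(u) ln f(u) du over [0,1],
    approximated by the midpoint rule with n subintervals."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        fu = f(u)
        if fu > 0:                  # f ln f -> 0 as f -> 0
            total += fu * math.log(fu) * h
    return total
```

The divergence is zero exactly when the density is already uniform, which is the fixed point the article's convergence results concern.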
28 pages, 2989 KiB  
Article
Quantifying Configuration-Sampling Error in Langevin Simulations of Complex Molecular Systems
by Josh Fass, David A. Sivak, Gavin E. Crooks, Kyle A. Beauchamp, Benedict Leimkuhler and John D. Chodera
Entropy 2018, 20(5), 318; https://doi.org/10.3390/e20050318 - 26 Apr 2018
Cited by 23 | Viewed by 6722
Abstract
While Langevin integrators are popular in the study of equilibrium properties of complex systems, it is challenging to estimate the timestep-induced discretization error: the degree to which the sampled phase-space or configuration-space probability density departs from the desired target density due to the use of a finite integration timestep. Sivak et al. introduced a convenient approach to approximating a natural measure of error between the sampled density and the target equilibrium density, the Kullback–Leibler (KL) divergence, in phase space, but did not specifically address the issue of configuration-space properties, which are much more commonly of interest in molecular simulations. Here, we introduce a variant of this near-equilibrium estimator capable of measuring the error in the configuration-space marginal density, validating it against a complex but exact nested Monte Carlo estimator to show that it reproduces the KL divergence with high fidelity. To illustrate its utility, we employ this new near-equilibrium estimator to assess a claim that a recently proposed Langevin integrator introduces extremely small configuration-space density errors up to the stability limit at no extra computational expense. Finally, we show how this approach to quantifying sampling bias can be applied to a wide variety of stochastic integrators by following a straightforward procedure to compute the appropriate shadow work, and describe how it can be extended to quantify the error in arbitrary marginal or conditional distributions of interest. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

13 pages, 1250 KiB  
Article
Distributed One Time Password Infrastructure for Linux Environments
by Alberto Benito Peral, Ana Lucila Sandoval Orozco, Luis Javier García Villalba and Tai-Hoon Kim
Entropy 2018, 20(5), 319; https://doi.org/10.3390/e20050319 - 26 Apr 2018
Cited by 1 | Viewed by 4146
Abstract
Nowadays, a great deal of critical information and many services are hosted on computer systems. Proper access control to these resources is essential to avoid malicious actions that could cause huge losses to home and professional users. Access control systems have evolved from the first password-based systems to modern mechanisms using smart cards, certificates, tokens, biometric systems, etc. However, when designing a system, it is necessary to take into account its particular limitations, such as connectivity, infrastructure or budget. In addition, one of the main objectives must be to ensure the system's usability, but this property is usually at odds with security. Thus, the use of passwords is still common. In this paper, we present a new password-based access control system that aims to improve password security with minimal impact on system usability. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)

14 pages, 2781 KiB  
Article
The Relationship between Postural Stability and Lower-Limb Muscle Activity Using an Entropy-Based Similarity Index
by Chien-Chih Wang, Bernard C. Jiang and Pei-Min Huang
Entropy 2018, 20(5), 320; https://doi.org/10.3390/e20050320 - 26 Apr 2018
Cited by 6 | Viewed by 4168
Abstract
The aim of this study is to determine whether centre-of-pressure (COP) measurements of postural stability can be used to represent electromyography (EMG) measurements of lower-limb muscle activity. If so, the cost-effective COP measurements can be used to indicate the level of postural stability and lower-limb muscle activity. The Hilbert–Huang Transform method was used to analyse data from an experiment designed to examine the correlation between the lower-limb muscles and postural stability. We randomly selected 24 university students to participate in eight scenarios and simultaneously measured their COP and EMG signals during the experiments. Empirical Mode Decomposition was used to identify the intrinsic mode functions (IMFs) that can distinguish between the COP and EMG at different states. Subsequently, similarity indices and synchronization analyses were used to calculate the correlation between lower-limb muscle strength and postural stability. The IMF5 of the COP signals and the IMF6 of the EMG signals were not significantly different, and the average frequency was 0.8 Hz, with a range of 0–2 Hz. When postural stability was poor, the COP and EMG had high synchronization, with index values within the range of 0.010–0.015. With good postural stability, the synchronization indices were between 0.006 and 0.080, and both exhibited low synchronization. The COP signals and the low-frequency EMG signals were highly correlated. In conclusion, we demonstrated that the COP may provide enough information on postural stability without the EMG data. Full article

20 pages, 1710 KiB  
Article
Finite Difference Method for Time-Space Fractional Advection–Diffusion Equations with Riesz Derivative
by Sadia Arshad, Dumitru Baleanu, Jianfei Huang, Maysaa Mohamed Al Qurashi, Yifa Tang and Yue Zhao
Entropy 2018, 20(5), 321; https://doi.org/10.3390/e20050321 - 26 Apr 2018
Cited by 28 | Viewed by 4953
Abstract
In this article, a numerical scheme is formulated and analysed to solve the time-space fractional advection–diffusion equation, where the Riesz derivative and the Caputo derivative are considered in the spatial and temporal directions, respectively. The Riesz space derivative is approximated by the second-order fractional weighted and shifted Grünwald–Letnikov formula. Based on the equivalence between the fractional differential equation and the integral equation, we have transformed the fractional differential equation into an equivalent integral equation. Then, the integral is approximated by the trapezoidal formula. Further, the stability and convergence analysis are discussed rigorously. The resulting scheme is formally proved to be second-order accurate in both space and time. Numerical experiments are also presented to verify the theoretical analysis. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)

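The building block of such schemes is the set of Grünwald–Letnikov coefficients g_k = (−1)^k (α choose k), generated by a standard one-line recurrence. The sketch below produces only these plain coefficients; the second-order weighted and shifted formula used in the paper combines them at shifted indices, which is omitted here.

```python
def gl_weights(alpha, m):
    """First m+1 Grünwald–Letnikov coefficients g_k = (-1)^k * C(alpha, k),
    via the recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k), g_0 = 1."""
    g = [1.0]
    for k in range(1, m + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g
```

As a sanity check, alpha = 1 recovers the ordinary first-difference stencil [1, −1, 0, ...].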
10 pages, 5288 KiB  
Article
A New Two-Dimensional Map with Hidden Attractors
by Chuanfu Wang and Qun Ding
Entropy 2018, 20(5), 322; https://doi.org/10.3390/e20050322 - 27 Apr 2018
Cited by 42 | Viewed by 4685
Abstract
Investigations of hidden attractors have mainly been carried out in continuous-time dynamic systems, and there are few investigations of hidden attractors in discrete-time dynamic systems. The classical chaotic attractors of the Logistic map, Tent map, Hénon map, Arnold's cat map, and other widely known chaotic maps are those excited from unstable fixed points. In this paper, the hidden dynamics of a new two-dimensional map inspired by Arnold's cat map is investigated, and the existence of fixed points and their stabilities are studied in detail. Full article

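For context, the classical Arnold cat map that inspired the new map can be iterated in a few lines; its chaotic set is self-excited, reachable from the neighbourhood of the unstable fixed point at the origin, which is exactly the property the paper's new map is constructed to avoid. The code below is the textbook cat map, not the new map proposed in the paper.

```python
def cat_map(x, y):
    """One step of Arnold's cat map on the unit torus."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def orbit(x, y, steps):
    """Iterate the map, returning the visited points including the start."""
    pts = [(x, y)]
    for _ in range(steps):
        x, y = cat_map(x, y)
        pts.append((x, y))
    return pts
```

The origin is a fixed point, and rational initial conditions yield periodic orbits — e.g. (1/2, 1/2) returns to itself after three steps.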
12 pages, 1250 KiB  
Article
Transfer Information Energy: A Quantitative Indicator of Information Transfer between Time Series
by Angel Caţaron and Răzvan Andonie
Entropy 2018, 20(5), 323; https://doi.org/10.3390/e20050323 - 27 Apr 2018
Cited by 8 | Viewed by 3534
Abstract
We introduce an information-theoretical approach for analyzing information transfer between time series. Rather than using the Transfer Entropy (TE), we define and apply the Transfer Information Energy (TIE), which is based on Onicescu’s Information Energy. Whereas the TE can be used as a measure of the reduction in uncertainty about one time series given another, the TIE may be viewed as a measure of the increase in certainty about one time series given another. We compare the TIE and the TE in two known time series prediction applications. First, we analyze stock market indexes from the Americas, Asia/Pacific and Europe, with the goal to infer the information transfer between them (i.e., how they influence each other). In the second application, we take a bivariate time series of the breath rate and instantaneous heart rate of a sleeping human suffering from sleep apnea, with the goal to determine the information transfer heart → breath vs. breath → heart. In both applications, the computed TE and TIE values are strongly correlated, meaning that the TIE can substitute the TE for such applications, even if they measure symmetric phenomena. The advantage of using the TIE is computational: we can obtain similar results, but faster. Full article

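Onicescu's information energy, the ingredient that replaces Shannon entropy in the TIE construction, is simply the sum of squared probabilities: maximal (1) for a certain outcome and minimal (1/n) for a uniform distribution, mirroring entropy with no logarithms to evaluate — which is the source of the computational advantage the abstract mentions.

```python
def info_energy(p):
    """Onicescu information energy E(p) = sum of p_i^2.
    High for concentrated (certain) distributions, low for uniform ones."""
    return sum(x * x for x in p)
```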
14 pages, 3557 KiB  
Article
Application of Multiscale Entropy in Mechanical Fault Diagnosis of High Voltage Circuit Breaker
by Longjiang Dou, Shuting Wan and Changgeng Zhan
Entropy 2018, 20(5), 325; https://doi.org/10.3390/e20050325 - 28 Apr 2018
Cited by 23 | Viewed by 2887
Abstract
Mechanical fault diagnosis of a circuit breaker can help improve the reliability of power systems. Therefore, a new method based on multiscale entropy (MSE) and the support vector machine (SVM) is proposed to diagnose the fault in high voltage circuit breakers. First, Variational Mode Decomposition (VMD) is used to process the high voltage circuit breaker’s vibration signals, and the reconstructed signal can eliminate the effect of noise. Second, the multiscale entropy of the reconstructed signal is calculated and selected as a feature vector. Finally, based on the feature vector, the fault identification and classification are realized by SVM. The feature vector constructed by multiscale entropy is compared with other feature vectors to illustrate the superiority of the proposed method. Through experimentation on a 35 kV SF6 circuit breaker, the feasibility and applicability of the proposed method for fault diagnosis are verified. Full article
22 pages, 7322 KiB  
Article
Quantization and Bifurcation beyond Square-Integrable Wavefunctions
by Ciann-Dong Yang and Chung-Hsuan Kuo
Entropy 2018, 20(5), 327; https://doi.org/10.3390/e20050327 - 29 Apr 2018
Cited by 1 | Viewed by 4395
Abstract
Probability interpretation is the cornerstone of standard quantum mechanics. To ensure the validity of the probability interpretation, wavefunctions have to satisfy the square-integrable (SI) condition, which gives rise to the well-known phenomenon of energy quantization in confined quantum systems. On the other hand, nonsquare-integrable (NSI) solutions to the Schrödinger equation are usually ruled out and have long been believed to be irrelevant to energy quantization. This paper proposes a quantum-trajectory approach to energy quantization by releasing the SI condition and considering both SI and NSI solutions to the Schrödinger equation. Contrary to our common belief, we find that both SI and NSI wavefunctions contribute to energy quantization. SI wavefunctions help to locate the bifurcation points at which energy has a step jump, while NSI wavefunctions form the flat parts of the stair-like distribution of the quantized energies. The consideration of NSI wavefunctions furthermore reveals a new quantum phenomenon regarding the synchronicity between the energy quantization process and the center-saddle bifurcation process. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
16 pages, 249 KiB  
Article
The Gibbs Paradox: Lessons from Thermodynamics
by Janneke Van Lith
Entropy 2018, 20(5), 328; https://doi.org/10.3390/e20050328 - 30 Apr 2018
Cited by 4 | Viewed by 3861
Abstract
The Gibbs paradox in statistical mechanics is often taken to indicate that already in the classical domain particles should be treated as fundamentally indistinguishable. This paper shows, on the contrary, how one can recover the thermodynamical account of the entropy of mixing, while treating states that only differ by permutations of similar particles as distinct. By reference to the orthodox theory of thermodynamics, it is argued that entropy differences are only meaningful if they are related to reversible processes connecting the initial and final state. For mixing processes, this means that processes should be considered in which particle number is allowed to vary. Within the context of statistical mechanics, the Gibbsian grandcanonical ensemble is a suitable device for describing such processes. It is shown how the grandcanonical entropy relates in the appropriate way to changes of other thermodynamical quantities in reversible processes, and how the thermodynamical account of the entropy of mixing is recovered even when treating the particles as distinguishable. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)
15 pages, 329 KiB  
Article
Minimum Penalized ϕ-Divergence Estimation under Model Misspecification
by M. Virtudes Alba-Fernández, M. Dolores Jiménez-Gamero and F. Javier Ariza-López
Entropy 2018, 20(5), 329; https://doi.org/10.3390/e20050329 - 30 Apr 2018
Cited by 4 | Viewed by 2768
Abstract
This paper focuses on the consequences of assuming a wrong model for multinomial data when using minimum penalized ϕ-divergence, also known as minimum penalized disparity estimators, to estimate the model parameters. These estimators are shown to converge to a well-defined limit. An application of the results obtained shows that a parametric bootstrap consistently estimates the null distribution of a certain class of test statistics for model misspecification detection. An illustrative application to the accuracy assessment of the thematic quality in a global land cover map is included. Full article
33 pages, 388 KiB  
Article
Continuity of Channel Parameters and Operations under Various DMC Topologies
by Rajai Nasser
Entropy 2018, 20(5), 330; https://doi.org/10.3390/e20050330 - 30 Apr 2018
Cited by 3 | Viewed by 2402
Abstract
We study the continuity of many channel parameters and operations under various topologies on the space of equivalent discrete memoryless channels (DMC). We show that mutual information, channel capacity, Bhattacharyya parameter, probability of error of a fixed code and optimal probability of error for a given code rate and block length are continuous under various DMC topologies. We also show that channel operations such as sums, products, interpolations and Arıkan-style transformations are continuous. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
15 pages, 2890 KiB  
Article
Non-Gaussian Systems Control Performance Assessment Based on Rational Entropy
by Jinglin Zhou, Yiqing Jia, Huixia Jiang and Shuyi Fan
Entropy 2018, 20(5), 331; https://doi.org/10.3390/e20050331 - 1 May 2018
Cited by 9 | Viewed by 2489
Abstract
Control loop Performance Assessment (CPA) plays an important role in system operations. Stochastic statistical CPA indices, such as the minimum variance controller (MVC)-based index, are among the most widely used. In this paper, a new minimum entropy controller (MEC)-based CPA method for linear non-Gaussian systems is proposed. In this method, the probability density function (PDF) and rational entropy (RE) are used to describe, respectively, the characteristics and the uncertainty of random variables. To better estimate the performance benchmark, an improved EDA algorithm, used to estimate the system parameters and the noise PDF, is given. The effectiveness of the proposed method is illustrated through case studies on an ARMAX system. Full article
15 pages, 2619 KiB  
Article
Applying Discrete Homotopy Analysis Method for Solving Fractional Partial Differential Equations
by Figen Özpınar
Entropy 2018, 20(5), 332; https://doi.org/10.3390/e20050332 - 1 May 2018
Cited by 12 | Viewed by 3582
Abstract
In this paper, we developed a space-discrete version of the homotopy analysis method (DHAM) to find the solutions of linear and nonlinear fractional partial differential equations with a time derivative of order α (0 < α ≤ 1). The DHAM contains the auxiliary parameter ℏ, which provides a simple way to control the convergence region of the solution series. The efficiency and accuracy of the proposed method are demonstrated on test problems with initial conditions. The results obtained are compared with the exact solutions when α = 1 and are shown to be in good agreement. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
14 pages, 286 KiB  
Article
Principal Curves for Statistical Divergences and an Application to Finance
by Ana Flávia P. Rodrigues and Charles Casimiro Cavalcante
Entropy 2018, 20(5), 333; https://doi.org/10.3390/e20050333 - 2 May 2018
Cited by 3 | Viewed by 2731
Abstract
This paper proposes a method for the beta pricing model under the consideration of non-Gaussian returns, by means of a generalization of the mean-variance model and the use of principal curves to define a divergence model for the optimization of the pricing model. We rely on the q-exponential model and consider the properties of the divergences which are used to describe the statistical model and fully characterize the behavior of the assets. We derive the minimum divergence portfolio, which generalizes Markowitz’s approach to a mean-divergence one, and, relying on the information-geometrical aspects of the distributions, the Capital Asset Pricing Model (CAPM) is then derived under the geometrical characterization of the distributions which model the data, all through the principal curves approach. We discuss the possibility of integrating our model into an adaptive procedure that can be used for the search of optimum points in finance applications. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
10 pages, 1517 KiB  
Article
Effective Boundary Slip Induced by Surface Roughness and Their Coupled Effect on Convective Heat Transfer of Liquid Flow
by Yunlu Pan, Dalei Jing, He Zhang and Xuezeng Zhao
Entropy 2018, 20(5), 334; https://doi.org/10.3390/e20050334 - 2 May 2018
Cited by 9 | Viewed by 4398
Abstract
As a significant interfacial property of micro/nano fluidic systems, the effective boundary slip can be induced by surface roughness. However, the effect of surface roughness on the effective slip is still not clear: both increased and decreased effective boundary slip have been found with increased roughness. The present work develops a simplified model to study the effect of surface roughness on the effective boundary slip. In the created rough models, the reference position of the rough surfaces used to determine the effective boundary slip was set based on the ISO/ASME standard, and the surface roughness parameters, including Ra (arithmetical mean deviation of the assessed profile), Rsm (mean width of the assessed profile elements) and the shape of the texture, were varied to form different surface roughnesses. Then, the effective boundary slip of fluid flow over the rough surface was analyzed using COMSOL 5.3. The results show that the effective boundary slip induced by the surface roughness of a fully wetted rough surface remains negative and further decreases with increasing Ra or decreasing Rsm. Different shapes of the roughness texture also result in different effective slip. A simplified correction method for the measured effective boundary slip was developed and proved to be efficient when Rsm is no larger than 200 nm. Another important finding of the present work is that the convective heat transfer first increases and then changes only slightly with increasing Ra, while the effective boundary slip keeps decreasing. It is believed that increasing Ra enlarges the area of the solid-liquid interface available for convective heat transfer; however, when Ra is large enough, the decreasing roughness-induced effective boundary slip counteracts the enhancement effect of the roughness itself on the convective heat transfer. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
10 pages, 256 KiB  
Article
What Constitutes Emergent Quantum Reality? A Complex System Exploration from Entropic Gravity and the Universal Constants
by Arno Keppens
Entropy 2018, 20(5), 335; https://doi.org/10.3390/e20050335 - 2 May 2018
Cited by 4 | Viewed by 5664
Abstract
In this work, it is acknowledged that important attempts to devise an emergent quantum (gravity) theory require space-time to be discretized at the Planck scale. It is therefore conjectured that reality is identical to a sub-quantum dynamics of ontological micro-constituents that are connected by a single interaction law. To arrive at a complex system-based toy-model identification of these micro-constituents, two strategies are combined. First, by seeing gravity as an entropic phenomenon and generalizing the dimensional reduction of the associated holographic principle, the universal constants of free space are related to assumed attributes of the micro-constituents. Second, as the effective field dynamics of the micro-constituents must eventually obey Einstein’s field equations, a sub-quantum interaction law is derived from a solution of these equations. A Planck-scale origin for thermodynamic black hole characteristics and novel views on entropic gravity theory result from this approach, which eventually provides a different view on quantum gravity and its unification with the fundamental forces. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
24 pages, 2678 KiB  
Article
Entropy Generation Analysis and Natural Convection in a Nanofluid-Filled Square Cavity with a Concentric Solid Insert and Different Temperature Distributions
by Ammar I. Alsabery, Muhamad Safwan Ishak, Ali J. Chamkha and Ishak Hashim
Entropy 2018, 20(5), 336; https://doi.org/10.3390/e20050336 - 3 May 2018
Cited by 33 | Viewed by 3789
Abstract
The problem of entropy generation analysis and natural convection in a nanofluid square cavity with a concentric solid insert and different temperature distributions is studied numerically by the finite difference method. An isothermal heater is placed on the bottom wall, while isothermal cold sources are distributed along the top and side walls of the square cavity. The remainder of these walls is kept adiabatic. Water-based nanofluids with Al2O3 nanoparticles are chosen for the investigation. The governing dimensionless parameters of this study are the nanoparticle volume fraction (0 ≤ ϕ ≤ 0.09), the Rayleigh number (10³ ≤ Ra ≤ 10⁶), the thermal conductivity ratio (0.44 ≤ Kr ≤ 23.8) and the length of the inner solid (0 ≤ D ≤ 0.7). Comparisons with previously published experimental and numerical works verify very good agreement with the proposed numerical method. Numerical results are presented graphically in the form of streamlines, isotherms and local entropy generation, as well as the local and average Nusselt numbers. The obtained results indicate that the thermal conductivity ratio and the inner solid size are excellent control parameters for the optimization of heat transfer and the Bejan number within the fully heated and partially cooled square cavity. Full article
(This article belongs to the Section Thermodynamics)
12 pages, 8814 KiB  
Article
Lyapunov Exponents of a Discontinuous 4D Hyperchaotic System of Integer or Fractional Order
by Marius-F. Danca
Entropy 2018, 20(5), 337; https://doi.org/10.3390/e20050337 - 3 May 2018
Cited by 17 | Viewed by 4679
Abstract
In this paper, the dynamics of the local finite-time Lyapunov exponents of a 4D hyperchaotic system of integer or fractional order with a discontinuous right-hand side, considered as an initial value problem, are investigated graphically. It is shown that a discontinuous system of integer or fractional order cannot be numerically integrated using methods for continuous differential equations. A possible approach for discontinuous systems is presented. To integrate the initial value problem of fractional or integer order, the discontinuous system is continuously approximated via Filippov’s regularization and Cellina’s Theorem. The Lyapunov exponents of the approximated system of integer or fractional order are represented as functions of two variables: of two parameters, or of the fractional order and one parameter, respectively. The obtained three-dimensional representation leads to comprehensive conclusions regarding the nature, differences and sign of the Lyapunov exponents in both the integer-order and fractional-order cases. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
12 pages, 1448 KiB  
Article
Pressure-Volume Work for Metastable Liquid and Solid at Zero Pressure
by Attila R. Imre, Krzysztof W. Wojciechowski, Gábor Györke, Axel Groniewsky and Jakub. W. Narojczyk
Entropy 2018, 20(5), 338; https://doi.org/10.3390/e20050338 - 3 May 2018
Viewed by 4913
Abstract
Unlike for gases, for liquids and solids the pressure of a system can be not only positive, but also negative, or even zero. Upon isobaric heat exchange (heating or cooling) at p = 0, the volume (p-V) work should be zero, assuming the general validity of the traditional equality δW = dWp = −pdV. This means that at zero pressure a special process can be realized: a macroscopic change of volume achieved by isobaric heating/cooling without any work done by the system on its surroundings or by the surroundings on the system. A neologism is proposed for these dWp = 0 (and, in general, also for non-trivial δW = 0 and W = 0) processes: “aergiatic” (from Greek: Ἀεργία, “inactivity”). In this way, two phenomenologically similar processes—adiabatic, without any heat exchange, and aergiatic, without any work—would have matching, but well-distinguishable terms. Full article
19 pages, 638 KiB  
Article
Secure and Reliable Key Agreement with Physical Unclonable Functions
by Onur Günlü, Tasnad Kernetzky, Onurcan İşcan, Vladimir Sidorenko, Gerhard Kramer and Rafael F. Schaefer
Entropy 2018, 20(5), 340; https://doi.org/10.3390/e20050340 - 3 May 2018
Cited by 30 | Viewed by 5092
Abstract
Different transforms used in binding a secret key to correlated physical-identifier outputs are compared. Decorrelation efficiency is the metric used to determine transforms that give highly-uncorrelated outputs. Scalar quantizers are applied to transform outputs to extract uniformly distributed bit sequences to which secret keys are bound. A set of transforms that perform well in terms of the decorrelation efficiency is applied to ring oscillator (RO) outputs to improve the uniqueness and reliability of extracted bit sequences, to reduce the hardware area and information leakage about the key and RO outputs, and to maximize the secret-key length. Low-complexity error-correction codes are proposed to illustrate two complete key-binding systems with perfect secrecy, and better secret-key and privacy-leakage rates than existing methods. A reference hardware implementation is also provided to demonstrate that the transform-coding approach occupies a small hardware area. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
23 pages, 929 KiB  
Article
M-SAC-VLADNet: A Multi-Path Deep Feature Coding Model for Visual Classification
by Boheng Chen, Jie Li, Gang Wei and Biyun Ma
Entropy 2018, 20(5), 341; https://doi.org/10.3390/e20050341 - 4 May 2018
Cited by 1 | Viewed by 2930
Abstract
Vector of locally aggregated descriptor (VLAD) coding has become an efficient feature coding model for retrieval and classification. In some recent works, the VLAD coding method is extended to a deep feature coding model which is called NetVLAD. NetVLAD improves significantly over the original VLAD method. Although the NetVLAD model has shown its potential for retrieval and classification, the discriminative ability is not fully researched. In this paper, we propose a new end-to-end feature coding network which is more discriminative than the NetVLAD model. First, we propose a sparsely-adaptive and covariance VLAD model. Next, we derive the back propagation models of all the proposed layers and extend the proposed feature coding model to an end-to-end neural network. Finally, we construct a multi-path feature coding network which aggregates multiple newly-designed feature coding networks for visual classification. Some experimental results show that our feature coding network is very effective for visual classification. Full article
30 pages, 3093 KiB  
Article
The Agent-Based Model and Simulation of Sexual Selection and Pair Formation Mechanisms
by Rafał Dreżewski
Entropy 2018, 20(5), 342; https://doi.org/10.3390/e20050342 - 4 May 2018
Cited by 4 | Viewed by 3083
Abstract
In this paper, an agent-based simulation model of sexual selection and pair formation mechanisms is proposed. Sexual selection is a mechanism that occurs when the numbers of individuals of both sexes are almost identical, while the reproduction costs for one of the sexes are much higher. The mechanism of creating pairs allows individuals to form stable, reproducing pairs. Simulation experiments, carried out using the proposed agent-based model and several fitness landscapes, were aimed at verifying whether sexual selection and the mechanism of pair formation can trigger sympatric speciation and whether they can promote and maintain population diversity. The experiments were mainly focused on the mechanism of pair formation and its impact on speciation and population diversity. The results of the conducted experiments show that sexual selection can start speciation processes and maintain population diversity. The mechanism of creating pairs, when it occurs along with sexual selection, has a significant impact on the course of speciation and the maintenance of population diversity. Full article
(This article belongs to the Section Complexity)
45 pages, 517 KiB  
Article
Topological Structures on DMC Spaces
by Rajai Nasser
Entropy 2018, 20(5), 343; https://doi.org/10.3390/e20050343 - 4 May 2018
Cited by 3 | Viewed by 2182
Abstract
Two channels are said to be equivalent if they are degraded from each other. The space of equivalent channels with input alphabet X and output alphabet Y can be naturally endowed with the quotient of the Euclidean topology by the equivalence relation. A topology on the space of equivalent channels with fixed input alphabet X and arbitrary but finite output alphabet is said to be natural if and only if it induces the quotient topology on the subspaces of equivalent channels sharing the same output alphabet. We show that every natural topology is σ-compact, separable and path-connected. The finest natural topology, which we call the strong topology, is shown to be compactly generated, sequential and T4. On the other hand, the strong topology is not first-countable anywhere, hence it is not metrizable. We introduce a metric distance on the space of equivalent channels which compares the noise levels between channels. The induced metric topology, which we call the noisiness topology, is shown to be natural. We also study topologies that are inherited from the space of meta-probability measures by identifying channels with their Blackwell measures. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
10 pages, 3500 KiB  
Article
A New Local Fractional Entropy-Based Model for Kidney MRI Image Enhancement
by Ala’a R. Al-Shamasneh, Hamid A. Jalab, Shivakumara Palaiahnakote, Unaizah Hanum Obaidellah, Rabha W. Ibrahim and Moumen T. El-Melegy
Entropy 2018, 20(5), 344; https://doi.org/10.3390/e20050344 - 5 May 2018
Cited by 34 | Viewed by 4901
Abstract
Kidney image enhancement is challenging due to the unpredictable quality of MRI images, as well as the nature of kidney diseases. The focus of this work is on kidney image enhancement through a new Local Fractional Entropy (LFE)-based model. The proposed model estimates the probability that pixels represent edges based on the entropy of the neighboring pixels, which results in the local fractional entropy. When there is a small change in the intensity values (indicating the presence of an edge in the image), the local fractional entropy yields fine image details. Similarly, when no change in intensity values is present (indicating smooth texture), the LFE does not provide fine details, reflecting the fact that there is no edge information. Tests were conducted on a large dataset of different, poor-quality kidney images to show that the proposed model is useful and effective. A comparative study with the classical methods, coupled with the latest enhancement methods, shows that the proposed model outperforms the existing methods. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
9 pages, 3839 KiB  
Article
Remote Sensing Extraction Method of Tailings Ponds in Ultra-Low-Grade Iron Mining Area Based on Spectral Characteristics and Texture Entropy
by Baodong Ma, Yuteng Chen, Song Zhang and Xuexin Li
Entropy 2018, 20(5), 345; https://doi.org/10.3390/e20050345 - 6 May 2018
Cited by 12 | Viewed by 3903
Abstract
With the rapid development of the steel and iron industry, ultra-low-grade iron ore has been developed extensively since the beginning of this century in China. Due to the high concentration ratio of the iron ore, a large amount of tailings was produced and many tailings ponds were established in the mining area. This poses a great threat to regional safety and the environment because of dam breaks and metal pollution. The spatial distribution is the basic information for monitoring the status of tailings ponds. Taking Changhe Mining Area as an example, tailings ponds were extracted by using Landsat 8 OLI images based on both spectral and texture characteristics. Firstly, ultra-low-grade iron-related objects (i.e., tailings and iron ore) were extracted by the Ultra-low-grade Iron-related Objects Index (ULIOI) with a threshold. Secondly, the tailings pond was distinguished from the stope due to their entropy difference in the panchromatic image at a 7 × 7 window size. This remote sensing method could be beneficial to safety and environmental management in the mining area. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
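The entropy step used to separate tailings ponds from stopes is ordinary Shannon entropy of the gray-level histogram inside a sliding window: smooth, homogeneous tailings-pond surfaces yield low local entropy, while textured stopes yield high values. A sketch of such a filter on a single-band image (the 7 × 7 window matches the abstract; the 16-level quantization is our illustrative choice, not the paper’s):

```python
import numpy as np

def local_entropy(img, win=7, levels=16):
    # Shannon entropy (bits) of the quantized gray-level histogram in each
    # win x win neighborhood. Low values indicate smooth texture.
    img = np.asarray(img)
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    pad = win // 2
    qp = np.pad(q, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = qp[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=levels) / patch.size
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

Thresholding such an entropy map over the iron-related mask would then distinguish the (low-entropy) ponds from the (high-entropy) stopes, as the abstract describes.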
11 pages, 378 KiB  
Article
Time-Fractional Diffusion with Mass Absorption in a Half-Line Domain due to Boundary Value of Concentration Varying Harmonically in Time
by Yuriy Povstenko and Tamara Kyrylych
Entropy 2018, 20(5), 346; https://doi.org/10.3390/e20050346 - 6 May 2018
Cited by 8 | Viewed by 2998
Abstract
The time-fractional diffusion equation with mass absorption is studied in a half-line domain under the Dirichlet boundary condition varying harmonically in time. The Caputo derivative is employed. The solution is obtained using the Laplace transform with respect to time and the sin-Fourier transform with respect to the spatial coordinate. The results of numerical calculations are illustrated graphically. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
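For reference, the Caputo fractional derivative of order α employed here is defined, for 0 < α < 1, by

```latex
{}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, \mathrm{d}\tau ,
  \qquad 0 < \alpha < 1 .
```

Its Laplace transform, s^α F(s) − s^(α−1) f(0), involves the initial value of f itself, which is what makes the Laplace-transform approach used in the paper natural for this initial-boundary-value problem.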
32 pages, 449 KiB  
Article
A Generalized Relative (α, β)-Entropy: Geometric Properties and Applications to Robust Statistical Inference
by Abhik Ghosh and Ayanendranath Basu
Entropy 2018, 20(5), 347; https://doi.org/10.3390/e20050347 - 6 May 2018
Cited by 5 | Viewed by 3713
Abstract
Entropy and relative entropy measures play a crucial role in mathematical information theory. The relative entropies are also widely used in statistics under the name of divergence measures, which link these two fields of science through the minimum divergence principle. Divergence measures are popular among statisticians as many of the corresponding minimum divergence methods lead to robust inference in the presence of outliers in the observed data; examples include the ϕ-divergence, the density power divergence, the logarithmic density power divergence and the recently developed family of logarithmic super divergence (LSD). In this paper, we present an alternative information-theoretic formulation of the LSD measures as a two-parameter generalization of the relative α-entropy, which we refer to as the general (α, β)-entropy. We explore its relation with various other entropies and divergences, which also generates a two-parameter extension of the Rényi entropy measure as a by-product. This paper is primarily focused on the geometric properties of the relative (α, β)-entropy, or the LSD measures; we prove their continuity and convexity in both arguments, along with an extended Pythagorean relation under a power-transformation of the domain space. We also derive a set of sufficient conditions under which the forward and the reverse projections of the relative (α, β)-entropy exist and are unique. Finally, we briefly discuss the potential applications of the relative (α, β)-entropy, or the LSD measures, in statistical inference, in particular for robust parameter estimation and hypothesis testing. Our results on the reverse projection of the relative (α, β)-entropy establish, for the first time, the existence and uniqueness of the minimum LSD estimators. Numerical illustrations are also provided for the problem of estimating the binomial parameter. Full article
19 pages, 1078 KiB  
Article
Robot Evaluation and Selection with Entropy-Based Combination Weighting and Cloud TODIM Approach
by Jing-Jing Wang, Zhong-Hua Miao, Feng-Bao Cui and Hu-Chen Liu
Entropy 2018, 20(5), 349; https://doi.org/10.3390/e20050349 - 7 May 2018
Cited by 39 | Viewed by 4591
Abstract
Nowadays, robots are commonly adopted in various manufacturing industries to improve product quality and productivity. Selecting the best robot for a specific production setting is a difficult decision-making task for manufacturers because of the increasing complexity and [...] Read more.
Nowadays, robots are commonly adopted in various manufacturing industries to improve product quality and productivity. Selecting the best robot for a specific production setting is a difficult decision-making task for manufacturers because of the increasing complexity and number of robot systems. In this paper, we explore two key issues in robot evaluation and selection: the representation of decision makers’ diversified assessments and the determination of the ranking of available robots. Specifically, a decision support model utilizing the cloud model and the TODIM (an acronym in Portuguese for interactive and multiple criteria decision making) method is developed to handle robot selection problems with hesitant linguistic information. In addition, we use an entropy-based combination weighting technique to estimate the weights of the evaluation criteria. Finally, we illustrate the proposed cloud TODIM approach with a robot selection example for an automobile manufacturer, and further validate its effectiveness and benefits via a comparative analysis. The results show that the proposed robot selection model has some unique advantages, being more realistic and flexible for robot selection in a complex and uncertain environment. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
10 pages, 449 KiB  
Article
A No-Go Theorem for Observer-Independent Facts
by Časlav Brukner
Entropy 2018, 20(5), 350; https://doi.org/10.3390/e20050350 - 8 May 2018
Cited by 91 | Viewed by 23180
Abstract
In his famous thought experiment, Wigner assigns an entangled state to the composite quantum system made up of Wigner’s friend and her observed system. While the two of them have different accounts of the process, Wigner and his friend can each in principle [...] Read more.
In his famous thought experiment, Wigner assigns an entangled state to the composite quantum system made up of Wigner’s friend and her observed system. While the two of them have different accounts of the process, Wigner and his friend can each in principle verify their respective state assignments by performing an appropriate measurement. As manifested through a click in a detector or a specific position of the pointer, the outcomes of these measurements can be regarded as reflecting directly observable “facts”. Reviewing arXiv:1507.05255, I will derive a no-go theorem for observer-independent facts, which would be common to both Wigner and the friend. I will then analyze this result in the context of a newly-derived theorem, arXiv:1604.07422, in which Frauchiger and Renner prove that “single-world interpretations of quantum theory cannot be self-consistent”. It is argued that “self-consistency” has the same implications as the assumption that observational statements of different observers can be compared within a single (and hence observer-independent) theoretical framework. The latter, however, may not be possible if the statements are to be understood as relational in the sense that their determinacy is relative to an observer. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
14 pages, 6284 KiB  
Article
Numerical Study on Entropy Generation in Thermal Convection with Differentially Discrete Heat Boundary Conditions
by Zhengdao Wang, Yikun Wei and Yuehong Qian
Entropy 2018, 20(5), 351; https://doi.org/10.3390/e20050351 - 8 May 2018
Cited by 13 | Viewed by 3584
Abstract
Entropy generation in thermal convection with differentially discrete heat boundary conditions at various Rayleigh numbers (Ra) is numerically investigated using the lattice Boltzmann method. We mainly focused on the effects of Ra and discrete heat boundary conditions on entropy generation in [...] Read more.
Entropy generation in thermal convection with differentially discrete heat boundary conditions at various Rayleigh numbers (Ra) is numerically investigated using the lattice Boltzmann method. We mainly focused on the effects of Ra and discrete heat boundary conditions on entropy generation in thermal convection according to the minimal entropy generation principle. The results showed that the presence of a discrete heat source at the bottom boundary promotes the transition to substantial convection, and the viscous entropy generation rate (Su) generally increases in magnitude in the central region of the channel with increasing Ra. The total entropy generation rate (S) and the thermal entropy generation rate (Sθ) are largest in magnitude in the region of the channel with the largest temperature gradient. Our results also indicated that the thermal, viscous, and total entropy generation increase exponentially with increasing Rayleigh number. It is noted that a lower percentage of single heat source area at the bottom boundary increases the intensities of the viscous, thermal, and total entropy generation. Compared with classical homogeneous thermal convection, the thermal, viscous, and total entropy generation are enhanced by the presence of discrete heat sources at the bottom boundary. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
32 pages, 361 KiB  
Article
Exponential Strong Converse for Source Coding with Side Information at the Decoder
by Yasutada Oohama
Entropy 2018, 20(5), 352; https://doi.org/10.3390/e20050352 - 8 May 2018
Cited by 20 | Viewed by 2743
Abstract
We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. Using side information and encoded original data, the decoder must reconstruct the original data with an arbitrary prescribed distortion level. The rate distortion region [...] Read more.
We consider the rate distortion problem with side information at the decoder posed and investigated by Wyner and Ziv. Using the side information and the encoded original data, the decoder must reconstruct the original data to within an arbitrary prescribed distortion level. The rate distortion region, indicating the trade-off between the data compression rate R and the prescribed distortion level Δ, was determined by Wyner and Ziv. In this paper, we study the error probability of decoding for pairs ( R , Δ ) outside the rate distortion region. We evaluate the probability that the decoder’s estimate of the source outputs has a distortion not exceeding the prescribed distortion level Δ. We prove that, when ( R , Δ ) is outside the rate distortion region, this probability goes to zero exponentially, and we derive an explicit lower bound on the exponent function. For the Wyner–Ziv source coding problem, the strong converse coding theorem had not previously been established; we prove it as a simple corollary of our result. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
18 pages, 8649 KiB  
Article
Quantum Trajectories: Real or Surreal?
by Basil J. Hiley and Peter Van Reeth
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353 - 8 May 2018
Cited by 11 | Viewed by 7030
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context [...] Read more.
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which shows clearly the role of the spin within the Bohm formalism and discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
15 pages, 307 KiB  
Article
Entropic Uncertainty Relations for Successive Measurements in the Presence of a Minimal Length
by Alexey E. Rastegin
Entropy 2018, 20(5), 354; https://doi.org/10.3390/e20050354 - 9 May 2018
Cited by 6 | Viewed by 3038
Abstract
We address the generalized uncertainty principle in scenarios of successive measurements. Uncertainties are characterized by means of generalized entropies of both the Rényi and Tsallis types. Here, specific features of measurements of observables with continuous spectra should be taken into account. First, we [...] Read more.
We address the generalized uncertainty principle in scenarios of successive measurements. Uncertainties are characterized by means of generalized entropies of both the Rényi and Tsallis types. Here, specific features of measurements of observables with continuous spectra must be taken into account. First, we formulate uncertainty relations in terms of Shannon entropies. Since such relations involve a state-dependent correction term, they generally differ from preparation uncertainty relations. This difference is revealed when the position is measured first. In contrast, state-independent uncertainty relations in terms of Rényi and Tsallis entropies are obtained with the same lower bounds as in the preparation scenario. These bounds depend explicitly on the acceptance function of the apparatuses in momentum measurements. Entropic uncertainty relations with binning are discussed as well. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
9 pages, 2114 KiB  
Article
Al-Ti-Containing Lightweight High-Entropy Alloys for Intermediate Temperature Applications
by Minju Kang, Ka Ram Lim, Jong Woo Won, Kwang Seok Lee and Young Sang Na
Entropy 2018, 20(5), 355; https://doi.org/10.3390/e20050355 - 9 May 2018
Cited by 26 | Viewed by 6299
Abstract
In this study, new high-entropy alloys (HEAs), which contain lightweight elements, namely Al and Ti, have been designed for intermediate temperature applications. Cr, Mo, and V were selected as the elements for the Al-Ti-containing HEAs by elemental screening using their binary phase diagrams. [...] Read more.
In this study, new high-entropy alloys (HEAs), which contain lightweight elements, namely Al and Ti, have been designed for intermediate temperature applications. Cr, Mo, and V were selected as the elements for the Al-Ti-containing HEAs by elemental screening using their binary phase diagrams. AlCrMoTi and AlCrMoTiV HEAs are confirmed as solid solutions with minor ordered B2 phases and have superb specific hardness compared to that of commercial alloys. The present work demonstrates the promising possibility of substituting Al-Ti-containing lightweight HEAs for traditional materials used at intermediate temperatures. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
14 pages, 646 KiB  
Article
Experimental Non-Violation of the Bell Inequality
by T. N. Palmer
Entropy 2018, 20(5), 356; https://doi.org/10.3390/e20050356 - 10 May 2018
Cited by 5 | Viewed by 4870
Abstract
A finite non-classical framework for qubit physics is described that challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the primacy of a fractal-like ‘invariant set’ geometry I U in cosmological state [...] Read more.
A finite non-classical framework for qubit physics is described that challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the primacy of a fractal-like ‘invariant set’ geometry I U in cosmological state space, on which the universe evolves deterministically and causally, and from which space-time and the laws of physics in space-time are emergent. Consistent with the assumed primacy of I U , a non-Euclidean (and hence non-classical) metric g p is defined in cosmological state space. Here, p is a large but finite integer (whose inverse may reflect the weakness of gravity). Points that do not lie on I U are necessarily g p -distant from points that do. g p is related to the p-adic metric of number theory. Using number-theoretic properties of spherical triangles, the Clauser-Horne-Shimony-Holt (CHSH) inequality, whose violation would rule out local realism, is shown to be undefined in this framework. Moreover, the CHSH-like inequalities violated experimentally are shown to be g p -distant from the CHSH inequality. This result fails in the singular limit p = ∞, at which g p is Euclidean and the corresponding model classical. Although Invariant Set Theory is deterministic and locally causal, it is not conspiratorial and does not compromise experimenter free will. The relationship between Invariant Set Theory, Bohmian Theory, The Cellular Automaton Interpretation of Quantum Theory and p-adic Quantum Theory is discussed. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
10 pages, 293 KiB  
Article
Exponential Entropy for Simplified Neutrosophic Sets and Its Application in Decision Making
by Jun Ye and Wenhua Cui
Entropy 2018, 20(5), 357; https://doi.org/10.3390/e20050357 - 10 May 2018
Cited by 21 | Viewed by 3145
Abstract
Entropy is one of many important mathematical tools for measuring uncertain/fuzzy information. As a subclass of neutrosophic sets (NSs), simplified NSs (including single-valued and interval-valued NSs) can describe incomplete, indeterminate, and inconsistent information. Based on the concept of fuzzy exponential entropy for fuzzy [...] Read more.
Entropy is one of many important mathematical tools for measuring uncertain/fuzzy information. As a subclass of neutrosophic sets (NSs), simplified NSs (including single-valued and interval-valued NSs) can describe incomplete, indeterminate, and inconsistent information. Based on the concept of fuzzy exponential entropy for fuzzy sets, this work proposes exponential entropy measures of simplified NSs (named simplified neutrosophic exponential entropy (SNEE) measures), including single-valued and interval-valued neutrosophic exponential entropy measures, and investigates their properties. Then, the proposed exponential entropy measures of simplified NSs are compared with existing related entropy measures of interval-valued NSs through a numerical example to illustrate the rationality and effectiveness of the proposed SNEE measures. Finally, the developed exponential entropy measures for simplified NSs are applied to a multi-attribute decision-making example in an interval-valued NS setting to demonstrate their application. Overall, the SNEE measures not only enrich the theory of simplified neutrosophic entropy, but also provide a novel way of measuring uncertain information in a simplified NS setting. Full article
54 pages, 580 KiB  
Article
Agents, Subsystems, and the Conservation of Information
by Giulio Chiribella
Entropy 2018, 20(5), 358; https://doi.org/10.3390/e20050358 - 10 May 2018
Cited by 15 | Viewed by 4156
Abstract
Dividing the world into subsystems is an important component of the scientific method. The choice of subsystems, however, is not defined a priori. Typically, it is dictated by experimental capabilities, which may be different for different agents. Here, we propose a way [...] Read more.
Dividing the world into subsystems is an important component of the scientific method. The choice of subsystems, however, is not defined a priori. Typically, it is dictated by experimental capabilities, which may be different for different agents. Here, we propose a way to define subsystems in general physical theories, including theories beyond quantum and classical mechanics. Our construction associates every agent A with a subsystem S A , equipped with its set of states and its set of transformations. In quantum theory, this construction accommodates the notion of subsystems as factors of a tensor product, as well as the notion of subsystems associated with a subalgebra of operators. Classical systems can be interpreted as subsystems of quantum systems in different ways, by applying our construction to agents who have access to different sets of operations, including multiphase covariant channels and certain sets of free operations arising in the resource theory of quantum coherence. After illustrating the basic definitions, we restrict our attention to closed systems, that is, systems where all physical transformations act invertibly and where all states can be generated from a fixed initial state. For closed systems, we show that all the states of all subsystems admit a canonical purification. This result extends the purification principle to a broader setting, in which coherent superpositions can be interpreted as purifications of incoherent mixtures. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
20 pages, 1785 KiB  
Article
Water Resources Carrying Capacity Evaluation and Diagnosis Based on Set Pair Analysis and the Improved Entropy Weight Method
by Yi Cui, Ping Feng, Juliang Jin and Li Liu
Entropy 2018, 20(5), 359; https://doi.org/10.3390/e20050359 - 11 May 2018
Cited by 112 | Viewed by 6225
Abstract
To quantitatively evaluate and diagnose the carrying capacity of regional water resources under uncertain conditions, an index system and corresponding grade criteria were constructed from the perspective of the carrying subsystems. An improved entropy weight method was then used to determine the objective weight [...] Read more.
To quantitatively evaluate and diagnose the carrying capacity of regional water resources under uncertain conditions, an index system and corresponding grade criteria were constructed from the perspective of the carrying subsystems. An improved entropy weight method was then used to determine the objective weight of each index. Next, an evaluation model was built by applying set pair analysis, and a set pair potential based on subtraction was proposed to identify the vulnerable carrying factors. Finally, an empirical study was carried out in Anhui Province. The results showed that the consistency among the objective weights of the indices was accounted for, and the uncertainty between the indices and the grade criteria was reasonably handled. Furthermore, although the carrying situation in Anhui was severe, the trend was improving. The status in Southern Anhui was superior to that in the middle area, and that in the northern part was relatively grim. For Northern Anhui, limited water resources were the chief cause of its long-term overloaded status. The improvement of capacity in the middle area was mainly hindered by its deficient ecological water consumption and limited water-saving irrigation area. Moreover, the long-term loadable condition of the southern part was due largely to its relatively abundant water resources and small population. This evaluation and diagnosis method can be widely applied to carrying capacity issues in other resource and environmental fields. Full article
(This article belongs to the Special Issue Applications of Information Theory in the Geosciences II)
18 pages, 3555 KiB  
Article
Multiscale Distribution Entropy and t-Distributed Stochastic Neighbor Embedding-Based Fault Diagnosis of Rolling Bearings
by Deyu Tu, Jinde Zheng, Zhanwei Jiang and Haiyang Pan
Entropy 2018, 20(5), 360; https://doi.org/10.3390/e20050360 - 11 May 2018
Cited by 12 | Viewed by 3374
Abstract
As a nonlinear dynamic method for measuring the complexity of time series, multiscale entropy (MSE) has been successfully applied to the fault diagnosis of rolling bearings. However, the MSE algorithm is sensitive to its predetermined parameters and depends heavily on the length of the time [...] Read more.
As a nonlinear dynamic method for measuring the complexity of time series, multiscale entropy (MSE) has been successfully applied to the fault diagnosis of rolling bearings. However, the MSE algorithm is sensitive to its predetermined parameters and depends heavily on the length of the time series; it may yield an inaccurate or even undefined entropy estimate when the time series is too short. To improve the robustness of complexity measurement for short time series, this paper proposes a novel nonlinear parameter named multiscale distribution entropy (MDE) and employs it to extract nonlinear complexity features from the vibration signals of rolling bearings. Combined with t-distributed stochastic neighbor embedding (t-SNE) for feature dimension reduction and Kriging-variable predictive model based class discrimination (KVPMCD) for automatic identification, a new intelligent fault diagnosis method for rolling bearings is proposed. Finally, the proposed approach was applied to experimental data from rolling bearings, and the results indicate that it can effectively distinguish different fault categories of rolling bearings. Full article
(This article belongs to the Section Complexity)
16 pages, 8044 KiB  
Article
A Hybrid De-Noising Algorithm for the Gear Transmission System Based on CEEMDAN-PE-TFPF
by Lili Bai, Zhennan Han, Yanfeng Li and Shaohui Ning
Entropy 2018, 20(5), 361; https://doi.org/10.3390/e20050361 - 11 May 2018
Cited by 30 | Viewed by 3302
Abstract
In order to remove noise and preserve the important features of a signal, a hybrid de-noising algorithm based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Permutation Entropy (PE), and Time-Frequency Peak Filtering (TFPF) is proposed. In view of the limitations [...] Read more.
In order to remove noise and preserve the important features of a signal, a hybrid de-noising algorithm based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Permutation Entropy (PE), and Time-Frequency Peak Filtering (TFPF) is proposed. In view of the limitations of the conventional TFPF method regarding the fixed window length problem, CEEMDAN and PE are applied to compensate for this, so that the signal is balanced with respect to both noise suppression and signal fidelity. First, the Intrinsic Mode Functions (IMFs) of the original spectra are obtained using the CEEMDAN algorithm, and the PE value of each IMF is calculated to classify whether the IMF requires filtering, then, for different IMFs, we select different window lengths to filter them using TFPF; finally, the signal is reconstructed as the sum of the filtered and residual IMFs. The filtering results of a simulated and an actual gearbox vibration signal verify that the de-noising results of CEEMDAN-PE-TFPF outperforms other signal de-noising methods, and the proposed method can reveal fault characteristic information effectively. Full article
(This article belongs to the Section Complexity)
11 pages, 422 KiB  
Article
Thermodynamics at Solid–Liquid Interfaces
by Michael Frank and Dimitris Drikakis
Entropy 2018, 20(5), 362; https://doi.org/10.3390/e20050362 - 12 May 2018
Cited by 23 | Viewed by 5517
Abstract
The variation of the liquid properties in the vicinity of a solid surface complicates the description of heat transfer along solid–liquid interfaces. Using Molecular Dynamics simulations, this investigation aims to understand how the material properties, particularly the strength of the solid–liquid interaction, affect [...] Read more.
The variation of the liquid properties in the vicinity of a solid surface complicates the description of heat transfer along solid–liquid interfaces. Using Molecular Dynamics simulations, this investigation aims to understand how the material properties, particularly the strength of the solid–liquid interaction, affect the thermal conductivity of the liquid at the interface. The molecular model consists of liquid argon confined by two parallel, smooth, solid walls, separated by a distance of 6.58 σ. We find that the component of the thermal conductivity parallel to the surface increases with the affinity of the solid and liquid. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
17 pages, 447 KiB  
Article
Quantifying the Effects of Topology and Weight for Link Prediction in Weighted Complex Networks
by Bo Liu, Shuang Xu, Ting Li, Jing Xiao and Xiao-Ke Xu
Entropy 2018, 20(5), 363; https://doi.org/10.3390/e20050363 - 13 May 2018
Cited by 20 | Viewed by 4265
Abstract
In weighted networks, both link weight and topological structure are significant characteristics for link prediction. In this study, a general framework combining null models is proposed to quantify the impact of the topology, weight correlation and statistics on link prediction in weighted networks. [...] Read more.
In weighted networks, both link weight and topological structure are significant characteristics for link prediction. In this study, a general framework combining null models is proposed to quantify the impact of the topology, weight correlation and statistics on link prediction in weighted networks. Three null models for the topology and weight distribution of weighted networks are presented. All the links of the original network can be divided into strong and weak ties, and the null models can be used to verify whether a strong effect of weak or strong ties exists. For two important statistics, we construct two null models to measure their impact on link prediction. In our experiments, the proposed method is applied to seven empirical networks, which demonstrates that the model is universal and that the impact of the topology and weight distribution of these networks on link prediction can be quantified by it. We find that in the USAir, the Celegans, the Gemo, the Lesmis and the CatCortex, the strong ties are easier to predict, but there are a few networks whose weak edges can be predicted more easily, such as the Netscience and the CScientists. It is also found that the weak ties contribute more to link prediction in the USAir, the NetScience and the CScientists; that is, the strong effect of weak ties exists in these networks. The proposed framework is versatile and applicable not only to link prediction but also to other problems in complex networks. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
13 pages, 10219 KiB  
Article
Fault Diagnosis of Gearboxes Using Nonlinearity and Determinism by Generalized Hurst Exponents of Shuffle and Surrogate Data
by Chunhong Dou, Xueye Wei and Jinshan Lin
Entropy 2018, 20(5), 364; https://doi.org/10.3390/e20050364 - 14 May 2018
Cited by 10 | Viewed by 2973
Abstract
Vibrations of defective gearboxes show great complexity, and their dynamics and noise levels vary with operating conditions. As a result, the nonlinearity and determinism of the data can serve to describe the running conditions of gearboxes. However, measuring the nonlinearity and determinism [...] Read more.
Vibrations of defective gearboxes show great complexity, and their dynamics and noise levels vary with operating conditions. As a result, the nonlinearity and determinism of the data can serve to describe the running conditions of gearboxes. However, measuring the nonlinearity and determinism of data is challenging. This paper defines a two-dimensional measure for simultaneously quantifying the nonlinearity and determinism of data by comparing the generalized Hurst exponents of the original, shuffled and surrogate data. This paper then proposes a novel method for the fault diagnosis of gearboxes using the two-dimensional measure. The robustness of the proposed method was validated numerically by analyzing simulated signals with different noise levels. Moreover, the performance of the proposed method was benchmarked against Approximate Entropy, Sample Entropy, Permutation Entropy and Delay Vector Variance in two independent gearbox experiments. The results show that the proposed method outperforms the others in the fault diagnosis of gearboxes. Full article
21 pages, 15460 KiB  
Article
Analysis of Chaotic Behavior in a Novel Extended Love Model Considering Positive and Negative External Environment
by Linyun Huang and Youngchul Bae
Entropy 2018, 20(5), 365; https://doi.org/10.3390/e20050365 - 14 May 2018
Cited by 8 | Viewed by 4749
Abstract
The aim of this study was to describe a novel extended dynamical love model incorporating external environments into the love story of Romeo and Juliet. We used a sinusoidal function as the external environment, as it can represent the positive and negative characteristics [...] Read more.
The aim of this study was to describe a novel extended dynamical love model incorporating external environments into the love story of Romeo and Juliet. We used a sinusoidal function as the external environment, as it can represent the positive and negative characteristics of humans. We considered positive and negative advice from a third person. First, we applied equal amounts of positive and negative advice. Second, the amount of positive advice was greater than that of negative advice. Third, the amount of positive advice was smaller than that of negative advice. To verify the chaotic phenomena in the proposed extended dynamic love affair with external environments, we used time series, phase portraits, power spectra, Poincaré maps, bifurcation diagrams, and the maximal Lyapunov exponent. By varying the parameter “a”, we found that the novel extended dynamic love affair exhibits chaotic behavior in all three external-environment situations. As the parameter “a” varied, we observed period-1, period-2, and period-4 motion, a Rössler-type attractor, and a chaotic attractor under the following conditions: the amount of positive advice equal to, greater than, and smaller than the amount of negative advice. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
18 pages, 1403 KiB  
Article
Thermoelectric Efficiency of a Topological Nano-Junction
by Manuel Álamo and Enrique Muñoz
Entropy 2018, 20(5), 366; https://doi.org/10.3390/e20050366 - 14 May 2018
Cited by 2 | Viewed by 3203
Abstract
We studied the non-equilibrium current, transport coefficients and thermoelectric performance of a nano-junction composed of a quantum dot connected to a normal superconductor lead and a topological superconductor lead. We considered a one-dimensional topological superconductor, which hosts two Majorana fermion states at its [...] Read more.
We studied the non-equilibrium current, transport coefficients and thermoelectric performance of a nano-junction composed of a quantum dot connected to a normal superconductor lead and a topological superconductor lead. We considered a one-dimensional topological superconductor, which hosts two Majorana fermion states at its edges. Our results show that the electric and thermal currents across the junction are strongly mediated by multiple Andreev reflections between the quantum dot and the leads, leading to a strongly nonlinear dependence of the current on the applied bias voltage. Remarkably, we find that the system reaches a sharp maximum of its thermoelectric efficiency at a finite bias when an external magnetic field is imposed on the junction. We propose that this feature can be used for accurate temperature sensing at the nanoscale. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)

11 pages, 521 KiB  
Article
Feynman Paths and Weak Values
by Robert Flack and Basil J. Hiley
Entropy 2018, 20(5), 367; https://doi.org/10.3390/e20050367 - 14 May 2018
Cited by 16 | Viewed by 5741
Abstract
There has been a recent revival of interest in the notion of a ‘trajectory’ of a quantum particle. In this paper, we detail the relationship between Dirac’s ideas, Feynman paths and the Bohm approach. The key to the relationship is the weak value [...] Read more.
There has been a recent revival of interest in the notion of a ‘trajectory’ of a quantum particle. In this paper, we detail the relationship between Dirac’s ideas, Feynman paths and the Bohm approach. The key to the relationship is the weak value of the momentum which Feynman calls a transition probability amplitude. With this identification we are able to conclude that a Bohm ‘trajectory’ is the average of an ensemble of actual individual stochastic Feynman paths. This implies that they can be interpreted as the mean momentum flow of a set of individual quantum processes and not the path of an individual particle. This enables us to give a clearer account of the experimental two-slit results of Kocsis et al. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

10 pages, 502 KiB  
Article
Writing, Proofreading and Editing in Information Theory
by J. Ricardo Arias-Gonzalez
Entropy 2018, 20(5), 368; https://doi.org/10.3390/e20050368 - 15 May 2018
Cited by 2 | Viewed by 5262
Abstract
Information is a physical entity amenable to description by an abstract theory. The concepts associated with the creation and post-processing of information have not, however, been mathematically established, despite being broadly used in many fields of knowledge. Here, inspired by how [...] Read more.
Information is a physical entity amenable to description by an abstract theory. The concepts associated with the creation and post-processing of information have not, however, been mathematically established, despite being broadly used in many fields of knowledge. Here, inspired by how information is managed in biomolecular systems, we introduce writing, entailing any bit-string generation, and revision, comprising proofreading and editing, in information chains. Our formalism expands the thermodynamic analysis of stochastic chains made up of material subunits to abstract strings of symbols. We introduce a non-Markovian treatment of operational rules over the symbols of the chain that parallels the physical interactions responsible for memory effects in material chains. Our theory underlies any communication system, ranging from human languages and computer science to gene evolution. Full article
(This article belongs to the Special Issue Thermodynamics of Information Processing)

9 pages, 2326 KiB  
Article
The Role of Entropy in Estimating Financial Network Default Impact
by Michael Stutzer
Entropy 2018, 20(5), 369; https://doi.org/10.3390/e20050369 - 16 May 2018
Cited by 4 | Viewed by 2754
Abstract
Agents in financial networks can simultaneously be both creditors and debtors, creating the possibility that a default may cause a subsequent default cascade. Resolution of unpayable debts in these situations will have a distributional impact. Using a relative entropy-based measure of the distributional [...] Read more.
Agents in financial networks can simultaneously be both creditors and debtors, creating the possibility that a default may cause a subsequent default cascade. Resolution of unpayable debts in these situations will have a distributional impact. Using a relative entropy-based measure of the distributional impact of the subsequent default resolution process, it is argued that minimum mutual information estimation of unknown cells in the matrix of funds originally owed by the network participants to each other does not introduce systematic biases when estimating that impact. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

22 pages, 30998 KiB  
Article
A Survey of Viewpoint Selection Methods for Polygonal Models
by Xavier Bonaventura, Miquel Feixas, Mateu Sbert, Lewis Chuang and Christian Wallraven
Entropy 2018, 20(5), 370; https://doi.org/10.3390/e20050370 - 16 May 2018
Cited by 29 | Viewed by 6802
Abstract
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now maturing, with applications in fields such as scene navigation, scientific visualization, object recognition, mesh simplification, and camera placement. In this survey, we review and compare [...] Read more.
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now maturing, with applications in fields such as scene navigation, scientific visualization, object recognition, mesh simplification, and camera placement. In this survey, we review and compare twenty-two measures to select good views of a polygonal 3D model, classify them using an extension of the categories defined by Secord et al., and evaluate them against the Dutagaci et al. benchmark. Eleven of these measures have not been reviewed in previous surveys. Three out of the five short-listed best viewpoint measures are directly related to information. We also survey the fields in which the different viewpoint measures have been applied. Finally, we provide a publicly available framework where all the viewpoint selection measures are implemented and can be compared against each other. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)

14 pages, 687 KiB  
Article
Normal Laws for Two Entropy Estimators on Infinite Alphabets
by Chen Chen, Michael Grabchak, Ann Stewart, Jialin Zhang and Zhiyi Zhang
Entropy 2018, 20(5), 371; https://doi.org/10.3390/e20050371 - 17 May 2018
Cited by 5 | Viewed by 3029
Abstract
This paper offers sufficient conditions for the Miller–Madow estimator and the jackknife estimator of entropy to be asymptotically normal on countably infinite alphabets. Full article
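For concreteness, the two estimators studied in the paper can be sketched in a few lines of plain Python (an illustration of the standard definitions, not the authors' code; natural logarithms throughout):

```python
import math
from collections import Counter

def plugin_entropy(counts, n):
    """Plug-in (maximum likelihood) entropy from symbol counts."""
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def miller_madow(sample):
    """Plug-in entropy plus the (K-1)/(2n) bias correction,
    where K is the number of distinct observed symbols."""
    n = len(sample)
    counts = Counter(sample).values()
    return plugin_entropy(counts, n) + (len(counts) - 1) / (2 * n)

def jackknife(sample):
    """Jackknife entropy: n*H_n - ((n-1)/n) * sum of the n leave-one-out
    estimates, grouping the leave-one-out terms by the removed symbol."""
    n = len(sample)
    counts = Counter(sample)
    h_full = plugin_entropy(counts.values(), n)
    loo_sum = 0.0
    for sym, c in counts.items():
        reduced = [cc if s != sym else cc - 1 for s, cc in counts.items()]
        loo_sum += c * plugin_entropy(reduced, n - 1)
    return n * h_full - (n - 1) / n * loo_sum
```

On a large balanced binary sample both estimators are close to ln 2, the entropy of a fair coin.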

20 pages, 2304 KiB  
Article
Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies
by Aakanshi Gupta, Bharti Suri, Vijay Kumar, Sanjay Misra, Tomas Blažauskas and Robertas Damaševičius
Entropy 2018, 20(5), 372; https://doi.org/10.3390/e20050372 - 17 May 2018
Cited by 30 | Viewed by 5765
Abstract
The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can introduce bad smells that deteriorate the quality and reliability of [...] Read more.
The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can introduce bad smells that deteriorate the quality and reliability of software. The source code of open-source software is easily accessible to any developer and is thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined in information theory. The open-source project Apache Abdera is considered for calculating the bad smells. Bad smells are collected from subcomponents of the Apache Abdera project using a detection tool, and different entropy measures (Shannon, Rényi and Tsallis entropy) are computed. By applying non-linear regression techniques, the bad smells that may arise in future versions of the software are predicted from the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R², adjusted R², Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers. Full article
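The three entropy measures named in the title reduce to a few lines each. A generic sketch over a discrete probability vector (natural logarithms; the paper applies these to code-change data, which is not reproduced here):

```python
import math

def shannon(p):
    """Shannon entropy: -sum p_i log p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1);
    recovers Shannon entropy in the limit alpha -> 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy of index q (q != 1)."""
    return (1 - sum(pi ** q for pi in p)) / (q - 1)
```

For a uniform distribution over four outcomes, Shannon and Rényi entropies both equal ln 4, while the Tsallis entropy with q = 2 equals 0.75.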

18 pages, 1525 KiB  
Article
An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques
by Can Eyupoglu, Muhammed Ali Aydin, Abdul Halim Zaim and Ahmet Sertbas
Entropy 2018, 20(5), 373; https://doi.org/10.3390/e20050373 - 17 May 2018
Cited by 41 | Viewed by 6862
Abstract
The topic of big data has attracted increasing interest in recent years. The emergence of big data leads to new difficulties for the protection models used for data privacy, which is necessary for sharing and processing data. Protecting individuals’ sensitive information [...] Read more.
The topic of big data has attracted increasing interest in recent years. The emergence of big data leads to new difficulties for the protection models used for data privacy, which is necessary for sharing and processing data. Protecting individuals’ sensitive information while maintaining the usability of the published data set is the most important challenge in privacy preservation. In this regard, data anonymization methods are utilized to protect data against identity disclosure and linking attacks. In this study, a novel data anonymization algorithm based on chaos and perturbation is proposed for privacy and utility preservation in big data. The performance of the proposed algorithm is evaluated in terms of Kullback–Leibler divergence, probabilistic anonymity, classification accuracy, F-measure and execution time. The experimental results show that the proposed algorithm is efficient and performs better in terms of Kullback–Leibler divergence, classification accuracy and F-measure compared to most of the existing algorithms using the same data set. As it applies chaos to perturb data, the algorithm is promising for use in privacy-preserving data mining and data publishing. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
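The abstract does not spell out the perturbation step, but the general idea of chaos-driven numeric perturbation can be sketched as follows. This is an illustrative stand-in using the logistic map as the chaotic source, not the authors' algorithm; the function names and the 5% noise scale are arbitrary choices for the example:

```python
def logistic_stream(seed, r=3.99):
    """Deterministic chaotic sequence in (0, 1) from the logistic map."""
    x = seed
    while True:
        x = r * x * (1 - x)
        yield x

def perturb(values, seed=0.37, scale=0.05):
    """Multiply each value by (1 + small chaos-driven noise). The same seed
    reproduces the same perturbation, while nearby seeds diverge quickly."""
    stream = logistic_stream(seed)
    return [v * (1 + scale * (2 * next(stream) - 1)) for v in values]
```

The perturbed column stays within ±5% of the original values, which is the utility-preservation side of the privacy/utility trade-off the abstract describes.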

20 pages, 368 KiB  
Article
Robust Estimation for the Single Index Model Using Pseudodistances
by Aida Toma and Cristinca Fulga
Entropy 2018, 20(5), 374; https://doi.org/10.3390/e20050374 - 17 May 2018
Cited by 4 | Viewed by 3984
Abstract
For portfolios with a large number of assets, the single index model allows for expressing the large number of covariances between individual asset returns through a significantly smaller number of parameters. This avoids the constraint of having very large samples to estimate the [...] Read more.
For portfolios with a large number of assets, the single index model allows for expressing the large number of covariances between individual asset returns through a significantly smaller number of parameters. This avoids the constraint of having very large samples to estimate the mean and the covariance matrix of the asset returns, which would be practically unrealistic given the dynamics of market conditions. The traditional way to estimate the regression parameters in the single index model is the maximum likelihood method. Although the maximum likelihood estimators have desirable theoretical properties when the model is exactly satisfied, they may give completely erroneous results when outliers are present in the data set. In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We prove theoretical properties of the estimators, such as consistency, asymptotic normality, equivariance and robustness, and illustrate the benefits of the new portfolio optimization method for real financial data. Full article

21 pages, 9291 KiB  
Article
Transition of Transient Channel Flow with High Reynolds Number Ratios
by Akshat Mathur, Mehdi Seddighi and Shuisheng He
Entropy 2018, 20(5), 375; https://doi.org/10.3390/e20050375 - 17 May 2018
Cited by 5 | Viewed by 4309
Abstract
Large-eddy simulations of turbulent channel flow subjected to a step-like acceleration have been performed to investigate the effect of high Reynolds number ratios on the transient behaviour of turbulence. It is shown that the response of the flow exhibits the same fundamental characteristics [...] Read more.
Large-eddy simulations of turbulent channel flow subjected to a step-like acceleration have been performed to investigate the effect of high Reynolds number ratios on the transient behaviour of turbulence. It is shown that the response of the flow exhibits the same fundamental characteristics described in He & Seddighi (J. Fluid Mech., vol. 715, 2013, pp. 60–102 and vol. 764, 2015, pp. 395–427)—a three-stage response resembling that of the bypass transition of boundary layer flows. The features of transition are seen to become more striking as the Re-ratio increases—the elongated streaks become stronger and longer, and the initial turbulent spot sites at the onset of transition become increasingly sparse. The critical Reynolds number of transition and the transition period Reynolds number for those cases are shown to deviate from the trends of He & Seddighi (2015). The high Re-ratio cases show double peaks in the transient response of streamwise fluctuation profiles shortly after the onset of transition. Conditionally-averaged turbulent statistics based on a λ_2-criterion are used to show that the two peaks in the fluctuation profiles are due to separate contributions of the active and inactive regions of turbulence generation. The peak closer to the wall is attributed to the generation of “new” turbulence in the active region, whereas the peak farther away from the wall is attributed to the elongated streaks in the inactive region. In the low Re-ratio cases, the peaks of these two regions are close to each other during the entire transient, resulting in a single peak in the domain-averaged profile. Full article

20 pages, 683 KiB  
Article
Dynamics Analysis of a Nonlinear Stochastic SEIR Epidemic System with Varying Population Size
by Xiaofeng Han, Fei Li and Xinzhu Meng
Entropy 2018, 20(5), 376; https://doi.org/10.3390/e20050376 - 17 May 2018
Cited by 13 | Viewed by 3467
Abstract
This paper considers a stochastic susceptible-exposed-infectious-recovered (SEIR) epidemic model with varying population size and vaccination. We aim to study the global dynamics of the reduced nonlinear stochastic proportional differential system. We first investigate the existence and uniqueness of a global positive [...] Read more.
This paper considers a stochastic susceptible-exposed-infectious-recovered (SEIR) epidemic model with varying population size and vaccination. We aim to study the global dynamics of the reduced nonlinear stochastic proportional differential system. We first investigate the existence and uniqueness of a global positive solution of the stochastic system. Then, sufficient conditions for the extinction and permanence in mean of the infectious disease are obtained. Furthermore, we prove that the solution of the stochastic system has a unique ergodic stationary distribution under appropriate conditions. Finally, a discussion and numerical simulations are given to demonstrate the obtained results. Full article
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)

11 pages, 1489 KiB  
Article
Characterization of the Stroke-Induced Changes in the Variability and Complexity of Handgrip Force
by Pengzhi Zhu, Yuanyu Wu, Jingtao Liang, Yu Ye, Huihua Liu, Tiebin Yan and Rong Song
Entropy 2018, 20(5), 377; https://doi.org/10.3390/e20050377 - 17 May 2018
Cited by 3 | Viewed by 3099
Abstract
Introduction: The variability and complexity of handgrip forces in various modulations were investigated to identify post-stroke changes in force modulation, and extend our understanding of stroke-induced deficits. Methods: Eleven post-stroke subjects and ten age-matched controls performed voluntary grip force control tasks [...] Read more.
Introduction: The variability and complexity of handgrip forces in various modulations were investigated to identify post-stroke changes in force modulation, and extend our understanding of stroke-induced deficits. Methods: Eleven post-stroke subjects and ten age-matched controls performed voluntary grip force control tasks (power-grip tasks) at three contraction levels, and stationary dynamometer holding tasks (stationary holding tasks). Variability and complexity were described with root mean square jerk (RMS-jerk) and fuzzy approximate entropy (fApEn), respectively. Force magnitude, Fugl-Meyer upper extremity assessment and Wolf motor function test were also evaluated. Results: Comparing the affected side with the controls, fApEn was significantly decreased and RMS-jerk increased across the three levels in power-grip tasks, and fApEn was significantly decreased in stationary holding tasks. There were significant strong correlations between RMS-jerk and clinical scales in power-grip tasks. Discussion: Abnormal neuromuscular control, altered mechanical properties, and atrophic motoneurons could be the main causes of the differences in complexity and variability in post-stroke subjects. Full article
(This article belongs to the Section Complexity)

17 pages, 4782 KiB  
Article
Novel Bioinspired Approach Based on Chaotic Dynamics for Robot Patrolling Missions with Adversaries
by Daniel-Ioan Curiac, Ovidiu Banias, Constantin Volosencu and Christian-Daniel Curiac
Entropy 2018, 20(5), 378; https://doi.org/10.3390/e20050378 - 18 May 2018
Cited by 22 | Viewed by 4401
Abstract
Living organisms have developed and optimized ingenious defense strategies based on positional entropy. One of the most significant examples in this respect is known as protean behavior, where a prey animal under threat performs unpredictable zig-zag movements in order to confuse, delay or [...] Read more.
Living organisms have developed and optimized ingenious defense strategies based on positional entropy. One of the most significant examples in this respect is known as protean behavior, where a prey animal under threat performs unpredictable zig-zag movements in order to confuse, delay or escape the predator. This kind of defensive behavior can inspire efficient strategies for patrolling robots operating in the presence of adversaries. The main goal of our proposed bioinspired method is to implement protean behavior by altering the reference path of the robot with sudden and erratic direction changes without endangering the robot’s overall mission. In this way, a foe intending to target and destroy the mobile robot from a distance has less time to acquire and retain the proper sight alignment. The method uses the chaotic dynamics of the 2D Arnold’s cat map as a primary source of positional entropy and transfers this feature to every reference path segment using the kinematic relative motion concept. The effectiveness of this novel biologically inspired method is validated through extensive and realistic simulation case studies. Full article
(This article belongs to the Section Complexity)
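Arnold's cat map used as the entropy source in the abstract is a two-line, area-preserving chaotic map on the unit square. A minimal sketch with its exact inverse:

```python
def cat_map(x, y):
    """One step of Arnold's cat map: (x, y) -> (x + y mod 1, x + 2y mod 1)."""
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def cat_map_inverse(x, y):
    """Exact inverse of the cat map, so no information is lost by iteration."""
    return (2.0 * x - y) % 1.0, (y - x) % 1.0
```

Iterating the map from nearby points quickly decorrelates them, which is the unpredictability (positional-entropy) property the patrolling strategy exploits; how the map output is injected into each path segment is described in the paper itself.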

22 pages, 29306 KiB  
Article
Flow and Heat Transfer in the Tree-Like Branching Microchannel with/without Dimples
by Linqi Shui, Jianhui Sun, Feng Gao and Chunyan Zhang
Entropy 2018, 20(5), 379; https://doi.org/10.3390/e20050379 - 18 May 2018
Cited by 10 | Viewed by 4447
Abstract
This work presents a numerical and experimental investigation of the flow and heat transfer in tree-like branching microchannels and studies the effects of dimples on heat transfer enhancement. The numerical approach is validated by a smooth branching microchannel experiment. The verification result [...] Read more.
This work presents a numerical and experimental investigation of the flow and heat transfer in tree-like branching microchannels and studies the effects of dimples on heat transfer enhancement. The numerical approach is validated by a smooth branching microchannel experiment. The verification result shows that the SSG turbulence model provides a reasonable prediction. Thus, further research on the convective heat transfer in dimpled branching microchannels is conducted with the SSG turbulence model. The results indicate that the dimples can significantly improve the averaged heat transfer performance of branching microchannels, and the heat transfer increment of the branch segment increases with the branching level. However, the flow dead zones in some dimples at bifurcations and bends suppress the turbulent flow and heat transfer. Furthermore, the Nusselt number ratio (Nua/Nus) and the thermal enhancement factor (η) both decrease monotonically as the Reynolds number increases, while the friction factor ratio (fa/fs) changes nonlinearly. The entropy generation rates Ṡ_t and Ṡ_p in all dimpled cases are lower than those in the smooth case, and the dimpled case with streamwise spacing-to-diameter ratio s/D = 3 obtains the lowest augmentation entropy generation number (Ns) under high-Reynolds-number conditions. Nua/Nus, fa/fs, and η decline as s/D increases from 3 to 9; therefore, the dimpled case with s/D = 3 shows the best overall thermal performance. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)

28 pages, 1603 KiB  
Article
Adiabatic Quantum Computation Applied to Deep Learning Networks
by Jeremy Liu, Federico M. Spedalieri, Ke-Thia Yao, Thomas E. Potok, Catherine Schuman, Steven Young, Robert Patton, Garrett S. Rose and Gangotree Chamka
Entropy 2018, 20(5), 380; https://doi.org/10.3390/e20050380 - 18 May 2018
Cited by 17 | Viewed by 9827
Abstract
Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum devices allows reconsideration of complex topologies. We illustrate [...] Read more.
Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum devices allows reconsideration of complex topologies. We illustrate a particular network topology that can be trained to classify MNIST data (an image dataset of handwritten digits) and neutrino detection data using a restricted form of adiabatic quantum computation known as quantum annealing performed by a D-Wave processor. We provide a brief description of the hardware and how it solves Ising models, how we translate our data into the corresponding Ising models, and how we use available expanded topology options to explore potential performance improvements. Although we focus on the application of quantum annealing in this article, the work discussed here is just one of three approaches we explored as part of a larger project that considers alternative means for training deep learning networks. The other approaches involve using a high performance computing (HPC) environment to automatically find network topologies with good performance and using neuromorphic computing to find a low-power solution for training deep learning networks. Our results show that our quantum approach can find good network parameters in a reasonable time despite increased network topology complexity; that HPC can find good parameters for traditional, simplified network topologies; and that neuromorphic computers can use low power memristive hardware to represent complex topologies and parameters derived from other architecture choices. Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)

17 pages, 512 KiB  
Article
Observables and Unobservables in Quantum Mechanics: How the No-Hidden-Variables Theorems Support the Bohmian Particle Ontology
by Dustin Lazarovici, Andrea Oldofredi and Michael Esfeld
Entropy 2018, 20(5), 381; https://doi.org/10.3390/e20050381 - 18 May 2018
Cited by 14 | Viewed by 4851
Abstract
The paper argues that far from challenging—or even refuting—Bohm’s quantum theory, the no-hidden-variables theorems in fact support the Bohmian ontology for quantum mechanics. The reason is that (i) all measurements come down to position measurements; and (ii) Bohm’s theory provides a clear and [...] Read more.
The paper argues that far from challenging—or even refuting—Bohm’s quantum theory, the no-hidden-variables theorems in fact support the Bohmian ontology for quantum mechanics. The reason is that (i) all measurements come down to position measurements; and (ii) Bohm’s theory provides a clear and coherent explanation of the measurement outcome statistics based on an ontology of particle positions, a law for their evolution and a probability measure linked with that law. What the no-hidden-variables theorems teach us is that (i) one cannot infer the properties that the physical systems possess from observables; and that (ii) measurements, being an interaction like other interactions, change the state of the measured system. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

43 pages, 639 KiB  
Article
A Game-Theoretic Approach to Information-Flow Control via Protocol Composition
by Mário S. Alvim, Konstantinos Chatzikokolakis, Yusuke Kawamoto and Catuscia Palamidessi
Entropy 2018, 20(5), 382; https://doi.org/10.3390/e20050382 - 18 May 2018
Cited by 9 | Viewed by 3895
Abstract
In the inference attacks studied in Quantitative Information Flow (QIF), the attacker typically tries to interfere with the system in the attempt to increase its leakage of secret information. The defender, on the other hand, typically tries to decrease leakage by introducing some [...] Read more.
In the inference attacks studied in Quantitative Information Flow (QIF), the attacker typically tries to interfere with the system in the attempt to increase its leakage of secret information. The defender, on the other hand, typically tries to decrease leakage by introducing some controlled noise. This noise introduction can be modeled as a type of protocol composition, i.e., a probabilistic choice among different protocols, and its effect on the amount of leakage depends heavily on whether or not this choice is visible to the attacker. In this work, we consider operators for modeling visible and hidden choice in protocol composition, and we study their algebraic properties. We then formalize the interplay between defender and attacker in a game-theoretic framework adapted to the specific issues of QIF, where the payoff is information leakage. We consider various kinds of leakage games, depending on whether players act simultaneously or sequentially, and on whether or not the choices of the defender are visible to the attacker. In the case of sequential games, the choice of the second player is generally a function of the choice of the first player, and his/her probabilistic choice can be either over the possible functions (mixed strategy) or it can be on the result of the function (behavioral strategy). We show that when the attacker moves first in a sequential game with a hidden choice, then behavioral strategies are more advantageous for the defender than mixed strategies. This contrasts with the standard game theory, where the two types of strategies are equivalent. Finally, we establish a hierarchy of these games in terms of their information leakage and provide methods for finding optimal strategies (at the points of equilibrium) for both attacker and defender in the various cases. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)

32 pages, 1210 KiB  
Article
On f-Divergences: Integral Representations, Local Behavior, and Inequalities
by Igal Sason
Entropy 2018, 20(5), 383; https://doi.org/10.3390/e20050383 - 19 May 2018
Cited by 35 | Viewed by 4588
Abstract
This paper focuses on f-divergences and consists of three main contributions. The first introduces integral representations of a general f-divergence by means of the relative information spectrum. The second provides a new approach for the derivation of f-divergence [...] Read more.
This paper focuses on f-divergences and consists of three main contributions. The first introduces integral representations of a general f-divergence by means of the relative information spectrum. The second provides a new approach for the derivation of f-divergence inequalities and exemplifies their utility in the setup of Bayesian binary hypothesis testing. The last part further studies the local behavior of f-divergences. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
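An f-divergence is defined as D_f(P‖Q) = Σ_i q_i f(p_i/q_i) for a convex f with f(1) = 0; Kullback–Leibler divergence, total variation and χ² are the standard special cases. A minimal sketch (assuming strictly positive q):

```python
import math

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_i q_i * f(p_i / q_i), with f convex and f(1) = 0."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

def f_kl(t):        # f(t) = t log t   ->  Kullback-Leibler divergence
    return t * math.log(t) if t > 0 else 0.0

def f_tv(t):        # f(t) = |t - 1|/2 ->  total variation distance
    return 0.5 * abs(t - 1.0)

def f_chi2(t):      # f(t) = (t - 1)^2 ->  chi-squared divergence
    return (t - 1.0) ** 2
```

Every choice of f gives zero when P = Q, and for P = (1/2, 1/2), Q = (1/4, 3/4) the total variation distance is exactly 1/4.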

23 pages, 14490 KiB  
Article
Chaotic Attractors with Fractional Conformable Derivatives in the Liouville–Caputo Sense and Its Dynamical Behaviors
by Jesús Emmanuel Solís Pérez, José Francisco Gómez-Aguilar, Dumitru Baleanu and Fairouz Tchier
Entropy 2018, 20(5), 384; https://doi.org/10.3390/e20050384 - 20 May 2018
Cited by 42 | Viewed by 5295
Abstract
This paper deals with a numerical simulation of fractional conformable attractors of type Rabinovich–Fabrikant, Thomas’ cyclically symmetric attractor and Newton–Leipnik. Fractional conformable and β -conformable derivatives of Liouville–Caputo type are considered to solve the proposed systems. A numerical method based on the Adams–Moulton [...] Read more.
This paper deals with a numerical simulation of fractional conformable attractors of type Rabinovich–Fabrikant, Thomas’ cyclically symmetric attractor and Newton–Leipnik. Fractional conformable and β -conformable derivatives of Liouville–Caputo type are considered to solve the proposed systems. A numerical method based on the Adams–Moulton algorithm is employed to approximate the numerical simulations of the fractional-order conformable attractors. The results of the new type of fractional conformable and β -conformable attractors are provided to illustrate the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)

16 pages, 2526 KiB  
Article
MOTiFS: Monte Carlo Tree Search Based Feature Selection
by Muhammad Umar Chaudhry and Jee-Hyong Lee
Entropy 2018, 20(5), 385; https://doi.org/10.3390/e20050385 - 20 May 2018
Cited by 14 | Viewed by 5147
Abstract
Given the increasing size and complexity of the datasets needed to train machine learning algorithms, it is necessary to reduce the number of features required to achieve high classification accuracy. This paper presents a novel and efficient approach based on Monte Carlo Tree Search (MCTS) to search the feature space for the optimal feature subset. The algorithm finds the best feature subset by combining the benefits of tree search with random sampling. Starting from an empty root node, the tree is incrementally built by adding nodes representing the inclusion or exclusion of features. Every iteration yields a feature subset by following the tree and default policies. The accuracy of the classifier on that feature subset is used as the reward and is propagated backwards to update the tree. Finally, the subset with the highest reward is chosen as the best feature subset. The efficiency and effectiveness of the proposed method are validated through experiments on many benchmark datasets. The results are also compared with significant methods in the literature, demonstrating the superiority of the proposed method. Full article
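The search loop the abstract describes (tree policy, expansion, random default policy, backpropagation of the reward) can be sketched as follows. This is a simplified illustration with a toy reward standing in for classifier accuracy; the function and variable names are not from the paper:

```python
import math, random

def mcts_feature_search(n_features, reward_fn, n_iter=300, c=1.4, seed=0):
    """Simplified MCTS over include/exclude decisions for each feature.

    A node at depth d fixes the decisions for features 0..d-1; the default
    policy chooses the remaining bits at random. Returns the best subset
    found (as a tuple of 0/1 flags) and its reward.
    """
    rng = random.Random(seed)
    stats = {}  # partial decision tuple -> [visits, total_reward]

    def ucb(parent, child):
        n_p = stats[parent][0]
        n_c, w_c = stats[child]
        return w_c / n_c + c * math.sqrt(math.log(n_p) / n_c)

    best, best_r = None, -1.0
    for _ in range(n_iter):
        node = ()
        stats.setdefault(node, [0, 0.0])
        # Tree policy: descend while both children already exist.
        while len(node) < n_features and all((node + (b,)) in stats for b in (0, 1)):
            node = max((node + (b,) for b in (0, 1)), key=lambda ch: ucb(node, ch))
        # Expansion: add one unexplored child.
        if len(node) < n_features:
            b = rng.choice([bb for bb in (0, 1) if (node + (bb,)) not in stats])
            node = node + (b,)
            stats[node] = [0, 0.0]
        # Default policy: complete the subset at random, then evaluate.
        subset = node + tuple(rng.randint(0, 1) for _ in range(n_features - len(node)))
        r = reward_fn(subset)
        if r > best_r:
            best, best_r = subset, r
        # Backpropagation along the visited prefix.
        for d in range(len(node) + 1):
            stats[node[:d]][0] += 1
            stats[node[:d]][1] += r

    return best, best_r

# Toy reward: agreement with a known-relevant mask (stand-in for classifier accuracy).
relevant = (1, 0, 1, 1, 0, 0)
reward = lambda s: sum(a == b for a, b in zip(s, relevant)) / len(relevant)
print(mcts_feature_search(6, reward))
```

In the paper's setting, `reward_fn` would train and score a classifier on the candidate subset, which is where the computational cost lies.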
16 pages, 1309 KiB  
Article
Identification of Auditory Object-Specific Attention from Single-Trial Electroencephalogram Signals via Entropy Measures and Machine Learning
by Yun Lu, Mingjiang Wang, Qiquan Zhang and Yufei Han
Entropy 2018, 20(5), 386; https://doi.org/10.3390/e20050386 - 21 May 2018
Cited by 12 | Viewed by 4001
Abstract
Existing research has revealed that auditory attention can be tracked from ongoing electroencephalography (EEG) signals. The aim of this study was to investigate the identification of people's attention to a specific auditory object from single-trial EEG signals via entropy measures and machine learning. Approximate entropy (ApEn), sample entropy (SampEn), composite multiscale entropy (CmpMSE) and fuzzy entropy (FuzzyEn) were used to extract informative features from EEG signals under three kinds of auditory object-specific attention (Rest, Auditory Object1 Attention (AOA1) and Auditory Object2 Attention (AOA2)). Linear discriminant analysis and a support vector machine (SVM) were used to construct two auditory attention classifiers. The statistical results of the entropy measures indicated significant differences in the values of ApEn, SampEn, CmpMSE and FuzzyEn between Rest, AOA1 and AOA2. For the SVM-based classifier, the auditory object-specific attention states of Rest, AOA1 and AOA2 could be identified from EEG signals using ApEn, SampEn, CmpMSE and FuzzyEn as features, and the identification rates were significantly different from chance level. The optimal identification was achieved by the SVM-based classifier using CmpMSE with the scale factor τ = 10. This study demonstrates a novel way to identify auditory object-specific attention from single-trial EEG signals without needing access to the auditory stimulus. Full article
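Of the entropy measures listed, sample entropy is the simplest to sketch: count template-vector pairs of length m and of length m + 1 that match within a tolerance r, and take the negative log of their ratio. A minimal illustration (not the authors' implementation; defaults are the common choices m = 2, r = 0.2·std):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts template-vector
    pairs of length m within Chebyshev distance r, and A those of length m+1."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        hits = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            hits += int(np.sum(d <= r))
        return hits
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # predictable signal
noisy = rng.standard_normal(500)                   # irregular signal
print(sample_entropy(regular), sample_entropy(noisy))  # regular scores lower
```

Lower values indicate more regular, self-similar signals, which is why these measures can discriminate between attentional states in EEG.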
14 pages, 4733 KiB  
Article
Research on Weak Fault Extraction Method for Alleviating the Mode Mixing of LMD
by Lin Zhang, Zhijian Wang and Long Quan
Entropy 2018, 20(5), 387; https://doi.org/10.3390/e20050387 - 21 May 2018
Cited by 18 | Viewed by 3694
Abstract
Compared with the strong background noise, the energy entropy of the early fault signals of bearings is weak under actual working conditions. Extracting bearings' early fault features has therefore always been a major difficulty in the fault diagnosis of rotating machinery. To address this problem, the masking method is introduced into the Local Mean Decomposition (LMD) process, and a weak fault extraction method based on LMD and the mask signal (MS) is proposed. Owing to the mode mixing of the product function (PF) components decomposed by LMD against a noisy background, it is difficult to distinguish the authenticity of the fault frequency. The MS method is therefore applied to those PF components decomposed by LMD that are strongly correlated with the original signal, so as to suppress mode mixing and extract the fault frequencies. In this paper, an actual fault signal of a rolling bearing is analyzed. By combining the MS method with the LMD method, the fault signal mixed with noise is processed. The kurtosis value at the fault frequency is increased eightfold, and the signal-to-noise ratio (SNR) is increased by 19.1%. The fault signal is successfully extracted by the proposed composite method. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
19 pages, 9842 KiB  
Article
Teager Energy Entropy Ratio of Wavelet Packet Transform and Its Application in Bearing Fault Diagnosis
by Shuting Wan and Xiong Zhang
Entropy 2018, 20(5), 388; https://doi.org/10.3390/e20050388 - 21 May 2018
Cited by 40 | Viewed by 4291
Abstract
The kurtogram can adaptively select the resonant frequency band, from which the characteristic fault frequency can then be obtained. However, the kurtogram is easily affected by random impulses and noise. In recent years, improvements to the kurtogram have concentrated on two aspects: (a) the decomposition method for the frequency band; and (b) the selection index for the optimal frequency band. In this article, a new method called the Teager Energy Entropy Ratio Gram (TEERgram) is proposed. The TEERgram algorithm takes the wavelet packet transform (WPT) as the frequency-band decomposition method, which can adaptively segment the frequency band and control the noise. At the same time, the Teager Energy Entropy Ratio (TEER) is proposed as the computing index for the wavelet packet subbands. WPT has better decomposition properties than the traditional finite impulse response (FIR) filtering and Fourier decomposition used in the kurtogram algorithm, and TEER performs better than the envelope spectrum or even the squared envelope spectrum. The TEERgram method can therefore accurately identify the resonant frequency band under strong background noise. The effectiveness of the proposed method is verified by simulation and experimental analysis. Full article
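At the core of the proposed index is the Teager–Kaiser energy operator, ψ[n] = x[n]² − x[n−1]·x[n+1], which for a pure tone A·cos(ωn + φ) equals the constant A²·sin²ω, so it tracks both amplitude and frequency. A minimal sketch of the operator alone (the paper combines it with WPT subbands and an entropy ratio, which this does not reproduce):

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(omega*n + phi), the operator is exactly the constant
# A^2 * sin(omega)^2, jointly sensitive to amplitude and frequency.
n = np.arange(1000)
tone = 2.0 * np.cos(0.1 * n)
psi = teager_energy(tone)
print(psi.mean())  # ≈ 4 * sin(0.1)^2 ≈ 0.0399
```

Its sensitivity to instantaneous amplitude and frequency is what makes it useful for highlighting impulsive fault signatures within a subband.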
17 pages, 2532 KiB  
Article
Identification of Pulmonary Hypertension Using Entropy Measure Analysis of Heart Sound Signal
by Hong Tang, Yuanlin Jiang, Ting Li and Xinpei Wang
Entropy 2018, 20(5), 389; https://doi.org/10.3390/e20050389 - 21 May 2018
Cited by 10 | Viewed by 3281
Abstract
This study introduced entropy measures to analyze the heart sound signals of people with and without pulmonary hypertension (PH). The lead II electrocardiography (ECG) signal and the heart sound signal were simultaneously collected from 104 subjects aged between 22 and 89; fifty of them were PH patients and fifty-four were healthy. Eleven heart sound features were extracted, and three entropy measures, namely sample entropy (SampEn), fuzzy entropy (FuzzyEn) and fuzzy measure entropy (FuzzyMEn), of the feature sequences were calculated. The Mann–Whitney U test was used to assess the significance of each feature between the patient and healthy groups. To reduce the confounding factor of age, nine entropy measures were selected based on correlation analysis. Further, the probability density function (pdf) of each selected entropy measure was constructed for both groups by kernel density estimation, as were the joint pdfs of any two and of multiple selected entropy measures. A patient or a healthy subject can therefore be classified by his or her entropy measure probability based on Bayes' decision rule. The results showed that the best identification performance using a single selected measure had a sensitivity of 0.720 and a specificity of 0.648. The performance improved to a sensitivity of 0.680 and specificity of 0.796 with the joint pdf of two measures, and to 0.740 and 0.870 with the joint pdf of multiple measures. This study shows that entropy measures could be a powerful tool for the early screening of PH patients. Full article
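The classification step the abstract describes, kernel density estimation of each group's pdf followed by Bayes' decision rule, can be sketched as follows on synthetic data (the distributions, bandwidth and group means are illustrative, not taken from the study):

```python
import numpy as np

def gaussian_kde_pdf(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimate built from training samples."""
    samples = np.asarray(samples, float)
    def pdf(x):
        z = (x - samples[:, None]) / bandwidth
        return np.mean(np.exp(-0.5 * z ** 2), axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return pdf

def classify(x, pdf_patient, pdf_healthy, prior_patient=0.5):
    """Bayes' decision rule: pick the class with the larger posterior."""
    post_p = pdf_patient(np.atleast_1d(x)) * prior_patient
    post_h = pdf_healthy(np.atleast_1d(x)) * (1 - prior_patient)
    return np.where(post_p > post_h, "patient", "healthy")

rng = np.random.default_rng(1)
# Synthetic entropy measures: patients lower, healthy higher (illustrative only).
patients = rng.normal(0.8, 0.1, 50)
healthy = rng.normal(1.2, 0.1, 54)
pdf_p = gaussian_kde_pdf(patients, 0.05)
pdf_h = gaussian_kde_pdf(healthy, 0.05)
print(classify([0.75, 1.25], pdf_p, pdf_h))
```

The joint-pdf variants in the paper extend the same idea to two- and higher-dimensional kernel density estimates over several entropy measures at once.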
12 pages, 333 KiB  
Article
End-to-End Deep Neural Networks and Transfer Learning for Automatic Analysis of Nation-State Malware
by Ishai Rosenberg, Guillaume Sicard and Eli (Omid) David
Entropy 2018, 20(5), 390; https://doi.org/10.3390/e20050390 - 22 May 2018
Cited by 14 | Viewed by 6374
Abstract
Malware allegedly developed by nation-states, also known as advanced persistent threats (APTs), is becoming more common. The task of attributing an APT to a specific nation-state, or of classifying it into the correct APT family, is challenging for several reasons. First, each nation-state has more than a single cyber unit developing such malware, rendering traditional authorship attribution algorithms useless. Furthermore, the dataset of available APTs is still extremely small. Finally, such APTs use state-of-the-art evasion techniques, making feature extraction challenging. In this paper, we use a deep neural network (DNN) as a classifier for nation-state APT attribution. We record the dynamic behavior of an APT when run in a sandbox and use it as raw input for the neural network, allowing the DNN to learn high-level feature abstractions of the APTs themselves. We also use the same raw features for APT family classification. Finally, we use the feature abstractions learned by the APT family classifier to solve the attribution problem. Using a test set of 1000 Chinese- and Russian-developed APTs, we achieved an accuracy rate of 98.6%. Full article
9 pages, 405 KiB  
Article
Maxwell’s Demon and the Problem of Observers in General Relativity
by Luis Herrera
Entropy 2018, 20(5), 391; https://doi.org/10.3390/e20050391 - 22 May 2018
Cited by 3 | Viewed by 3458
Abstract
The fact that real dissipative (entropy-producing) processes may be detected by non-comoving (tilted) observers in general relativity, in systems that appear isentropic to comoving observers, is explained in terms of information theory, in analogy with the resolution of the Maxwell's demon paradox. Full article
Review

Jump to: Editorial, Research, Other

12 pages, 294 KiB  
Review
ϕ-Divergence in Contingency Table Analysis
by Maria Kateri
Entropy 2018, 20(5), 324; https://doi.org/10.3390/e20050324 - 27 Apr 2018
Cited by 12 | Viewed by 3029
Abstract
The ϕ-divergence association models for two-way contingency tables form a family of models that includes the association and correlation models as special cases. We present this family of models, discussing its features and demonstrating the role of ϕ-divergence in building it. The most parsimonious member of this family, the model of ϕ-scaled uniform local association, is considered in detail. Its implementation is discussed and representative examples are commented on. Full article
23 pages, 5735 KiB  
Review
Levitated Nanoparticles for Microscopic Thermodynamics—A Review
by Jan Gieseler and James Millen
Entropy 2018, 20(5), 326; https://doi.org/10.3390/e20050326 - 28 Apr 2018
Cited by 69 | Viewed by 9440
Abstract
Levitated nanoparticles have received much attention for their potential to perform quantum-mechanical experiments even at room temperature. However, even in the regime where the particle dynamics are purely classical, there is much interesting physics to explore. Here we review the application of levitated nanoparticles as a new experimental platform for exploring stochastic thermodynamics in small systems. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Other

5 pages, 216 KiB  
Reply
Maximum Entropy Theory of Ecology: A Reply to Harte
by Marco Favretti
Entropy 2018, 20(5), 308; https://doi.org/10.3390/e20050308 - 24 Apr 2018
Cited by 8 | Viewed by 3659
Abstract
In a paper published in this journal, I addressed the following problem: under which conditions will two scientists, observing the same system and sharing the same initial information, reach the same probabilistic description upon applying the Maximum Entropy inference principle (MaxEnt), independently of the probability distribution chosen to set up the MaxEnt procedure? This is a minimal objectivity requirement that is generally demanded of scientific investigation. In the same paper, I applied the findings to a critical examination of the application of MaxEnt made in Harte's Maximum Entropy Theory of Ecology (METE). Prof. Harte published a comment on my paper, and this is my reply. For the sake of readers who may be unaware of the content of those papers, I have tried to make this reply self-contained and to skip technical details. However, I invite the interested reader to consult the previously published papers. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)