Table of Contents

Entropy, Volume 21, Issue 7 (July 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Refined Multiscale Entropy Using Fuzzy Metrics: Validation and Application to Nociception Assessment
Entropy 2019, 21(7), 706; https://doi.org/10.3390/e21070706
Received: 2 July 2019 / Revised: 15 July 2019 / Accepted: 16 July 2019 / Published: 18 July 2019
PDF Full-text (1487 KB)
Abstract
The refined multiscale entropy (RMSE) approach is commonly applied to assess complexity as a function of the time scale. RMSE is normally based on the computation of sample entropy (SampEn) estimating complexity as conditional entropy. However, SampEn is dependent on the length and standard deviation of the data. Recently, fuzzy entropy (FuzEn) has been proposed, including several refinements, as an alternative to counteract these limitations. In this work, FuzEn, translated FuzEn (TFuzEn), translated-reflected FuzEn (TRFuzEn), inherent FuzEn (IFuzEn), and inherent translated FuzEn (ITFuzEn) were exploited as entropy-based measures in the computation of RMSE and their performance was compared to that of SampEn. FuzEn metrics were applied to synthetic time series of different lengths to evaluate the consistency of the different approaches. In addition, electroencephalograms of patients under a sedation-analgesia procedure were analyzed based on the patient’s response after the application of painful stimulation, such as nail bed compression or endoscopy tube insertion. Significant differences in FuzEn metrics were observed over simulations and real data as a function of the data length and the pain responses. Findings indicated that FuzEn, when exploited in RMSE applications, showed similar behavior to SampEn in long series, but its consistency was better than that of SampEn in short series, both over simulations and real data. Conversely, its variants should be utilized with more caution, especially when processes exhibit an important deterministic component and/or in nociception prediction at long scales. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
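
As a quick illustration of the estimators being compared, here is a minimal sketch (not the authors' implementation; parameters m, r and the fuzzy exponent n are illustrative defaults) in which the hard Chebyshev-distance threshold of sample entropy is replaced by an exponential membership function to obtain a fuzzy variant:

```python
import numpy as np

def _match_rate(x, m, r, fuzzy=False, n=2):
    N = len(x)
    T = np.array([x[i:i + m] for i in range(N - m)])   # embedding vectors
    total = 0.0
    for i in range(len(T)):
        d = np.max(np.abs(T - T[i]), axis=1)           # Chebyshev distances
        d = np.delete(d, i)                            # exclude self-match
        total += np.sum(np.exp(-d ** n / r)) if fuzzy else np.sum(d <= r)
    return total / (len(T) * (len(T) - 1))

def entropy_estimate(x, m=2, r=0.2, fuzzy=False):
    r = r * np.std(x)                                  # tolerance relative to SD
    return -np.log(_match_rate(x, m + 1, r, fuzzy) / _match_rate(x, m, r, fuzzy))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(entropy_estimate(x), entropy_estimate(x, fuzzy=True))
```

The fuzzy membership degrades smoothly with distance rather than switching off at the threshold, which is the mechanism behind the more stable estimates on short series reported above.
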
Open Access Article
Quantum Features of Macroscopic Fields: Entropy and Dynamics
Entropy 2019, 21(7), 705; https://doi.org/10.3390/e21070705
Received: 12 April 2019 / Revised: 12 May 2019 / Accepted: 17 July 2019 / Published: 18 July 2019
PDF Full-text (251 KB)
Abstract
Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation and random scattering of waves by the environment. The proposed reduced state of the field combines the averaged field with the two-point correlation function, called the single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing the irreversible evolution of a bosonic quantum field, and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena and allows unifying the Mueller and Jones calculi in polarization optics. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
Open Access Article
Thermodynamics and Stability of Non-Equilibrium Steady States in Open Systems
Entropy 2019, 21(7), 704; https://doi.org/10.3390/e21070704
Received: 7 June 2019 / Revised: 12 July 2019 / Accepted: 17 July 2019 / Published: 18 July 2019
PDF Full-text (511 KB) | HTML Full-text | XML Full-text
Abstract
Thermodynamical arguments are known to be useful in the construction of physically motivated Lyapunov functionals for nonlinear stability analysis of spatially homogeneous equilibrium states in thermodynamically isolated systems. Unfortunately, the limitation to isolated systems is essential, and standard arguments are not applicable even for some very simple thermodynamically open systems. On the other hand, the nonlinear stability of thermodynamically open systems is usually investigated using the so-called energy method. The mathematical quantity that is referred to as the “energy” is, however, in most cases not linked to the energy in the physical sense of the word. Consequently, it would seem that genuine thermodynamical concepts are of no use in the nonlinear stability analysis of thermodynamically open systems. We show that this is not the case. In particular, we propose a construction that in the case of a simple heat conduction problem leads to a physically well-motivated Lyapunov-type functional, which effectively replaces the artificial Lyapunov functional used in the standard energy method. The proposed construction seems to be general enough to be applied in complex thermomechanical settings. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)

Open Access Article
Information Geometrical Characterization of Quantum Statistical Models in Quantum Estimation Theory
Entropy 2019, 21(7), 703; https://doi.org/10.3390/e21070703
Received: 10 June 2019 / Revised: 5 July 2019 / Accepted: 16 July 2019 / Published: 18 July 2019
PDF Full-text (300 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we classify quantum statistical models based on their information geometric properties and the estimation error bound, known as the Holevo bound, into four different classes: classical, quasi-classical, D-invariant, and asymptotically classical models. We then characterize each model by several equivalent conditions and discuss their properties. This result enables us to explore the relationships among these four classes of models and provides a geometrical understanding of quantum statistical models. In particular, we show that each class of model can be identified by comparing quantum Fisher metrics and the properties of the tangent spaces of the quantum statistical model. Full article
(This article belongs to the Section Quantum Information)

Open Access Article
Model Description of Similarity-Based Recommendation Systems
Entropy 2019, 21(7), 702; https://doi.org/10.3390/e21070702
Received: 4 June 2019 / Revised: 30 June 2019 / Accepted: 11 July 2019 / Published: 17 July 2019
PDF Full-text (336 KB) | HTML Full-text | XML Full-text
Abstract
The quality of online services highly depends on the accuracy of the recommendations they can provide to users. Researchers have proposed various similarity measures based on the assumption that similar people like or dislike similar items or people, in order to improve the accuracy of their services. Additionally, statistical models, such as the stochastic block models, have been used to understand network structures. In this paper, we discuss the relationship between similarity-based methods and statistical models using the Bernoulli mixture models and the expectation-maximization (EM) algorithm. The Bernoulli mixture model naturally leads to a completely positive matrix as the similarity matrix. We prove that most of the commonly used similarity measures yield completely positive matrices as the similarity matrix. Based on this relationship, we propose an algorithm to transform the similarity matrix into the Bernoulli mixture model. Such a correspondence provides a statistical interpretation to similarity-based methods. Using this algorithm, we conduct numerical experiments using synthetic data and real-world data provided by an online dating site, and report the efficiency of the recommendation system based on the Bernoulli mixture models. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
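
For readers unfamiliar with the model, a minimal EM sketch for a Bernoulli mixture on a binary interaction matrix follows (illustrative variable names; this is not the paper's similarity-matrix transformation algorithm):

```python
import numpy as np

def em_bernoulli_mixture(X, K, n_iter=100, eps=1e-9):
    n, d = X.shape
    rng = np.random.default_rng(0)
    pi = np.full(K, 1.0 / K)                       # mixing weights
    theta = rng.uniform(0.25, 0.75, size=(K, d))   # per-component Bernoulli params
    for _ in range(n_iter):
        # E-step: responsibilities from component log-likelihoods
        log_p = (X @ np.log(theta + eps).T
                 + (1 - X) @ np.log(1 - theta + eps).T
                 + np.log(pi + eps))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and per-component Bernoulli means
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = (resp.T @ X) / nk[:, None]
    return pi, theta, resp

X = (np.random.default_rng(1).random((200, 30)) < 0.4).astype(float)
pi, theta, resp = em_bernoulli_mixture(X, K=3)
```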

Open Access Article
A Security Enhanced Encryption Scheme and Evaluation of Its Cryptographic Security
Entropy 2019, 21(7), 701; https://doi.org/10.3390/e21070701
Received: 18 June 2019 / Revised: 7 July 2019 / Accepted: 15 July 2019 / Published: 17 July 2019
PDF Full-text (615 KB) | HTML Full-text | XML Full-text
Abstract
An approach for security enhancement of a class of encryption schemes is pointed out and its security is analyzed. The approach is based on certain results of coding and information theory regarding communication channels with erasures and deletion errors. In the security enhanced encryption scheme, the wiretapper faces a problem of cryptanalysis after a communication channel with bit deletions, and a legitimate party faces a problem of decryption after a channel with bit erasures. This paper proposes the encryption-decryption paradigm for the security enhancement of lightweight block ciphers based on dedicated error-correction coding and a simulator of the deletion channel controlled by the secret key. The security enhancement is analyzed in terms of the related probabilities, equivocation, mutual information and channel capacity. The cryptographic evaluation of the enhanced encryption includes employment of certain recent results regarding the upper bounds on the capacity of channels with deletion errors. It is shown that the probability of correct classification, which determines the cryptographic security, depends on the deletion channel capacity, i.e., the equivocation after this channel, and the number of codewords in the employed error-correction coding scheme. Consequently, assuming that the basic encryption scheme has a certain security level, it is shown that the security enhancement factor is a function of the deletion rate and the dimension of the vectors subject to error-correction encoding, i.e., the dimension of the encryption block. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)

Open Access Article
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700
Received: 30 May 2019 / Revised: 7 July 2019 / Accepted: 11 July 2019 / Published: 16 July 2019
PDF Full-text (351 KB)
Abstract
In this paper, we propose a rateless codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from radio frequency (RF) signals of the source and the interference sources to generate jamming noises on the eavesdropper. The data transmission terminates as soon as the destination can receive a sufficient number of the encoded packets for decoding the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper. The combination of the TAS and harvest-to-jam techniques provides security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is to derive exact closed-form expressions of outage probability (OP), probability of successful and secure communication (SS), intercept probability (IP) and average number of time slots used by the source over a Rayleigh fading channel under the joint impact of co-channel interference and hardware impairments. Then, Monte Carlo simulations are presented to verify the theoretical results. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
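
The flavor of the Monte Carlo verification can be illustrated on a stripped-down single-link Rayleigh case (this omits the paper's TAS, jamming, interference and hardware-impairment terms; the rate and SNR values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
snr_db, R_t, trials = 10.0, 1.0, 10**6
snr = 10 ** (snr_db / 10)

# Rayleigh fading: complex Gaussian channel, |h|^2 ~ Exp(1)
h = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
gain = np.abs(h) ** 2
op_sim = np.mean(np.log2(1 + snr * gain) < R_t)     # outage: rate below target

# Closed form for this toy single-link case: OP = 1 - exp(-(2^R_t - 1)/snr)
op_exact = 1 - np.exp(-(2 ** R_t - 1) / snr)
print(op_sim, op_exact)
```
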
Open Access Article
Multivariate Pointwise Information-Driven Data Sampling and Visualization
Entropy 2019, 21(7), 699; https://doi.org/10.3390/e21070699
Received: 8 May 2019 / Revised: 25 June 2019 / Accepted: 6 July 2019 / Published: 16 July 2019
PDF Full-text (25441 KB) | HTML Full-text | XML Full-text
Abstract
With increasing computing capabilities of modern supercomputers, the size of the data generated from the scientific simulations is growing rapidly. As a result, application scientists need effective data summarization techniques that can reduce large-scale multivariate spatiotemporal data sets while preserving the important data properties so that the reduced data can answer domain-specific queries involving multiple variables with sufficient accuracy. While analyzing complex scientific events, domain experts often analyze and visualize two or more variables together to obtain a better understanding of the characteristics of the data features. Therefore, data summarization techniques are required to analyze multi-variable relationships in detail and then perform data reduction such that the important features involving multiple variables are preserved in the reduced data. To achieve this, in this work, we propose a data sub-sampling algorithm for performing statistical data summarization that leverages pointwise information theoretic measures to quantify the statistical association of data points considering multiple variables, and generates sub-sampled data that preserve the statistical association among the variables. Using such reduced data, we show that multivariate feature query and analysis can be done effectively. The efficacy of the proposed multivariate-association-driven sampling algorithm is demonstrated by applying it to several scientific data sets. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)
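
A minimal sketch of the pointwise-information idea for two variables, assuming a simple histogram estimator (the bin count and retention quantile are illustrative; the paper's method generalizes to more variables):

```python
import numpy as np

def pointwise_mi(u, v, nbins=32):
    eps = 1e-12
    joint, ue, ve = np.histogram2d(u, v, bins=nbins)
    joint = joint / joint.sum()
    pu, pv = joint.sum(axis=1), joint.sum(axis=0)   # marginals
    iu = np.clip(np.digitize(u, ue[1:-1]), 0, nbins - 1)
    iv = np.clip(np.digitize(v, ve[1:-1]), 0, nbins - 1)
    # PMI of each sample's joint bin: log p(u,v) / (p(u) p(v))
    return np.log((joint[iu, iv] + eps) / (pu[iu] * pv[iv] + eps))

rng = np.random.default_rng(2)
u = rng.standard_normal(10000)
v = u + 0.5 * rng.standard_normal(10000)
score = pointwise_mi(u, v)
keep = score > np.quantile(score, 0.9)   # retain strongly associated points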

Open Access Article
Increased Sample Entropy in EEGs During the Functional Rehabilitation of an Injured Brain
Entropy 2019, 21(7), 698; https://doi.org/10.3390/e21070698
Received: 10 May 2019 / Revised: 3 July 2019 / Accepted: 6 July 2019 / Published: 16 July 2019
PDF Full-text (4038 KB) | HTML Full-text | XML Full-text
Abstract
Complex nerve remodeling occurs in the injured brain area during functional rehabilitation after a brain injury; however, its mechanism has not been thoroughly elucidated. Neural remodeling can lead to changes in the electrophysiological activity, which can be detected in an electroencephalogram (EEG). In this paper, we used EEG band energy, approximate entropy (ApEn), sample entropy (SampEn), and Lempel–Ziv complexity (LZC) features to characterize the intrinsic rehabilitation dynamics of the injured brain area, thus providing a means of detecting and exploring the mechanism of neurological remodeling during the recovery process after brain injury. Bilateral symmetrical EEGs were recorded in awake model rats from the injury group (n = 12) and the sham group (n = 12) on days 1, 4, and 7 after a unilateral brain injury. The open field test (OFT) experiments were performed in the following three groups: an injury group, a sham group, and a control group (n = 10). An analysis of the EEG data using the energy, ApEn, SampEn, and LZC features demonstrated that the increase in SampEn was associated with the functional recovery. After the brain injury, the energy values of the delta1 bands on day 4; the delta2 bands on days 4 and 7; the theta, alpha, and beta bands; and the values of ApEn, SampEn, and LZC of the cortical EEG signal on days 1, 4, and 7 were significantly lower in the injured brain area than in the non-injured area. During the recovery of the injured brain area, the values of the beta bands, ApEn, and SampEn of the injury group increased significantly and gradually became equal to the values of the sham group. The improvement in the motor function of the model rats significantly correlated with the increase in SampEn. This study provides a method based on EEG nonlinear features for measuring neural remodeling in injured brain areas during brain function recovery. The results may aid in the study of neural remodeling mechanisms. Full article
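
Of the features used, Lempel–Ziv complexity is the least standard to implement; a simplified phrase-counting sketch on a median-binarized signal follows (illustrative, not the study's exact parsing):

```python
import numpy as np

def lzc(signal):
    # binarize around the median, then count new phrases in a single pass
    s = ''.join('1' if v > np.median(signal) else '0' for v in signal)
    phrases, i, n = set(), 0, len(s)
    while i < n:
        j = i + 1
        while s[i:j] in phrases and j <= n:
            j += 1
        phrases.add(s[i:j])
        i = j
    # normalize by the asymptotic bound n / log2(n)
    return len(phrases) * np.log2(n) / n

rng = np.random.default_rng(3)
print(lzc(rng.standard_normal(4000)))   # close to 1 for random input
```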

Open Access Article
Entropy and Semi-Entropies of LR Fuzzy Numbers’ Linear Function with Applications to Fuzzy Programming
Entropy 2019, 21(7), 697; https://doi.org/10.3390/e21070697
Received: 14 June 2019 / Revised: 11 July 2019 / Accepted: 12 July 2019 / Published: 16 July 2019
PDF Full-text (554 KB) | HTML Full-text | XML Full-text
Abstract
As a crucial concept for characterizing uncertainty, entropy has been widely used in fuzzy programming problems, though it involves complicated calculations. To simplify the operations so as to broaden its applicable areas, this paper investigates the entropy within the framework of credibility theory and derives the formulas for calculating the entropy of regular LR fuzzy numbers by virtue of the inverse credibility distribution. By verifying the favorable property of this operator, a calculation formula for the entropy of a linear function is also proposed. Furthermore, considering the strength of semi-entropy in measuring one-sided uncertainty, the lower and upper semi-entropies, as well as the corresponding formulas, are suggested to handle return-oriented and cost-oriented problems, respectively. Finally, utilizing entropy and semi-entropies as risk measures, two types of entropy optimization models and their equivalent formulations derived from the proposed formulas are given according to different decision criteria, providing an effective modeling method for fuzzy programming from the perspective of entropy. The numerical examples demonstrate the high efficiency and good performance of the proposed methods in decision making. Full article
(This article belongs to the Section Multidisciplinary Applications)
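
For context, the credibilistic entropy this line of work builds on is, in the standard Li-Liu form (stated here from the credibility-theory literature, not taken from the paper itself):

$$H[\xi] = \int_{-\infty}^{+\infty} S\big(\mathrm{Cr}\{\xi = x\}\big)\,\mathrm{d}x, \qquad S(t) = -t\ln t - (1-t)\ln(1-t),$$

which for a triangular fuzzy variable $(a, b, c)$ evaluates to $(c-a)/2$.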

Open Access Review
Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere
Entropy 2019, 21(7), 696; https://doi.org/10.3390/e21070696
Received: 17 June 2019 / Accepted: 28 June 2019 / Published: 15 July 2019
PDF Full-text (3375 KB)
Abstract
The pillars of contemporary theoretical physics are classical mechanics, Maxwell electromagnetism, relativity, quantum mechanics, and Boltzmann–Gibbs (BG) statistical mechanics, including its connection with thermodynamics. The BG theory describes amazingly well the thermal equilibrium of a plethora of so-called simple systems. However, BG statistical mechanics and its basic additive entropy $S_{BG}$ started, in recent decades, to exhibit failures or inadequacies in an increasing number of complex systems. The emergence of such intriguing features became apparent in quantum systems as well, such as black holes and other area-law-like scenarios for the von Neumann entropy. In a different arena, the efficiency of the Shannon entropy—as the BG functional is currently called in engineering and communication theory—started to be perceived as not necessarily optimal in the processing of images (e.g., medical ones) and time series (e.g., economic ones). Such is the case in the presence of generic long-range space correlations, long memory, sub-exponential sensitivity to the initial conditions (hence vanishing largest Lyapunov exponents), and similar features. Finally, we witnessed, during the last two decades, an explosion of asymptotically scale-free complex networks. This wide range of important systems eventually gave support, since 1988, to the generalization of the BG theory. Nonadditive entropies generalizing the BG one and their consequences have been introduced and intensively studied worldwide. The present review focuses on these concepts and their predictions, verifications, and applications in physics and elsewhere. Some selected examples (in quantum information, high- and low-energy physics, low-dimensional nonlinear dynamical systems, earthquakes, turbulence, long-range interacting systems, and scale-free networks) illustrate successful applications. The grounding thermodynamical framework is briefly described as well. Full article
(This article belongs to the Section Entropy Reviews)
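
The generalization reviewed is the nonadditive entropy $S_q$, which recovers the BG functional as $q \to 1$:

$$S_q = k\,\frac{1 - \sum_i p_i^q}{q - 1}, \qquad \lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i = S_{BG},$$

with the characteristic nonadditivity, for independent systems $A$ and $B$,

$$\frac{S_q(A+B)}{k} = \frac{S_q(A)}{k} + \frac{S_q(B)}{k} + (1-q)\,\frac{S_q(A)}{k}\,\frac{S_q(B)}{k}.$$
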
Open Access Article
A Method for Improving Controlling Factors Based on Information Fusion for Debris Flow Susceptibility Mapping: A Case Study in Jilin Province, China
Entropy 2019, 21(7), 695; https://doi.org/10.3390/e21070695
Received: 16 May 2019 / Revised: 12 June 2019 / Accepted: 12 July 2019 / Published: 15 July 2019
PDF Full-text (4877 KB) | HTML Full-text | XML Full-text
Abstract
Debris flow is one of the most frequently occurring geological disasters in Jilin province, China, and such disasters often result in the loss of human life and property. The objective of this study is to propose and verify an information fusion (IF) method in order to improve the factors controlling debris flow as well as the accuracy of the debris flow susceptibility map. Nine layers of factors controlling debris flow (i.e., topography, elevation, annual precipitation, distance to water system, slope angle, slope aspect, population density, lithology and vegetation coverage) were taken as the predictors. The controlling factors were improved by using the IF method. Based on the original controlling factors and the improved controlling factors, debris flow susceptibility maps were developed using the statistical index (SI) model, the analytic hierarchy process (AHP) model, the random forest (RF) model, and their four integrated models. The results were compared using the receiver operating characteristic (ROC) curve, and the spatial consistency of the debris flow susceptibility maps was analyzed using Spearman’s rank correlation coefficients. The results show that the IF method used to improve the controlling factors can effectively enhance the performance of the debris flow susceptibility maps, with the IF-SI-RF model exhibiting the best performance in terms of debris flow susceptibility mapping. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)

Open Access Article
Twenty Years of Entropy Research: A Bibliometric Overview
Entropy 2019, 21(7), 694; https://doi.org/10.3390/e21070694
Received: 23 May 2019 / Revised: 8 July 2019 / Accepted: 10 July 2019 / Published: 15 July 2019
PDF Full-text (38676 KB) | HTML Full-text | XML Full-text
Abstract
Entropy, founded in 1999, is an emerging international journal in the field of entropy and information studies. In 2018, the journal celebrated its 20th anniversary, so it is fitting to conduct a retrospective as a birthday gift. In accordance with Entropy’s distinctive name and research area, this paper provides a bibliometric analysis method to not only look back at the vicissitudes of the entire entropy topic, but also witness the journal’s growth and influence during this process. Based on 123,063 records extracted from the Web of Science, the work in sequence analyzes publication outputs, highly cited literature, and reference co-citation networks, in the aspects of the topic and the journal, respectively. The results indicate that the topic has now become a tremendous research domain and is still roaring ahead with great potential, widely researched across many disciplines. The most significant hotspots so far are suggested as the theoretical or practical innovation of graph entropy, permutation entropy, and pseudo-additive entropy. Furthermore, with the rapid growth in recent years, Entropy has attracted many dominant authors of the topic and exhibits a distinctive geographical publication distribution. More importantly, in the midst of the topic, the journal has made enormous contributions to major research areas, particularly serving as a spearhead in the studies of multiscale entropy and permutation entropy. Full article
(This article belongs to the Section Entropy Reviews)

Open Access Article
A Feature Extraction Method of Ship-Radiated Noise Based on Fluctuation-Based Dispersion Entropy and Intrinsic Time-Scale Decomposition
Entropy 2019, 21(7), 693; https://doi.org/10.3390/e21070693
Received: 20 June 2019 / Revised: 12 July 2019 / Accepted: 13 July 2019 / Published: 15 July 2019
PDF Full-text (11215 KB) | HTML Full-text | XML Full-text
Abstract
To improve the feature extraction of ship-radiated noise in a complex ocean environment, fluctuation-based dispersion entropy is used to extract the features of ten types of ship-radiated noise. Since fluctuation-based dispersion entropy only analyzes the ship-radiated noise signal at a single scale and cannot distinguish different types of ship-radiated noise effectively, a new method of ship-radiated noise feature extraction is proposed based on fluctuation-based dispersion entropy (FDispEn) and intrinsic time-scale decomposition (ITD). Firstly, ten types of ship-radiated noise signals are decomposed into a series of proper rotation components (PRCs) by ITD, and the FDispEn of each PRC is calculated. Then, the correlation between each PRC and the original signal is calculated, and the FDispEn of each PRC is analyzed to select the Max-relative PRC fluctuation-based dispersion entropy as the feature parameter. Finally, by comparing the Max-relative PRC fluctuation-based dispersion entropy of a certain number of the above ten types of ship-radiated noise signals with FDispEn, it is discovered that the Max-relative PRC fluctuation-based dispersion entropy is at the same level for similar ship-radiated noise, but is distinct for different types of ship-radiated noise. The Max-relative PRC fluctuation-based dispersion entropy as the feature vector is sent into a support vector machine (SVM) classifier to classify and recognize ten types of ship-radiated noise. The experimental results demonstrate that the recognition rate of the proposed method reaches 95.8763%. Consequently, the proposed method can effectively achieve the classification of ship-radiated noise. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)
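
A minimal sketch of basic dispersion entropy follows; the fluctuation-based variant used in the paper additionally takes differences between adjacent mapped elements, which this sketch omits (class count c and embedding dimension m are illustrative defaults):

```python
import numpy as np
from scipy.stats import norm

def disp_en(x, c=6, m=3):
    # map the signal to c classes through the normal CDF
    y = norm.cdf(x, loc=np.mean(x), scale=np.std(x))
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # count dispersion patterns of length m
    patterns = {}
    for i in range(len(z) - m + 1):
        key = tuple(z[i:i + m])
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))     # Shannon entropy of pattern distribution

rng = np.random.default_rng(4)
print(disp_en(rng.standard_normal(5000)))
```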

Open Access Article
Why the Tsirelson Bound? Bub’s Question and Fuchs’ Desideratum
Entropy 2019, 21(7), 692; https://doi.org/10.3390/e21070692
Received: 30 June 2019 / Revised: 7 July 2019 / Accepted: 12 July 2019 / Published: 15 July 2019
PDF Full-text (1182 KB) | HTML Full-text | XML Full-text
Abstract
To answer Wheeler’s question “Why the quantum?” via quantum information theory according to Bub, one must explain both why the world is quantum rather than classical and why the world is quantum rather than superquantum, i.e., “Why the Tsirelson bound?” We show that the quantum correlations and quantum states corresponding to the Bell basis states, which uniquely produce the Tsirelson bound for the Clauser–Horne–Shimony–Holt (CHSH) quantity, can be derived from conservation per no preferred reference frame (NPRF). A reference frame in this context is defined by a measurement configuration, just as with the light postulate of special relativity. We therefore argue that the Tsirelson bound is ultimately based on NPRF, just as the postulates of special relativity are. This constraint-based/principle answer to Bub’s question addresses Fuchs’ desideratum that we “take the structure of quantum theory and change it from this very overt mathematical speak ... into something like [special relativity].” Thus, the answer to Bub’s question per Fuchs’ desideratum is, “the Tsirelson bound obtains due to conservation per NPRF”. Full article
(This article belongs to the Section Quantum Information)
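
For reference, the CHSH quantity and the three bounds at issue:

$$S_{\mathrm{CHSH}} = E(a,b) + E(a,b') + E(a',b) - E(a',b'),$$

with $|S_{\mathrm{CHSH}}| \le 2$ for classical correlations, $|S_{\mathrm{CHSH}}| \le 2\sqrt{2}$ for quantum correlations (the Tsirelson bound), and $|S_{\mathrm{CHSH}}| \le 4$ for logically allowed superquantum correlations.
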

Open Access Article
An Information Entropy-Based Modeling Method for the Measurement System
Entropy 2019, 21(7), 691; https://doi.org/10.3390/e21070691
Received: 15 June 2019 / Revised: 6 July 2019 / Accepted: 12 July 2019 / Published: 15 July 2019
PDF Full-text (3243 KB) | HTML Full-text | XML Full-text
Abstract
Measurement is a key method to obtain information from the real world and is widely used in human life. A unified model of measurement systems is critical to the design and optimization of measurement systems. However, the existing models of measurement systems are too abstract. To a certain extent, this makes it difficult to have a clear overall understanding of measurement systems and how they implement information acquisition. Meanwhile, this also leads to limitations in the application of these models. Information entropy is a measure of the information or uncertainty of a random variable and has strong representation ability. In this paper, an information entropy-based modeling method for measurement systems is proposed. First, a modeling idea based on the viewpoint of information and uncertainty is described. Second, an entropy balance equation based on the chain rule for entropy is proposed for system modeling. Then, the entropy balance equation is used to establish the information entropy-based model of the measurement system. Finally, three cases of typical measurement units or processes are analyzed using the proposed method. Compared with the existing modeling approaches, the proposed method considers the modeling problem from the perspective of information and uncertainty. It focuses on the information loss of the measurand in the transmission process and the characterization of the specific role of the measurement unit. The proposed model can intuitively describe the processing and changes of information in the measurement system. It does not conflict with existing models of measurement systems but complements them, thus further enriching existing measurement theory. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
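
The chain-rule ingredient the entropy balance equation builds on, together with the standard information-theoretic reading of measurement (textbook identities, stated for orientation rather than as the paper's full model):

$$H(X, Y) = H(X) + H(Y \mid X), \qquad I(X; Y) = H(X) - H(X \mid Y),$$

where $X$ is the measurand, $Y$ the measurement output, and the conditional entropy $H(X \mid Y)$ quantifies the residual uncertainty, i.e., the information about the measurand lost in transmission.
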

Open Access Article
A Sequence-Based Damage Identification Method for Composite Rotors by Applying the Kullback–Leibler Divergence, a Two-Sample Kolmogorov–Smirnov Test and a Statistical Hidden Markov Model
Entropy 2019, 21(7), 690; https://doi.org/10.3390/e21070690
Received: 8 June 2019 / Revised: 9 July 2019 / Accepted: 10 July 2019 / Published: 15 July 2019
PDF Full-text (1500 KB) | HTML Full-text | XML Full-text
Abstract
Composite structures undergo a gradual damage evolution from initial inter-fibre cracks to extended damage up to failure. However, most composites could remain in service despite the existence of damage. A prerequisite for a service extension is reliable and component-specific damage identification. Therefore, a vibration-based damage identification method is presented that takes into consideration the gradual damage behaviour and the resulting changes of the structural dynamic behaviour of composite rotors. These changes are transformed into a sequence of distinct states and used as an input database for three diagnostic models, based on the Kullback–Leibler divergence, the two-sample Kolmogorov–Smirnov test and a statistical hidden Markov model. To identify the present damage state based on the damage-dependent modal properties, a sequence-based diagnostic system has been developed, which estimates the similarity between the present unclassified sequence and obtained sequences of damage-dependent vibration responses. The diagnostic performance evaluation delivers promising results for the further development of the proposed diagnostic method. Full article
(This article belongs to the Section Multidisciplinary Applications)
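
Two of the three diagnostic ingredients are standard distribution comparisons; a SciPy sketch on placeholder "healthy" and "damaged" feature samples (illustrative data, not the paper's modal properties):

```python
import numpy as np
from scipy.stats import entropy, ks_2samp

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, 2000)   # stand-ins for damage-dependent
damaged = rng.normal(0.3, 1.2, 2000)   # modal-property samples

# KL divergence between binned histograms on a common support
bins = np.histogram_bin_edges(np.concatenate([healthy, damaged]), bins=50)
p, _ = np.histogram(healthy, bins=bins, density=True)
q, _ = np.histogram(damaged, bins=bins, density=True)
p, q = p + 1e-12, q + 1e-12            # avoid zero bins
print("KL(p||q):", entropy(p, q))      # scipy normalizes, then sums p log(p/q)

# two-sample Kolmogorov-Smirnov test directly on the raw samples
stat, pval = ks_2samp(healthy, damaged)
print("KS statistic:", stat, "p-value:", pval)
```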

Open Access Article
Heat Transfer Coefficients Analysis in a Helical Double-Pipe Evaporator: Nusselt Number Correlations through Artificial Neural Networks
Entropy 2019, 21(7), 689; https://doi.org/10.3390/e21070689
Received: 28 May 2019 / Revised: 28 June 2019 / Accepted: 9 July 2019 / Published: 14 July 2019
PDF Full-text (2181 KB) | HTML Full-text | XML Full-text
Abstract
In this study, two empirical correlations of the Nusselt number, based on two artificial neural networks (ANN), were developed to determine the heat transfer coefficients for each section of a vertical helical double-pipe evaporator with water as the working fluid. Each ANN was obtained using an experimental database of 1109 values obtained from an evaporator coupled to an absorption heat transformer with energy recycling. The Nusselt number in the annular section was estimated based on the modified Wilson plot method solved by an ANN. This model included the Reynolds and Prandtl numbers as input variables and three neurons in its hidden layer. The Nusselt number in the inner section was estimated based on the Rohsenow equation, solved by an ANN. This ANN model included the liquid Prandtl and Jakob numbers as input variables and one neuron in its hidden layer. The coefficients of determination were $R^2 > 0.99$ for both models. Both ANN models satisfied the dimensionless condition of the Nusselt number. The Levenberg–Marquardt algorithm was chosen to determine the optimum values of the weights and biases. The transfer functions used for the learning process were the hyperbolic tangent sigmoid in the hidden layer and the linear function in the output layer. The Nusselt numbers determined by the ANNs proved adequate to predict the values of the heat transfer coefficients of a vertical helical double-pipe evaporator that considered biphasic flow, with an accuracy of ±0.2 for the annular Nusselt number and ±4 for the inner Nusselt number. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)
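
The correlation form described, a one-hidden-layer tanh network mapping (Re, Pr) to an annular Nusselt number, looks schematically as follows (random placeholder weights and crude input scaling; the paper fits three hidden neurons to the 1109 experimental points):

```python
import numpy as np

rng = np.random.default_rng(6)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)  # 3 hidden neurons
W2, b2 = rng.standard_normal(3), rng.standard_normal()

def nusselt_annular(re, pr):
    z = np.array([re / 1e4, pr / 10.0])   # placeholder input normalization
    h = np.tanh(W1 @ z + b1)              # hidden layer: tanh sigmoid
    return float(W2 @ h + b2)             # output layer: linear

print(nusselt_annular(5000.0, 4.0))
```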

Open Access Article
A Hybrid Information Reconciliation Method for Physical Layer Key Generation
Entropy 2019, 21(7), 688; https://doi.org/10.3390/e21070688
Received: 31 May 2019 / Revised: 8 July 2019 / Accepted: 12 July 2019 / Published: 14 July 2019
PDF Full-text (850 KB)
Abstract
Physical layer key generation (PKG) has become a research focus as it solves the key distribution problem, which is difficult in traditional cryptographic mechanisms. Information reconciliation is a critical process in PKG to obtain symmetric keys. Various reconciliation schemes have been proposed, including the error detection protocol-based approach (EDPA) and the error correction code-based approach (ECCA). Both EDPA and ECCA have advantages and drawbacks regarding information leakage, interaction delay, and computation complexity. In this paper, we choose the BBBSS protocol from EDPA and the BCH code from ECCA as a case study, analyzing their comprehensive efficiency performance versus pass number and bit disagreement ratio (BDR), respectively. Next, we integrate the strengths of the two to design a new hybrid information reconciliation protocol (HIRP). The design of HIRP consists of three main phases, i.e., training, table lookup, and testing. To comprehensively evaluate the reconciliation schemes, we propose a novel efficiency metric to achieve a balance of corrected bits, information leakage, time delay, and computation time, which represents the effectively corrected bits per unit time. The simulation results show that our proposed method outperforms other reconciliation schemes in comprehensive reconciliation efficiency. The average improvement in efficiency is 2.48 and 22.36 times over BBBSS and the BCH code, respectively, when the BDR ranges from 0.5% to 11.5%. Compared to the BBBSS protocol and the BCH code, HIRP lies at a mid-level in terms of information leakage and computation time cost. Moreover, with the lowest time delay cost, HIRP reaches the highest reconciliation efficiency. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
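
The core primitive of BBBSS-style error-detection reconciliation is parity-guided binary search; a toy sketch follows (the real protocol runs multiple permuted passes and accounts for the parity bits leaked to the eavesdropper):

```python
def parity(bits, lo, hi):
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    # precondition: the two blocks disagree in parity on [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid          # error is in the left half
        else:
            lo = mid          # error is in the right half
    return lo                 # index of an odd-occurring disagreement

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1                   # one channel-induced disagreement
i = binary_locate(alice, bob, 0, len(bob))
bob[i] ^= 1
print(i, bob == alice)        # -> 5 True
```
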
Open Access Article
A Novel Method for Intelligent Single Fault Detection of Bearings Using SAE and Improved D–S Evidence Theory
Entropy 2019, 21(7), 687; https://doi.org/10.3390/e21070687
Received: 14 May 2019 / Revised: 6 July 2019 / Accepted: 10 July 2019 / Published: 13 July 2019
PDF Full-text (7871 KB) | HTML Full-text | XML Full-text
Abstract
In order to realize single fault detection (SFD) from multi-fault coupled bearing data, and to support further research on the multi-fault situation of bearings, this paper proposes a method based on feature self-extraction with a Sparse Auto-Encoder (SAE) and result fusion with improved Dempster–Shafer evidence theory (D–S). Multi-fault signal compression features of bearings were extracted by the SAE from multiple vibration sensors’ data. Data sets were constructed from the extracted compression features to train the Support Vector Machine (SVM) according to the rule of single fault detection (R-SFD) proposed in this paper. Fault detection results were obtained by the improved D–S evidence theory, which was implemented by correcting the zero factor in the Basic Probability Assignment (BPA) and modifying the evidence weights by the Pearson Correlation Coefficient (PCC). Extensive evaluations of the proposed method on experiment platform datasets showed that it could realize single fault detection from multi-fault bearings. Fault detection accuracy increases as the output feature dimension of the SAE increases; when the feature dimension reached 200, the average detection accuracy of the three sensors for bearing inner, outer, and ball faults reached 87.36%, 87.86% and 84.46%, respectively. Fused across the sensors by the improved Dempster–Shafer evidence theory (IDS), the detection accuracy for the three fault types reached 99.12%, 99.33% and 98.46%, which is 0.38%, 2.06% and 0.76% higher, respectively, than with the traditional D–S evidence theory. This indicates the effectiveness of improving the D–S evidence theory through PCC-based evidence weighting. Full article
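
The fusion step rests on Dempster's combination rule; a toy sketch over the frame {inner, outer, ball} (the paper's improvements, zero-factor correction of the BPA and PCC-based evidence weighting, are omitted here):

```python
from itertools import product

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y              # mass falling on the empty set
    k = 1.0 - conflict                     # renormalize away the conflict
    return {a: v / k for a, v in combined.items()}

F = frozenset
m1 = {F({'inner'}): 0.6, F({'outer'}): 0.1, F({'inner', 'outer', 'ball'}): 0.3}
m2 = {F({'inner'}): 0.5, F({'ball'}): 0.2, F({'inner', 'outer', 'ball'}): 0.3}
print(dempster(m1, m2))
```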

Open Access Feature Paper Article
How Much Would You Pay to Change a Game before Playing It?
Entropy 2019, 21(7), 686; https://doi.org/10.3390/e21070686
Received: 18 June 2019 / Revised: 8 July 2019 / Accepted: 11 July 2019 / Published: 13 July 2019
PDF Full-text (507 KB) | HTML Full-text | XML Full-text
Abstract
Envelope theorems provide a differential framework for determining how much a rational decision maker (DM) is willing to pay to alter the parameters of a strategic scenario. We generalize this framework to the case of a boundedly rational DM and arbitrary solution concepts. We focus on comparing and contrasting the case where DM’s decision to pay to change the parameters is observed by all other players against the case where DM’s decision is private information. We decompose DM’s willingness to pay a given amount into a sum of three factors: (1) the direct effect a parameter change would have on DM’s payoffs in the future strategic scenario, holding strategies of all players constant; (2) the effect due to DM changing its strategy as it reacts to a change in the game parameters, with the strategies of the other players in that scenario held constant; and (3) the effect there would be due to other players reacting to the change in the game parameters (could they observe them), with the strategy of DM held constant. We illustrate these results with the quantal response equilibrium and the matching pennies game and discuss how the willingness to pay captures DM’s anticipation of their future irrationality. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open Access Article
Thermodynamics of Fatigue: Degradation-Entropy Generation Methodology for System and Process Characterization and Failure Analysis
Entropy 2019, 21(7), 685; https://doi.org/10.3390/e21070685
Received: 14 May 2019 / Revised: 13 June 2019 / Accepted: 9 July 2019 / Published: 12 July 2019
PDF Full-text (3664 KB) | HTML Full-text | XML Full-text
Abstract
Formulated is a new instantaneous fatigue model and predictor based on ab initio irreversible thermodynamics. The method combines the first and second laws of thermodynamics with the Helmholtz free energy, then applies the result to the degradation-entropy generation theorem to relate a desired fatigue measure—stress, strain, cycles or time to failure—to the loads, materials and environmental conditions (including temperature and heat) via the irreversible entropies generated by the dissipative processes that degrade the fatigued material. The formulations are then verified with fatigue data from the literature, for a steel shaft under bending and torsion. A near 100% agreement between the fatigue model and measurements is achieved. The model also introduces new material and design parameters to characterize fatigue. Full article
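
For orientation, the degradation-entropy generation theorem invoked here can be stated compactly (following the Bryant-Khonsari-Ling formulation from the literature; generic symbols, not the paper's notation):

$$\dot{w} = \sum_i B_i\, \dot{S}'_i, \qquad B_i = \frac{\partial w}{\partial S'_i},$$

where $w$ is the chosen degradation measure (stress, strain, cycles or time to failure), $\dot{S}'_i$ is the entropy generation rate of the $i$-th dissipative process, and the $B_i$ are the degradation coefficients relating the two.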

Open Access Article
Quantifying the Multiscale Predictability of Financial Time Series by an Information-Theoretic Approach
Entropy 2019, 21(7), 684; https://doi.org/10.3390/e21070684
Received: 10 June 2019 / Revised: 4 July 2019 / Accepted: 8 July 2019 / Published: 12 July 2019
PDF Full-text (625 KB) | XML Full-text
Abstract
Making predictions about the dynamics of a system’s time series is a topic of wide interest. A fundamental prerequisite of this work is to evaluate the predictability of the system over a wide range of time scales. In this paper, we propose an information-theoretic tool, multiscale entropy difference (MED), to evaluate the predictability of nonlinear financial time series on multiple time scales. We discuss the predictability of isolated and open systems, respectively. Evidence from the analysis of the logistic map, the Hénon map, and the Lorenz system manifests that the MED method is accurate, robust, and has a wide range of applications. We apply the new method to five-minute high-frequency data and the daily data of Chinese stock markets. Results show that the logarithmic change of stock price (logarithmic return) has a lower possibility of being predicted than the volatility. The logarithmic change of trading volume contributes significantly to the prediction of the logarithmic change of stock price on multiple time scales. The daily data are found to have a larger possibility of being predicted than the five-minute high-frequency data. This indicates that arbitrage opportunities exist in the Chinese stock markets, which thus cannot be described by the efficient market hypothesis (EMH). Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
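
The multiscale ingredient common to this family of methods is coarse-graining; a minimal sketch (the MED statistic itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def coarse_grain(x, tau):
    # scale-tau series: non-overlapping window averages of the original data
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(7)
x = rng.standard_normal(3000)
scales = {tau: coarse_grain(x, tau) for tau in range(1, 11)}
# evaluating an entropy estimator on each scales[tau] gives the multiscale profile
```
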
Open Access Article
An Entropy Regularization k-Means Algorithm with a New Measure of between-Cluster Distance in Subspace Clustering
Entropy 2019, 21(7), 683; https://doi.org/10.3390/e21070683
Received: 15 April 2019 / Revised: 6 July 2019 / Accepted: 10 July 2019 / Published: 12 July 2019
PDF Full-text (736 KB) | HTML Full-text | XML Full-text
Abstract
Although within-cluster information is commonly used in most clustering approaches, other important information such as between-cluster information is rarely considered in some cases. Hence, in this study, we propose a novel measure of between-cluster distance in subspace clustering, which maximizes the distance between the center of a cluster and the points that do not belong to this cluster. Based on this idea, we first design an optimization objective function integrating the between-cluster distance and entropy regularization in this paper. Then, updating rules are given by theoretical analysis. In the following, the properties of our proposed algorithm are investigated, and the performance is evaluated experimentally using two synthetic and seven real-life datasets. Finally, the experimental studies demonstrate that the results of the proposed algorithm (ERKM) outperform most existing state-of-the-art k-means-type clustering algorithms in most cases. Full article

Open Access Feature Paper Article
Statistical Mechanics-Based Schrödinger Treatment of Gravity
Entropy 2019, 21(7), 682; https://doi.org/10.3390/e21070682
Received: 22 April 2019 / Revised: 21 June 2019 / Accepted: 9 July 2019 / Published: 12 July 2019
PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
The entropic gravity conception proposes that what has been traditionally interpreted as unobserved dark matter might be merely the product of quantum effects. These effects would produce a novel sort of positive energy that translates into dark matter via $E = mc^2$. In the case of axions, this perspective has been shown to yield quite sensible, encouraging results [DOI:10.13140/RG.2.2.17894.88641]. Therein, a simple Schrödinger mechanism was utilized, in which Schrödinger’s celebrated equation is solved with a potential function based on the microscopic Verlinde entropic force advanced in [Physica A 511 (2018) 139]. In this paper, we revisit this technique with regard to fermions’ behavior (specifically, baryons). Full article
(This article belongs to the Special Issue Entropy and Gravitation)

Open Access Article
Information Geometry of Spatially Periodic Stochastic Systems
Entropy 2019, 21(7), 681; https://doi.org/10.3390/e21070681
Received: 8 June 2019 / Revised: 4 July 2019 / Accepted: 10 July 2019 / Published: 12 July 2019
PDF Full-text (590 KB) | HTML Full-text | XML Full-text
Abstract
We explore the effect of different spatially periodic, deterministic forces on the information geometry of stochastic processes. The three forces considered are $f_0 = \sin(\pi x)/\pi$ and $f_\pm = \sin(\pi x)/\pi \pm \sin(2\pi x)/2\pi$, with $f_-$ chosen to be particularly flat (locally cubic) at the equilibrium point $x = 0$, and $f_+$ particularly flat at the unstable fixed point $x = 1$. We numerically solve the Fokker–Planck equation with an initial condition consisting of a periodically repeated Gaussian peak centred at $x = \mu$, with $\mu$ in the range $[0, 1]$. The strength $D$ of the stochastic noise is in the range $10^{-4}$–$10^{-6}$. We study the details of how these initial conditions evolve toward the final equilibrium solutions and elucidate the important consequences of the interplay between an initial PDF and a force. For initial positions close to the equilibrium point $x = 0$, the peaks largely maintain their shape while moving. In contrast, for initial positions sufficiently close to the unstable point $x = 1$, there is a tendency for the peak to slump in place and broaden considerably before reconstituting itself at the equilibrium point. A consequence of this is that the information length $L$, the total number of statistically distinguishable states that the system evolves through, is smaller for initial positions closer to the unstable point than for more intermediate values. We find that $L$ as a function of initial position $\mu$ is qualitatively similar to the force, including the differences between $f_0$ and $f_\pm$, illustrating the value of information length as a useful diagnostic of the underlying force in the system. Full article
(This article belongs to the Special Issue Statistical Mechanics and Mathematical Physics)
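
The two defining equations behind the study, the Fokker-Planck equation with drift $f$ and the information length, in the form conventionally used in this line of work:

$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[f(x)\,p(x,t)\big] + D\,\frac{\partial^2 p(x,t)}{\partial x^2},$$

$$\mathcal{E}(t) = \int \mathrm{d}x\; \frac{1}{p(x,t)} \left(\frac{\partial p(x,t)}{\partial t}\right)^{2}, \qquad L(t) = \int_0^t \sqrt{\mathcal{E}(t')}\,\mathrm{d}t',$$

so that $L$ accumulates the number of statistically distinguishable states traversed as the PDF relaxes toward equilibrium.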

Open Access Article
A Comprehensive Fault Diagnosis Method for Rolling Bearings Based on Refined Composite Multiscale Dispersion Entropy and Fast Ensemble Empirical Mode Decomposition
Entropy 2019, 21(7), 680; https://doi.org/10.3390/e21070680
Received: 20 June 2019 / Revised: 7 July 2019 / Accepted: 9 July 2019 / Published: 11 July 2019
PDF Full-text (2510 KB) | HTML Full-text | XML Full-text
Abstract
This study presents a comprehensive fault diagnosis method for rolling bearings. The method includes two parts: fault detection and fault classification. In the stage of fault detection, a threshold based on refined composite multiscale dispersion entropy (RCMDE) at a local maximum scale is defined to judge the health state of rolling bearings. If the bearing is faulty, a generalized multi-scale feature extraction method is developed to fully extract fault information by combining fast ensemble empirical mode decomposition (FEEMD) and RCMDE. Firstly, the fault vibration signals are decomposed into a set of intrinsic mode functions (IMFs) by FEEMD. Secondly, the RCMDE value of multiple IMFs is calculated to generate a candidate feature pool. Then, the maximum-relevance and minimum-redundancy (mRMR) approach is employed to select the sensitive features from the candidate feature pool to construct the final feature vectors, and the final feature vectors are fed into a random forest (RF) classifier to identify different fault working conditions. Finally, experiments and comparative research are carried out to verify the performance of the proposed method. The results show that the proposed method can detect faults effectively. Meanwhile, it has a more robust and excellent ability to identify different fault types and severities compared with other conventional approaches. Full article
(This article belongs to the Section Signal and Data Analysis)

Open Access Article
Time–Energy and Time–Entropy Uncertainty Relations in Nonequilibrium Quantum Thermodynamics under Steepest-Entropy-Ascent Nonlinear Master Equations
Entropy 2019, 21(7), 679; https://doi.org/10.3390/e21070679
Received: 13 June 2019 / Revised: 7 July 2019 / Accepted: 8 July 2019 / Published: 11 July 2019
PDF Full-text (644 KB) | HTML Full-text | XML Full-text
Abstract
In the domain of nondissipative unitary Hamiltonian dynamics, the well-known Mandelstam–Tamm–Messiah time–energy uncertainty relation $\tau_F \Delta H \geq \hbar/2$ provides a general lower bound to the characteristic time $\tau_F = \Delta F/|\mathrm{d}\langle F \rangle/\mathrm{d}t|$ with which the mean value of a generic quantum observable $F$ can change with respect to the width $\Delta F$ of its uncertainty distribution (square root of $F$ fluctuations). A useful practical consequence is that in unitary dynamics the states with longer lifetimes are those with smaller energy uncertainty $\Delta H$ (square root of energy fluctuations). Here we show that when unitary evolution is complemented with a steepest-entropy-ascent model of dissipation, the resulting nonlinear master equation entails that these lower bounds get modified and depend also on the entropy uncertainty $\Delta S$ (square root of entropy fluctuations). For example, we obtain the time–energy-and-time–entropy uncertainty relation $(2\tau_F \Delta H/\hbar)^2 + (\tau_F \Delta S/k_B \tau)^2 \geq 1$, where $\tau$ is a characteristic dissipation time functional that for each given state defines the strength of the nonunitary, steepest-entropy-ascent part of the assumed master equation. For purely dissipative dynamics this reduces to the time–entropy uncertainty relation $\tau_F \Delta S \geq k_B \tau$, meaning that the nonequilibrium dissipative states with longer lifetime are those with smaller entropy uncertainty $\Delta S$. Full article
(This article belongs to the Special Issue Entropy Production and Its Applications: From Cosmology to Biology)

Open Access Article
Coexisting Attractors and Multistability in a Simple Memristive Wien-Bridge Chaotic Circuit
Entropy 2019, 21(7), 678; https://doi.org/10.3390/e21070678
Received: 20 May 2019 / Revised: 6 July 2019 / Accepted: 6 July 2019 / Published: 11 July 2019
PDF Full-text (12122 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new voltage-controlled memristor is presented. The mathematical expression of this memristor has an absolute value term, so it is called an absolute voltage-controlled memristor. The proposed memristor is locally active, which is proved by its DC V–I (voltage–current) plot. A simple third-order Wien-bridge chaotic circuit without an inductor is constructed on the basis of the presented memristor. The dynamical behaviors of the simple chaotic system are analyzed in this paper. The main properties of this system are coexisting attractors and multistability. Furthermore, an analog circuit of this chaotic system is realized in the Multisim software. The multistability of the proposed system can enlarge the key space in encryption, which improves the encryption performance. Therefore, the proposed chaotic system can be used as a pseudo-random sequence generator to provide key sequences for digital encryption systems. Thus, the chaotic system is discretized and implemented by Digital Signal Processing (DSP) technology. The National Institute of Standards and Technology (NIST) test and Approximate Entropy analysis of the proposed chaotic system are conducted in this paper. Full article

Open Access Review
A Review of the Classical Canonical Ensemble Treatment of Newton’s Gravitation
Entropy 2019, 21(7), 677; https://doi.org/10.3390/e21070677
Received: 9 May 2019 / Revised: 5 July 2019 / Accepted: 8 July 2019 / Published: 11 July 2019
PDF Full-text (329 KB) | HTML Full-text | XML Full-text
Abstract
It is common lore that the canonical gravitational partition function Z associated with the classical Boltzmann–Gibbs (BG) exponential distribution cannot be built up because of mathematical pitfalls. The integral needed for writing up Z diverges. We review here how to avoid this pitfall and obtain a (classical) statistical mechanics of Newton’s gravitation. This is done using (1) the analytical extension treatment of Gradshteyn and Ryzhik and (2) the well-known dimensional regularization technique. Full article
(This article belongs to the Special Issue Entropy and Gravitation)
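
The pitfall is easy to state: for the classical Kepler-type Hamiltonian the momentum integral of $Z$ is a finite Gaussian, but the configuration integral is not (generic symbols):

$$Z = \int \mathrm{d}^3p\,\mathrm{d}^3q\; e^{-\beta H}, \qquad H = \frac{p^2}{2m} - \frac{GmM}{q},$$

where the radial configuration integral $\int_0^\infty q^2\, e^{\beta GmM/q}\,\mathrm{d}q$ diverges at both $q \to 0$ and $q \to \infty$. This is the divergence that the analytical extension and dimensional regularization treatments reviewed here circumvent.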
