Table of Contents

Entropy, Volume 20, Issue 8 (August 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: We apply Popcoen to various conformations of the Cas4 protein SSO0001 of Sulfolobus solfataricus, a [...]
Displaying articles 1-78
Open Access Article Approximate Bayesian Computation for Estimating Parameters of Data-Consistent Forbush Decrease Model
Entropy 2018, 20(8), 622; https://doi.org/10.3390/e20080622
Received: 20 July 2018 / Revised: 14 August 2018 / Accepted: 20 August 2018 / Published: 20 August 2018
PDF Full-text (2883 KB) | HTML Full-text | XML Full-text
Abstract
Realistic modeling of complex physical phenomena is always a challenging task. The main problem usually concerns the uncertainties surrounding the model input parameters, especially when not all information about the modeled phenomenon is known. In such cases, Approximate Bayesian Computation (ABC) methodology may be helpful. ABC is based on a comparison of the model output data with the experimental data, in order to estimate the best set of input parameters of the particular model. In this paper, we present a framework applying the ABC methodology to estimate the parameters of a model of the Forbush decrease (Fd) of the galactic cosmic ray intensity. The Fd is modeled by the numerical solution of the Fokker–Planck equation in five-dimensional space (three spatial variables, time and particle energy). The most problematic aspect of Fd modeling is the lack of detailed knowledge about the spatial and temporal profiles of the parameters responsible for the creation of the Fd. Among these parameters, the diffusion coefficient plays a central role. We employ the ABC Sequential Monte Carlo algorithm, scanning the space of the diffusion coefficient parameters within the region of the heliosphere where the Fd is created. The correctness of the proposed parameters is assessed by comparing the model output data with the experimental data of the galactic cosmic ray intensity. Particular attention is paid to the rigidity dependence of the rigidity spectrum exponent. The proposed framework is applied to create a model of the Fd observed by neutron monitors and a ground muon telescope in November 2004. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
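As a hedged illustration of the ABC idea described in the abstract above (not the authors' Fokker–Planck pipeline), the following sketch runs basic ABC rejection sampling on a toy Gaussian forward model; the function names and the mean-based summary statistic are illustrative assumptions:

```python
import random

def simulate(theta, n=100, seed=None):
    # Toy forward model standing in for the Fokker-Planck solver:
    # n Gaussian samples with mean theta, unit variance.
    rng = random.Random(seed)
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def distance(a, b):
    # Summary-statistic distance: absolute difference of sample means.
    return abs(sum(a) / len(a) - sum(b) / len(b))

def abc_rejection(observed, prior_draw, eps, n_accept):
    # Basic ABC rejection: keep prior draws whose simulated output
    # lands within eps of the observed data.
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

prior_rng = random.Random(0)
observed = simulate(2.0, seed=1)  # "experimental" data; true theta = 2
posterior = abc_rejection(observed, lambda: prior_rng.uniform(0.0, 5.0),
                          eps=0.1, n_accept=200)
estimate = sum(posterior) / len(posterior)
```

In the paper's setting, the forward model would be the numerical Fokker–Planck solution and the comparison would use measured cosmic ray intensities; the ABC Sequential Monte Carlo variant additionally tightens the tolerance eps over successive generations.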
Open Access Article The Maximum Entropy Method in Ultrasonic Non-Destructive Testing—Increasing the Resolution, Image Noise Reduction and Echo Acquisition Rate
Entropy 2018, 20(8), 621; https://doi.org/10.3390/e20080621
Received: 26 June 2018 / Revised: 27 July 2018 / Accepted: 17 August 2018 / Published: 20 August 2018
PDF Full-text (10074 KB) | HTML Full-text | XML Full-text
Abstract
The use of linear methods, for example, the Combined Synthetic Aperture Focusing Technique (C–SAFT), does not in all cases allow one to obtain images with high resolution and low noise, especially structural noise. Non-linear methods should improve the quality of the reconstructed image. Several examples of the application of the maximum entropy (ME) method to ultrasonic echo processing, in order to reconstruct the image of reflectors with Rayleigh super-resolution and a high signal-to-noise ratio, are considered in this article. The use of a complex phase-shifted Barker code signal as a probe pulse, and the compression of measured echoes by the ME method, made it possible to increase the signal-to-noise ratio by more than 20 dB for the image of a flat-bottom hole with a diameter of 1 mm in a model experiment. A modification of the ME method for restoring the reflector image by the time-of-flight diffraction (TOFD) method is considered, taking into account the change of the echo signal shape with the depth of the reflector. Using the ME method, 2.5D images of models of dangling cracks in a pipeline with a diameter of 800 mm were obtained, which make it possible to determine their dimensions. In an object with structural noise, the ME method made it possible to increase the signal-to-noise ratio of the reflector image by more than 12 dB. To accelerate the acquisition of echoes in the dual scan mode, it is proposed to use code division multiple access (CDMA) technology, based on the simultaneous emission of pseudo-orthogonal signals by all elements of the array. A model experiment showed the effectiveness of applying the ME method. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open Access Article A Hybrid Structure Learning Algorithm for Bayesian Network Using Experts’ Knowledge
Entropy 2018, 20(8), 620; https://doi.org/10.3390/e20080620
Received: 27 June 2018 / Revised: 16 August 2018 / Accepted: 18 August 2018 / Published: 20 August 2018
PDF Full-text (5107 KB) | HTML Full-text | XML Full-text
Abstract
Bayesian network structure learning from data has been proved to be an NP-hard (Non-deterministic Polynomial-hard) problem. An effective way of improving the accuracy of a learned Bayesian network structure is to use experts’ knowledge rather than data alone. Some experts’ knowledge (named here explicit knowledge) can make the causal relationships between nodes in a Bayesian Network (BN) structure clear, while the rest (named here vague knowledge) cannot. Previous algorithms for BN structure learning used only the explicit knowledge, but the vague knowledge, which was ignored, is also valuable and often exists in the real world. We therefore propose a new method of using more comprehensive experts’ knowledge, based on a hybrid (two-stage) structure learning algorithm. Two types of experts’ knowledge are defined and incorporated into the hybrid algorithm. We formulate rules to generate a better initial network structure and improve the scoring function. Furthermore, we take differences in expert level and conflicts of opinion into account. Experimental results show that our proposed method can improve structure learning performance. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open Access Feature Paper Article A Classical Interpretation of the Scrooge Distribution
Entropy 2018, 20(8), 619; https://doi.org/10.3390/e20080619
Received: 25 June 2018 / Revised: 13 August 2018 / Accepted: 15 August 2018 / Published: 20 August 2018
PDF Full-text (292 KB) | HTML Full-text | XML Full-text
Abstract
The Scrooge distribution is a probability distribution over the set of pure states of a quantum system. Specifically, it is the distribution that, upon measurement, gives up the least information about the identity of the pure state compared with all other distributions that have the same density matrix. The Scrooge distribution has normally been regarded as a purely quantum mechanical concept with no natural classical interpretation. In this paper, we offer a classical interpretation of the Scrooge distribution viewed as a probability distribution over the probability simplex. We begin by considering a real-amplitude version of the Scrooge distribution for which we find that there is a non-trivial but natural classical interpretation. The transition to the complex-amplitude case requires a step that is not particularly natural but that may shed light on the relation between quantum mechanics and classical probability theory. Full article
Open Access Article Information Theory in Formation Control: An Error Analysis to Multi-Robot Formation
Entropy 2018, 20(8), 618; https://doi.org/10.3390/e20080618
Received: 2 July 2018 / Revised: 7 August 2018 / Accepted: 17 August 2018 / Published: 20 August 2018
PDF Full-text (452 KB) | HTML Full-text | XML Full-text
Abstract
Multi-robot formation control establishes the prerequisites for a team of robots to execute complex tasks cooperatively, and has been widely applied in both civilian and military scenarios. However, the limited precision of sensors and controllers inevitably causes position errors in the finally achieved formation, which affect the tasks undertaken. In this paper, the formation error is analyzed from the viewpoint of information theory. The desired position and the actually achieved position are viewed as two random variables. By calculating the mutual information between them, a lower bound on the formation error is derived. The results provide insights into the possible formation errors in a multi-robot system, which can assist designers in choosing sensors and controllers with the proper precision. Full article
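The mutual-information lower bound mentioned in this abstract can be illustrated with a minimal numerical sketch. This is not the paper's derivation; it assumes a jointly Gaussian scalar position model, for which both the mutual information and the Shannon lower bound on mean squared error have closed forms:

```python
import math

# Desired position X ~ N(0, var_x); achieved position Y = X + noise,
# with independent noise ~ N(0, var_n). All values are illustrative.
var_x, var_n = 4.0, 1.0

# Mutual information between desired and achieved position, in nats
# (closed form for the jointly Gaussian case).
mi = 0.5 * math.log(1.0 + var_x / var_n)

# Shannon lower bound: conveying mi nats about a Gaussian source of
# variance var_x leaves a mean squared error of at least var_x * exp(-2*mi).
mse_bound = var_x * math.exp(-2.0 * mi)

actual_mse = var_n  # E[(Y - X)^2] for this additive-noise model
```

For these values the bound evaluates to 0.8, below the actual error of 1.0, consistent with its role as a lower bound on any achievable formation error at a given sensing precision.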
Open Access Review Entropy in Cell Biology: Information Thermodynamics of a Binary Code and Szilard Engine Chain Model of Signal Transduction
Entropy 2018, 20(8), 617; https://doi.org/10.3390/e20080617
Received: 24 June 2018 / Revised: 6 August 2018 / Accepted: 13 August 2018 / Published: 19 August 2018
PDF Full-text (1124 KB) | HTML Full-text | XML Full-text
Abstract
A model of signal transduction from the perspective of informational thermodynamics has been reported in recent studies, and several important achievements have been obtained. The first achievement is that signal transduction can be modelled as a binary code system, in which two forms of signalling molecules are utilised in individual steps. The second is that the average entropy production rate is consistent during the signal transduction cascade when the signal event number is maximised in the model. The third is that a Szilard engine can be a single-step model in the signal transduction. This article reviews these achievements and further introduces a new chain of Szilard engines as a biological reaction cascade (BRC) model. In conclusion, the presented model provides a way of computing the channel capacity of a BRC. Full article
(This article belongs to the Section Information Theory)
Open Access Article Dynamic Clustering and Coordinated User Scheduling for Cooperative Interference Cancellation on Ultra-High Density Distributed Antenna Systems
Entropy 2018, 20(8), 616; https://doi.org/10.3390/e20080616
Received: 31 July 2018 / Revised: 16 August 2018 / Accepted: 18 August 2018 / Published: 19 August 2018
PDF Full-text (1731 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes dynamic clustering and user scheduling for a previously conceived inter-cluster interference cancellation scheme on an ultra-high density distributed antenna system (UHD-DAS). A UHD-DAS is composed of one central unit (CU) and densely deployed remote radio units (RUs) serving as small cell access points. It can enhance spatial spectral efficiency by alleviating the traffic load imposed per radio unit; however, denser small cell deployment revives the inter-cell interference (ICI) problem. Cell clustering, i.e., cooperation among multiple RUs, can partially mitigate ICI, but inter-cluster interference (ICLI) still limits the achievable capacity. A simplified ICLI cancellation scheme based on localized RU cooperation was previously proposed to mitigate interference globally; its remaining issue is that it required a frequency reuse distance to fully realize its interference cancellation ability. This paper introduces dynamic clustering with coordinated user scheduling to ensure the reuse distance without extra frequency reuse. Joint dynamic clustering and ICLI cancellation works effectively and almost reaches the ideal performance of fully cooperative spatial multiplexing transmission. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Open Access Article Second Law Analysis of Dissipative Flow over a Riga Plate with Non-Linear Rosseland Thermal Radiation and Variable Transport Properties
Entropy 2018, 20(8), 615; https://doi.org/10.3390/e20080615
Received: 17 July 2018 / Revised: 7 August 2018 / Accepted: 15 August 2018 / Published: 18 August 2018
PDF Full-text (7393 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we investigate entropy generation and heat transfer in a viscous flow induced by a horizontally moving Riga plate in the presence of strong suction. The viscosity and thermal conductivity of the fluid are taken to be temperature dependent. The frictional heating function and non-linear radiation terms are also incorporated in the entropy generation and energy equations. The partial differential equations which model the flow are converted into dimensionless form by proper transformations. Further, the dimensionless equations are reduced by imposing the conditions of strong suction. Numerical solutions are obtained using the MATLAB boundary value solver bvp4c and are used to evaluate the entropy generation number. The influences of the physical flow parameters arising in the mathematical modeling are demonstrated through various graphs. The analysis reveals that the velocity decays, whereas entropy generation increases, with rising values of the variable viscosity parameter. Furthermore, entropy generation decays with an increasing variable thermal conductivity parameter. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
Open Access Article Study of Geo-Electric Data Collected by the Joint EMSEV-Bishkek RS-RAS Cooperation: Possible Earthquake Precursors
Entropy 2018, 20(8), 614; https://doi.org/10.3390/e20080614
Received: 3 July 2018 / Revised: 9 August 2018 / Accepted: 13 August 2018 / Published: 18 August 2018
PDF Full-text (2077 KB) | HTML Full-text | XML Full-text
Abstract
By employing the cross-correlogram method on geo-electric data from the area of Kyrgyzstan for the period 30 June 2014–10 June 2015, we identified Anomalous Telluric Currents (ATC). From a total of 32 ATC, after taking into consideration the electric current source properties, we found that three are possible Seismic Electric Signal (SES) activities. These three SES activities are likely to be linked with three local seismic events. Finally, by studying the corresponding recordings when a DC alternating source injects current into the Earth, we found that the subsurface resistivity seems to be reduced before one of these three earthquakes; a similar analysis for the other two cannot be done due to their large epicentral distances and the lack of data. Full article
Open Access Article Time-Dependent Probability Density Functions and Attractor Structure in Self-Organised Shear Flows
Entropy 2018, 20(8), 613; https://doi.org/10.3390/e20080613
Received: 30 July 2018 / Revised: 9 August 2018 / Accepted: 16 August 2018 / Published: 17 August 2018
PDF Full-text (916 KB) | HTML Full-text | XML Full-text
Abstract
We report the time-evolution of Probability Density Functions (PDFs) in a toy model of self-organised shear flows, where the formation of shear flows is induced by a finite memory time of a stochastic forcing, manifested by the emergence of a bimodal PDF with the two peaks representing non-zero mean values of a shear flow. Using theoretical analyses of limiting cases, as well as numerical solutions of the full Fokker–Planck equation, we present a thorough parameter study of PDFs for different values of the correlation time and amplitude of stochastic forcing. From time-dependent PDFs, we calculate the information length L, which is the total number of statistically different states that a system passes through in time, and utilise it to understand the information geometry associated with the formation of bimodal or unimodal PDFs. We identify the difference between the relaxation and build-up of the shear gradient in view of information change and discuss the total information length L = L(t), which maps out the underlying attractor structures, highlighting a unique property of L, which depends on the trajectory/history of a PDF’s evolution. Full article
Open Access Article Permutation Entropy Based on Non-Uniform Embedding
Entropy 2018, 20(8), 612; https://doi.org/10.3390/e20080612
Received: 11 July 2018 / Revised: 7 August 2018 / Accepted: 8 August 2018 / Published: 17 August 2018
PDF Full-text (1896 KB) | HTML Full-text | XML Full-text
Abstract
A novel visualization scheme for permutation entropy is presented in this paper. The proposed scheme is based on non-uniform attractor embedding of the investigated time series. A single digital image of permutation entropy is produced by averaging all possible plain projections of the permutation entropy measure in the multi-dimensional delay coordinate space. Computational experiments with artificially-generated and real-world time series are used to demonstrate the advantages of the proposed visualization scheme. Full article
(This article belongs to the Section Complexity)
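For readers unfamiliar with the underlying measure, a minimal sketch of standard (uniform-delay) permutation entropy follows. The paper's contribution, averaging over non-uniform attractor embeddings, is not reproduced here; the function below is an illustrative baseline only:

```python
import math
import random

def permutation_entropy(series, order=3, delay=1):
    # Normalised Bandt-Pompe permutation entropy of a 1-D series:
    # count the ordinal pattern of each embedded window, then scale the
    # Shannon entropy of the pattern distribution to [0, 1].
    counts = {}
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = tuple(series[i + j * delay] for j in range(order))
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

rng = random.Random(0)
monotone = list(range(200))                 # a single ordinal pattern
noise = [rng.random() for _ in range(200)]  # all patterns roughly equiprobable
```

A monotone series yields entropy 0 (one pattern), while an i.i.d. noise series approaches 1; the non-uniform scheme of the paper varies the delays across dimensions and averages the resulting projections into a single image.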
Open Access Article Multi-Fault Diagnosis of Gearbox Based on Improved Multipoint Optimal Minimum Entropy Deconvolution
Entropy 2018, 20(8), 611; https://doi.org/10.3390/e20080611
Received: 5 July 2018 / Revised: 26 July 2018 / Accepted: 9 August 2018 / Published: 17 August 2018
PDF Full-text (8768 KB) | HTML Full-text | XML Full-text
Abstract
Under complicated conditions, the extraction of multiple faults in gearboxes is difficult to achieve, and improper selection of methods usually leads to missed diagnoses or misdiagnoses. Ensemble Empirical Mode Decomposition (EEMD) often causes energy leakage due to improper selection of white noise during signal decomposition. Considering that only a single fault cycle can be extracted when MOMED (Multipoint Optimal Minimum Entropy Deconvolution) is used, it is necessary to perform sub-band processing of the compound fault signal. This paper presents an adaptive gearbox multi-fault-feature extraction method based on Improved MOMED (IMOMED). Firstly, EEMD decomposes the signal adaptively, and the intrinsic mode functions strongly correlated with the original signal are selected for FFT (Fast Fourier Transform) analysis; considering the mode-mixing phenomenon of EEMD, the intrinsic mode functions with the same timescale are reconstructed, yielding several intrinsic mode functions of the same scale and improving the entropy of the fault features. There is a lot of white noise in the original signal, and EEMD can improve its signal-to-noise ratio. Finally, fault features are extracted with MOMED by setting different noise-reduction intervals. The proposed method is compared with EEMD and VMD (Variational Mode Decomposition) to verify its feasibility. Full article
Open Access Article SU(2) Decomposition for the Quantum Information Dynamics in 2^d-Partite Two-Level Quantum Systems
Entropy 2018, 20(8), 610; https://doi.org/10.3390/e20080610
Received: 1 June 2018 / Revised: 31 July 2018 / Accepted: 2 August 2018 / Published: 17 August 2018
Cited by 1 | PDF Full-text (3102 KB) | HTML Full-text | XML Full-text
Abstract
The gate array version of quantum computation uses logical gates adopting convenient forms for computational algorithms based on the algorithms of classical computation. Two-level quantum systems are the basic elements connecting the binary nature of classical computation with the settlement of quantum processing. Despite this, their design depends on the specific quantum systems and physical interactions involved, thus complicating the dynamics analysis. Predictable and controllable manipulation should be addressed in order to control the quantum states in terms of the physical control parameters. Resources are restricted by the limitations imposed by the physical settlement. This work presents a formalism to decompose the quantum information dynamics in SU(2^{2d}) for 2^d-partite two-level systems into 2^{2d-1} SU(2) quantum subsystems. It generates an easier and more direct physical implementation of quantum processing developments for qubits. Easy and traditional operations proposed by quantum computation are recovered for larger and more complex systems. By alternating the parameters of local and non-local interactions, the procedure states a universal exchange semantics on the basis of generalized Bell states. Although the main procedure could still be settled on other interaction architectures by the proper selection of the basis as natural grammar, the procedure can be understood as a momentary splitting of the 2^d information channels into 2^{2d-1} pairs of two-level quantum information subsystems. Additionally, it is a settlement of quantum information manipulation that is free of the restrictions imposed by the underlying physical system. Thus, the motivation of the decomposition is to easily set control procedures in order to generate large entangled states and to design specialized dedicated quantum gates. These are potential applications that properly bypass the general induced superposition generated by the physical dynamics. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
Open Access Article Information Geometry of Randomized Quantum State Tomography
Entropy 2018, 20(8), 609; https://doi.org/10.3390/e20080609
Received: 29 June 2018 / Revised: 5 August 2018 / Accepted: 13 August 2018 / Published: 16 August 2018
PDF Full-text (559 KB) | HTML Full-text | XML Full-text
Abstract
Suppose that a d-dimensional Hilbert space H ≅ C^d admits a full set of mutually unbiased bases |1^(a)⟩, …, |d^(a)⟩, where a = 1, …, d+1. A randomized quantum state tomography is a scheme for estimating an unknown quantum state on H through iterative applications of measurements M^(a) = {|1^(a)⟩⟨1^(a)|, …, |d^(a)⟩⟨d^(a)|} for a = 1, …, d+1, where the numbers of applications of these measurements are random variables. We show that the space of the resulting probability distributions enjoys a mutually orthogonal dualistic foliation structure, which provides us with a simple geometrical insight into the maximum likelihood method for the quantum state tomography. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open Access Article New Estimations for Shannon and Zipf–Mandelbrot Entropies
Entropy 2018, 20(8), 608; https://doi.org/10.3390/e20080608
Received: 9 July 2018 / Revised: 8 August 2018 / Accepted: 14 August 2018 / Published: 16 August 2018
PDF Full-text (242 KB) | HTML Full-text | XML Full-text
Abstract
The main purpose of this paper is to find new estimations for the Shannon and Zipf–Mandelbrot entropies. We apply some refinements of the Jensen inequality to obtain different bounds for these entropies. Initially, we use a precise convex function in the refinement of the Jensen inequality and then adjust the weight and domain of the function to obtain general bounds for the Shannon entropy (SE). As particular cases of these general bounds, we derive some bounds for the Shannon entropy which are, in fact, applications of some other well-known refinements of the Jensen inequality. Finally, we derive different estimations for the Zipf–Mandelbrot entropy (ZME) by using the new bounds of the Shannon entropy for the Zipf–Mandelbrot law (ZML). We also discuss particular cases and the bounds related to two different parametrisations of the Zipf–Mandelbrot entropy. At the end of the paper, we give some applications in linguistics. Full article
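A small numerical sketch of the setting in this abstract: the code below computes the Shannon entropy of a Zipf–Mandelbrot law and checks it against the elementary Jensen-type bound H(p) ≤ log n. The parameter values are arbitrary, and the refined bounds derived in the paper are not reproduced:

```python
import math

def zipf_mandelbrot(n, q, s):
    # Zipf-Mandelbrot law: p_i proportional to 1/(i + q)^s for i = 1..n.
    weights = [1.0 / (i + q) ** s for i in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def shannon_entropy(p):
    # Shannon entropy in nats.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = zipf_mandelbrot(n=1000, q=1.5, s=1.1)  # arbitrary illustrative parameters
h = shannon_entropy(p)
upper = math.log(1000)  # Jensen-type bound: H(p) <= log n, tight iff uniform
```

Since the Zipf–Mandelbrot distribution is far from uniform, its entropy sits well below log n; the paper's refinements of the Jensen inequality aim at sharper, parameter-dependent bounds of this kind.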
Open Access Article Quantum Quantifiers for an Atom System Interacting with a Quantum Field Based on Pseudoharmonic Oscillator States
Entropy 2018, 20(8), 607; https://doi.org/10.3390/e20080607
Received: 9 June 2018 / Revised: 17 July 2018 / Accepted: 2 August 2018 / Published: 16 August 2018
PDF Full-text (523 KB) | HTML Full-text | XML Full-text
Abstract
We develop a useful model considering an atom-field system interaction in the framework of pseudoharmonic oscillators. We examine qualitatively the different physical quantities for a two-level atom (TLA) system interacting with a quantized coherent field in the context of photon-added coherent states of pseudoharmonic oscillators. Using these coherent states, we solve the model that exhibits the interaction between the TLA and field associated with these kinds of potentials. We analyze the temporal evolution of the entanglement, statistical properties, geometric phase and squeezing entropies. Finally, we show the relationship between the physical quantities and their dynamics in terms of the physical parameters. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)
Open Access Article Non-Local Parity Measurements and the Quantum Pigeonhole Effect
Entropy 2018, 20(8), 606; https://doi.org/10.3390/e20080606
Received: 5 June 2018 / Revised: 6 August 2018 / Accepted: 8 August 2018 / Published: 16 August 2018
PDF Full-text (211 KB) | HTML Full-text | XML Full-text
Abstract
The pigeonhole principle upholds the idea that by ascribing to three different particles either one of two properties, we necessarily end up in a situation when at least two of the particles have the same property. In quantum physics, this principle is violated in experiments involving postselection of the particles in appropriately-chosen states. Here, we give two explicit constructions using standard gates and measurements that illustrate this fact. Intriguingly, the procedures described are manifestly non-local, which demonstrates that the correlations needed to observe the violation of this principle can be created without direct interactions between particles. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
Open Access Article A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity
Entropy 2018, 20(8), 605; https://doi.org/10.3390/e20080605
Received: 28 April 2018 / Revised: 18 June 2018 / Accepted: 31 July 2018 / Published: 15 August 2018
PDF Full-text (2053 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin’s theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects beyond strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages (Wolfram Language/Mathematica, Matlab, R, Perl, Python, Pascal, C++, and Haskell), together with an online algorithmic complexity calculator. Full article
(This article belongs to the Section Information Theory)
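The decomposition strategy can be sketched in a few lines: partition the object into blocks, look up each unique block's CTM value, and add a logarithmic term for its multiplicity. The CTM values below are invented placeholder numbers for illustration only; real CTM tables are produced by exhaustive enumeration of small Turing machines.

```python
from collections import Counter
from math import log2

# Hypothetical CTM values for 2-bit blocks (illustrative placeholders,
# not real CTM estimates).
CTM = {"00": 2.0, "01": 2.5, "10": 2.5, "11": 2.0}

def bdm(string, block_size=2):
    """Block Decomposition Method sketch: sum, over the unique blocks of
    the string, the CTM complexity of the block plus log2 of its
    multiplicity. A trailing remainder shorter than block_size is
    dropped in this sketch."""
    blocks = [string[i:i + block_size]
              for i in range(0, len(string) - block_size + 1, block_size)]
    counts = Counter(blocks)
    return sum(CTM[b] + log2(n) for b, n in counts.items())
```

With these placeholder values, the repetitive string `"00000000"` scores CTM["00"] + log2(4) = 4.0, while the more varied `"00011011"` scores 9.0, illustrating how repetition is charged only logarithmically.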
Open AccessArticle Analog Circuit Fault Diagnosis via Joint Cross-Wavelet Singular Entropy and Parametric t-SNE
Entropy 2018, 20(8), 604; https://doi.org/10.3390/e20080604
Received: 31 May 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 14 August 2018
PDF Full-text (890 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel method combining a cross-wavelet singular entropy (XWSE)-based feature extractor and a support vector machine (SVM) is proposed for analog circuit fault diagnosis. First, the cross-wavelet transform (XWT), which possesses a good capability to restrain environmental noise, is applied to transform the fault signal into time-frequency spectra (TFS). Then, a simple segmentation method is utilized to decompose the TFS into several blocks. We employ singular value decomposition (SVD) to analyze the blocks; the Tsallis entropy of each block is then obtained to construct the original features. Subsequently, the features are imported into parametric t-distributed stochastic neighbor embedding (t-SNE) for dimension reduction to yield discriminative and concise fault characteristics. Finally, the fault characteristics are entered into the SVM classifier to locate circuit defects, where the free parameters of the SVM are determined by quantum-behaved particle swarm optimization (QPSO). Simulation results show that the proposed approach achieves superior diagnostic performance compared with other existing methods. Full article
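As a minimal sketch of the feature-construction step, the function below computes the Tsallis entropy of the normalized singular-value spectrum of one time-frequency block. The entropic index q and the preceding XWT/segmentation steps are assumptions here, not specified by the abstract.

```python
import numpy as np

def tsallis_singular_entropy(block, q=2.0):
    """Tsallis entropy of the normalized singular-value spectrum of a
    time-frequency block: (1 - sum_i p_i^q) / (q - 1), where p_i are the
    singular values normalized to sum to one."""
    s = np.linalg.svd(np.asarray(block, float), compute_uv=False)
    p = s / s.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))
```

A rank-1 block (all energy in one singular direction) has entropy zero, while a noisy block with a spread-out spectrum scores higher, which is why the quantity serves as a compact fault feature.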
Open AccessArticle Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
Entropy 2018, 20(8), 603; https://doi.org/10.3390/e20080603
Received: 26 June 2018 / Revised: 8 August 2018 / Accepted: 11 August 2018 / Published: 13 August 2018
PDF Full-text (543 KB) | HTML Full-text | XML Full-text
Abstract
We illustrate how computer-aided methods can be used to investigate the fundamental limits of caching systems, which are significantly different from the conventional analytical approach usually seen in the information theory literature. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed by strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us firstly to deduce generic characteristics of the converse proof, and secondly to compute outer bounds for larger problem cases, despite the seemingly impossible computation scale. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
Open AccessArticle Time-Shift Multiscale Fuzzy Entropy and Laplacian Support Vector Machine Based Rolling Bearing Fault Diagnosis
Entropy 2018, 20(8), 602; https://doi.org/10.3390/e20080602
Received: 24 July 2018 / Revised: 7 August 2018 / Accepted: 9 August 2018 / Published: 13 August 2018
PDF Full-text (2428 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE), as a complexity measure for time series, has been widely used to extract fault information hidden in machinery vibration signals. However, the insufficient coarse graining in MSE results in the loss of fault pattern information, and the sample entropy used in MSE fluctuates heavily at larger scale factors. Combining fractal theory and fuzzy entropy, the time-shift multiscale fuzzy entropy (TSMFE) is put forward and applied to the complexity analysis of time series to enhance the performance of MSE. TSMFE is then used to extract nonlinear fault features from vibration signals of rolling bearings. By combining TSMFE with the Laplacian support vector machine (LapSVM), which needs only very few labeled samples for classification training, a new intelligent fault diagnosis method for rolling bearings is proposed. The proposed method is applied to the analysis of experimental rolling-bearing data and compared with existing methods; the results show that it can effectively identify different states of the rolling bearing and achieves the highest recognition rate among the compared methods. Full article
(This article belongs to the Section Complexity)
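A compact sketch of the time-shift idea: instead of coarse graining, the signal is split at scale τ into the τ time-shifted subsequences x[k::τ], and the fuzzy entropy of each is averaged. The fuzzy-entropy variant below (Chebyshev distance, Gaussian-like membership with exponent 2, r = 0.2·std) is one common parameterization and an assumption here; the paper's exact choices may differ.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    """FuzzyEn: negative log ratio of average fuzzy similarity degrees at
    embedding dimensions m+1 and m, with baseline-removed templates."""
    x = np.asarray(x, float)
    r = r * x.std()
    def phi(dim):
        n = len(x) - dim
        X = np.array([x[i:i + dim] for i in range(n)])
        X = X - X.mean(axis=1, keepdims=True)     # remove template baseline
        d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)  # Chebyshev
        D = np.exp(-(d ** 2) / r)                 # fuzzy membership degree
        np.fill_diagonal(D, 0.0)                  # exclude self-matches
        return D.sum() / (n * (n - 1))
    return -np.log(phi(m + 1) / phi(m))

def tsmfe(x, scale, m=2, r=0.2):
    """Time-shift multiscale fuzzy entropy (sketch): average FuzzyEn over
    the `scale` time-shifted subsequences x[k::scale]."""
    return float(np.mean([fuzzy_entropy(x[k::scale], m, r)
                          for k in range(scale)]))
```

Because every sample of the signal appears in exactly one shifted subsequence, no fault-pattern information is discarded at larger scales, in contrast to plain coarse graining.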
Open AccessArticle A Maximum-Entropy Method to Estimate Discrete Distributions from Samples Ensuring Nonzero Probabilities
Entropy 2018, 20(8), 601; https://doi.org/10.3390/e20080601
Received: 18 July 2018 / Revised: 9 August 2018 / Accepted: 13 August 2018 / Published: 13 August 2018
PDF Full-text (1471 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
When constructing discrete (binned) distributions from samples of a data set, applications exist where it is desirable to ensure that all bins of the sample distribution have nonzero probability. For example, the sample distribution may be part of a predictive model that must return a response over the entire codomain, or the Kullback–Leibler divergence may be used to measure the (dis-)agreement between the sample distribution and the original distribution of the variable, which in the described case is inconveniently infinite. Several sample-based distribution estimators exist which assure nonzero bin probability, such as adding one counter to each zero-probability bin of the sample histogram, adding a small probability to the sample pdf, smoothing methods such as kernel-density smoothing, or Bayesian approaches based on the Dirichlet and multinomial distributions. Here, we suggest and test an approach based on the Clopper–Pearson method, which makes use of the binomial distribution. Based on the sample distribution, confidence intervals for bin-occupation probability are calculated. The mean of each confidence interval is a strictly positive estimator of the true bin-occupation probability and is convergent with increasing sample size. For small samples, it converges towards a uniform distribution, i.e., the method effectively applies a maximum-entropy approach. We apply this nonzero method and four alternative sample-based distribution estimators to a range of typical distributions (uniform, Dirac, normal, multimodal, and irregular) and measure the effect with the Kullback–Leibler divergence. While the performance of each method strongly depends on the distribution type it is applied to, on average, and especially for small sample sizes, the nonzero method, the simple “add one counter” method, and the Bayesian Dirichlet-multinomial model show very similar behavior and perform best. We conclude that, when estimating distributions without an a priori idea of their shape, applying one of these methods is favorable. Full article
(This article belongs to the Section Information Theory)
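The paper's own Clopper–Pearson estimator requires a beta-distribution quantile, so the self-contained sketch below instead implements two of the baseline estimators named in the abstract, plus the KL divergence whose blow-up on empty bins motivates all of them. Function names and the uniform-prior choice are ours, for illustration.

```python
import numpy as np

def add_one_zero_bins(counts):
    """'Add one counter' estimator: place one pseudo-count in each empty
    bin of the sample histogram, then normalize."""
    c = np.asarray(counts, float)
    c = np.where(c == 0, 1.0, c)
    return c / c.sum()

def dirichlet_multinomial(counts):
    """Posterior-mean estimator under a uniform Dirichlet prior:
    (k_i + 1) / (n + K). Strictly positive for every bin."""
    c = np.asarray(counts, float)
    return (c + 1.0) / (c.sum() + len(c))

def kl(p, q):
    """Kullback-Leibler divergence D(p || q); infinite whenever q has a
    zero bin where p does not -- the motivation for nonzero estimators."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

For counts [5, 0, 3], the naive histogram [5/8, 0, 3/8] has infinite KL divergence from any distribution with mass in the middle bin, while both estimators above return strictly positive, normalized probabilities with finite divergence.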
Open AccessArticle Identity Vector Extraction by Perceptual Wavelet Packet Entropy and Convolutional Neural Network for Voice Authentication
Entropy 2018, 20(8), 600; https://doi.org/10.3390/e20080600
Received: 25 June 2018 / Revised: 2 August 2018 / Accepted: 9 August 2018 / Published: 13 August 2018
PDF Full-text (2795 KB) | HTML Full-text | XML Full-text
Abstract
Recently, the accuracy of voice authentication systems has increased significantly due to the successful application of the identity vector (i-vector) model. This paper proposes a new method for i-vector extraction. In the method, a perceptual wavelet packet transform (PWPT) is designed to convert speech utterances into wavelet entropy feature vectors, and a Convolutional Neural Network (CNN) is designed to estimate the frame posteriors of the wavelet entropy feature vectors. Finally, the i-vector is extracted based on those frame posteriors. The TIMIT and VoxCeleb speech corpora are used for experiments, and the results show that the proposed method extracts appropriate i-vectors that reduce the equal error rate (EER) and improve the accuracy of voice authentication systems in clean and noisy environments. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Intrinsic Computation of a Monod-Wyman-Changeux Molecule
Entropy 2018, 20(8), 599; https://doi.org/10.3390/e20080599
Received: 12 July 2018 / Revised: 6 August 2018 / Accepted: 10 August 2018 / Published: 11 August 2018
PDF Full-text (1099 KB) | HTML Full-text | XML Full-text
Abstract
Causal states are minimal sufficient statistics of prediction of a stochastic process, their coding cost is called statistical complexity, and the implied causal structure yields a sense of the process’ “intrinsic computation”. We discuss how statistical complexity changes with slight changes to the underlying model, in this case a biologically-motivated dynamical model of a Monod-Wyman-Changeux molecule. Perturbations to kinetic rates cause statistical complexity to jump from finite to infinite. The same is not true for excess entropy, the mutual information between past and future, or for the molecule’s transfer function. We discuss the implications of this for the relationship between intrinsic and functional computation of biological sensory systems. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
Open AccessEditorial Entropy Applications in Environmental and Water Engineering
Entropy 2018, 20(8), 598; https://doi.org/10.3390/e20080598
Received: 9 August 2018 / Accepted: 9 August 2018 / Published: 10 August 2018
PDF Full-text (174 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open AccessEditorial Work Availability and Exergy Analysis
Entropy 2018, 20(8), 597; https://doi.org/10.3390/e20080597
Received: 6 August 2018 / Accepted: 6 August 2018 / Published: 10 August 2018
PDF Full-text (181 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Gradient and GENERIC Systems in the Space of Fluxes, Applied to Reacting Particle Systems
Entropy 2018, 20(8), 596; https://doi.org/10.3390/e20080596
Received: 2 July 2018 / Revised: 2 August 2018 / Accepted: 8 August 2018 / Published: 9 August 2018
PDF Full-text (442 KB) | HTML Full-text | XML Full-text
Abstract
In a previous work we devised a framework to derive generalised gradient systems for an evolution equation from the large deviations of an underlying microscopic system, in the spirit of the Onsager–Machlup relations. Of particular interest is the case where the microscopic system consists of random particles, and the macroscopic quantity is the empirical measure or concentration. In this work we take the particle flux as the macroscopic quantity, which is related to the concentration via a continuity equation. By a similar argument the large deviations can induce a generalised gradient or GENERIC system in the space of fluxes. In a general setting we study how flux gradient or GENERIC systems are related to gradient systems of concentrations. This shows that many gradient or GENERIC systems arise from an underlying gradient or GENERIC system where fluxes rather than densities are being driven by (free) energies. The arguments are explained by the example of reacting particle systems, which is later expanded to include spatial diffusion as well. Full article
Open AccessArticle Thermodynamic Analysis of Irreversible Desiccant Systems
Entropy 2018, 20(8), 595; https://doi.org/10.3390/e20080595
Received: 8 June 2018 / Revised: 2 August 2018 / Accepted: 6 August 2018 / Published: 9 August 2018
PDF Full-text (1900 KB) | HTML Full-text | XML Full-text
Abstract
A new general thermodynamic mapping of desiccant systems’ performance is conducted to estimate the potential of the technology and determine its proper application field. This targets certain room conditions and given outdoor temperature and humidity prior to the selection of the specific desiccant material and technical details of the system configuration. This allows the choice of the operative state of the system to be independent of the limitations of a specific design and working fluid. An expression of the entropy balance suitable for describing the operability of a desiccant system at steady state is obtained by applying a control volume approach, defining sensible and latent effectiveness parameters, and assuming ideal gas behaviour of the air-vapour mixture. This formulation, together with mass and energy balances, is used to conduct a general screening of the system performance. The theoretical advantages and limitations of desiccant dehumidification air conditioning, the maximum efficiency for given condition constraints, the least irreversible configuration for a given operative target, and the characteristics of the system for a target efficiency can be obtained from this thermodynamic mapping. Once the thermo-physical properties and the thermodynamic equilibrium relationship of the liquid desiccant mixture or solid coating material are known, this method can be applied to a specific technical case to select the most appropriate working medium and guide the specific system design to achieve the target performance. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open AccessArticle Bayesian Optimization Based on K-Optimality
Entropy 2018, 20(8), 594; https://doi.org/10.3390/e20080594
Received: 11 July 2018 / Revised: 4 August 2018 / Accepted: 8 August 2018 / Published: 9 August 2018
PDF Full-text (711 KB) | HTML Full-text | XML Full-text
Abstract
Bayesian optimization (BO) based on the Gaussian process (GP) surrogate model has attracted extensive attention in the fields of optimization and design of experiments (DoE). It usually faces two problems: unstable GP prediction due to the ill-conditioned Gram matrix of the kernel, and the difficulty of determining the trade-off parameter between exploitation and exploration. To solve these problems, we investigate K-optimality, which aims at minimizing the condition number. First, the Sequentially Bayesian K-optimal design (SBKO) is proposed to ensure the stability of the GP prediction, where the K-optimality is given as the acquisition function. We show that the SBKO reduces the integrated posterior variance and maximizes the hyper-parameters’ information gain simultaneously. Second, a K-optimal enhanced Bayesian Optimization (KO-BO) approach is given for optimization problems, where the K-optimality is used to define the trade-off balance parameters, which can be output automatically. Specifically, we focus our study on the K-optimal enhanced Expected Improvement algorithm (KO-EI). Numerical examples show that the SBKO generally outperforms the Monte Carlo, Latin hypercube sampling, and sequential DoE approaches, reducing the posterior variance with the highest precision of prediction. Furthermore, the study of the optimization problem shows that the KO-EI method beats the classical EI method due to its higher convergence rate and smaller variance. Full article
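The K-optimality acquisition can be illustrated with a toy sequential design step: among candidate points, pick the one whose addition minimizes the condition number of the kernel Gram matrix. The RBF kernel, its length-scale, and the brute-force candidate loop below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_gram(X, length=1.0):
    """Gram matrix of the squared-exponential (RBF) kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def sbko_next(X, candidates):
    """One step of a K-optimal sequential design (sketch): choose the
    candidate whose addition minimizes the Gram-matrix condition number."""
    def cond_with(x):
        return np.linalg.cond(rbf_gram(np.vstack([X, x])))
    return min(candidates, key=cond_with)
```

Starting from two nearly coincident design points, the rule prefers a distant candidate over yet another nearby one, since clustered points make the Gram matrix nearly singular; this is exactly the stabilizing behavior the K-optimality criterion targets.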
Open AccessArticle The Duality of Entropy/Extropy, and Completion of the Kullback Information Complex
Entropy 2018, 20(8), 593; https://doi.org/10.3390/e20080593
Received: 3 July 2018 / Revised: 26 July 2018 / Accepted: 6 August 2018 / Published: 9 August 2018
PDF Full-text (1095 KB) | HTML Full-text | XML Full-text
Abstract
The refinement axiom for entropy has been provocative in providing foundations of information theory, recognised as thoughtworthy in the writings of both Shannon and Jaynes. A resolution to their concerns has been provided recently by the discovery that the entropy measure of a probability distribution has a dual measure, a complementary companion designated as “extropy”. We report here the main results that identify this fact, specifying the dual equations and exhibiting some of their structure. The duality extends beyond a simple assessment of entropy, to the formulation of relative entropy and the Kullback symmetric distance between two forecasting distributions. This is defined by the sum of a pair of directed divergences. Examining the defining equation, we notice that this symmetric measure can be generated by two other explicable pairs of functions as well, neither of which is a Bregman divergence. The Kullback information complex is constituted by the symmetric measure of entropy/extropy along with one of each of these three function pairs. It is intimately related to the total logarithmic score of two distinct forecasting distributions for a quantity under consideration, this being a complete proper score. The information complex is isomorphic to the expectations that the two forecasting distributions assess for their achieved scores, each for its own score and for the score achieved by the other. Analysis of the scoring problem exposes a Pareto optimal exchange of the forecasters’ scores that both are willing to engage in. Both would support its evaluation for assessing the relative quality of the information they provide regarding the observation of an unknown quantity of interest. We present our results without proofs, as these appear in source articles that are referenced. The focus here is on their content, unhindered. The mathematical syntax of probability we employ relies upon the operational subjective constructions of Bruno de Finetti. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
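The dual pair itself is easy to state concretely. A minimal sketch, using the definition of extropy from the extropy literature (assumed here, as the abstract does not spell it out):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i (natural log)."""
    p = np.asarray(p, float)
    return float(-np.sum(p * np.log(p)))

def extropy(p):
    """Dual (complementary) measure J(p) = -sum_i (1 - p_i) log(1 - p_i),
    the extropy of a discrete distribution."""
    p = np.asarray(p, float)
    return float(-np.sum((1 - p) * np.log(1 - p)))
```

Two consequences of the definitions that make the duality tangible: for a binary distribution, entropy and extropy coincide; and for any distribution, H(p) + J(p) equals the sum of the binary entropies of the individual p_i, since each term pairs -p log p with -(1-p) log(1-p).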