Table of Contents

Entropy, Volume 20, Issue 12 (December 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Cover Story: Ion-exchange membranes are applied in reverse electrodialysis, a renewable energy technology that [...]
Open Access Reply: Reply to the Comments on: Tian Zhao et al. The Principle of Least Action for Reversible Thermodynamic Processes and Cycles. Entropy 2018, 20, 542
Entropy 2018, 20(12), 986; https://doi.org/10.3390/e20120986
Received: 13 December 2018 / Accepted: 14 December 2018 / Published: 18 December 2018
PDF Full-text (162 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this reply is to provide a discussion of, and closure for, the comment paper by Dr. Bormashenko on the present authors' article, which discussed the application of the principle of least action to reversible thermodynamic processes and cycles. Dr. Bormashenko's questions and misunderstandings are addressed, and the differences between the present authors' work and Lucia's are also presented. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article: A Simple Thermodynamic Model of the Internal Convective Zone of the Earth
Entropy 2018, 20(12), 985; https://doi.org/10.3390/e20120985
Received: 29 November 2018 / Revised: 8 December 2018 / Accepted: 14 December 2018 / Published: 18 December 2018
PDF Full-text (465 KB) | HTML Full-text | XML Full-text
Abstract
As is well known, both atmospheric and mantle convection are very complex phenomena. The dynamical description of these processes is a very difficult task involving complicated 2-D or 3-D mathematical models. However, a first approximation to these phenomena can be made by means of simplified thermodynamic models in which the restrictions imposed by the laws of thermodynamics play an important role. An example of this approach is the model proposed by Gordon and Zarmi in 1989 to emulate the convective cells of atmospheric air by using finite-time thermodynamics (FTT). In the present article we use the FTT Gordon-Zarmi model to coarsely describe convection in the Earth's mantle. Our results permit the existence of two layers of convective cells along the mantle. Besides, the model reasonably reproduces the temperatures of the main discontinuities in the mantle, such as the 410 km discontinuity, the Repetti transition zone and the so-called D-Layer. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
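For orientation, the Gordon-Zarmi approach treats the convecting layer as an endoreversible heat engine operating at maximum power output; the hallmark finite-time-thermodynamics result for such engines, quoted here as general background rather than from the paper, is the Curzon-Ahlborn efficiency:

```latex
% Curzon-Ahlborn efficiency of an endoreversible engine at maximum
% power, compared with the reversible Carnot bound
% (T_c: cold reservoir, T_h: hot reservoir):
\eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}}
\qquad\text{vs.}\qquad
\eta_{Carnot} = 1 - \frac{T_c}{T_h}.
```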

Open Access Article: A Comprehensive Evaluation of Graph Kernels for Unattributed Graphs
Entropy 2018, 20(12), 984; https://doi.org/10.3390/e20120984
Received: 25 September 2018 / Revised: 16 December 2018 / Accepted: 16 December 2018 / Published: 18 December 2018
PDF Full-text (1885 KB) | HTML Full-text | XML Full-text
Abstract
Graph kernels are of vital importance in the field of graph comparison and classification. However, how to compare and evaluate graph kernels, and how to choose an optimal kernel for a practical classification problem, remain open problems. In this paper, a comprehensive evaluation framework of graph kernels is proposed for unattributed graph classification. According to the kernel design methods, the whole graph kernel family can be categorized along five different dimensions, and several representative graph kernels are chosen from these categories to perform the evaluation. Using a large collection of real-world and synthetic datasets, kernels are compared by criteria such as classification accuracy, F1 score, runtime cost, scalability and applicability. Finally, quantitative conclusions are discussed based on the analyses of the extensive experimental results. The main contribution of this paper is a comprehensive evaluation framework of graph kernels, which is significant for graph-classification applications and future kernel research. Full article
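To give a flavor of what a graph kernel computes, here is a minimal histogram-style kernel on unattributed graphs; it is a toy illustration under assumed settings (random test graphs, an arbitrary bin count), not one of the kernels evaluated in the paper:

```python
import numpy as np
import networkx as nx

def degree_hist(g, bins=10):
    """Normalized histogram of node degrees: a crude feature map
    for unattributed graphs."""
    degrees = [d for _, d in g.degree()]
    h, _ = np.histogram(degrees, bins=bins, range=(0, bins))
    return h / h.sum()

def toy_kernel(g1, g2, bins=10):
    """Inner product of degree histograms: a vertex-histogram-style
    kernel; positive semidefinite because it is a plain dot product."""
    return float(degree_hist(g1, bins) @ degree_hist(g2, bins))

g1 = nx.erdos_renyi_graph(50, 0.1, seed=1)
g2 = nx.barabasi_albert_graph(50, 2, seed=1)
print(toy_kernel(g1, g1), toy_kernel(g1, g2))
```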

Open Access Article: Approximation to Hadamard Derivative via the Finite Part Integral
Entropy 2018, 20(12), 983; https://doi.org/10.3390/e20120983
Received: 8 October 2018 / Revised: 7 December 2018 / Accepted: 14 December 2018 / Published: 18 December 2018
PDF Full-text (862 KB) | HTML Full-text | XML Full-text
Abstract
In 1923, Hadamard encountered a class of integrals with strong singularities when using a particular Green's function to solve the cylindrical wave equation. He ignored the infinite parts of such integrals after integrating by parts. Such an idea is very practical and useful in many physical models, e.g., crack problems in both planar and three-dimensional elasticity. In this paper, we present rectangular and trapezoidal formulas to approximate the Hadamard derivative by the idea of the finite part integral. Then, we apply the proposed numerical methods to a differential equation with the Hadamard derivative. Finally, several numerical examples are presented to show the effectiveness of the basic idea and technique. Full article
(This article belongs to the Special Issue The Fractional View of Complexity)
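For reference, the standard definitions involved, as found in the general literature rather than reproduced from the paper:

```latex
% Hadamard finite part: keep only the finite term of a divergent
% integral. Since \int_\epsilon^1 t^{-3/2}\,dt = 2\epsilon^{-1/2} - 2,
\mathrm{f.p.}\!\int_0^1 t^{-3/2}\,dt = -2.
% Hadamard fractional derivative of order 0 < \alpha < 1:
\left({}^{H}\!D^{\alpha}_{a^+}f\right)(x)
  = \frac{1}{\Gamma(1-\alpha)}\, x\,\frac{d}{dx}
    \int_a^x \left(\ln\frac{x}{t}\right)^{-\alpha} f(t)\,\frac{dt}{t}.
```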

Open Access Article: AD or Non-AD: A Deep Learning Approach to Detect Advertisements from Magazines
Entropy 2018, 20(12), 982; https://doi.org/10.3390/e20120982
Received: 4 December 2018 / Revised: 13 December 2018 / Accepted: 16 December 2018 / Published: 17 December 2018
PDF Full-text (1001 KB)
Abstract
The processing and analysis of multimedia data has become a popular research topic due to the evolution of deep learning. Deep learning has played an important role in addressing many challenging problems, such as computer vision, image recognition, and image detection, which can be useful in many real-world applications. In this study, we analyzed visual features of images to detect advertising images from scanned images of various magazines. The aim is to identify key features of advertising images and to apply them to real-world applications. The proposed work will eventually help improve marketing strategies, which require the classification of advertising images from magazines. We employed convolutional neural networks to classify scanned images as either advertisements or non-advertisements (i.e., articles). The results show that the proposed approach outperforms other classifiers and the related work in terms of accuracy. Full article
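A minimal sketch of a binary CNN page classifier in the spirit described; the input size, layer widths and training call are placeholder assumptions, not the authors' architecture:

```python
import tensorflow as tf

layers = tf.keras.layers
# Minimal binary page classifier: 1 = advertisement, 0 = article.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),   # scanned page, grayscale
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # hypothetical data
```
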
Open Access Article: An Entropy-Based Knowledge Measure for Atanassov's Intuitionistic Fuzzy Sets and Its Application to Multiple Attribute Decision Making
Entropy 2018, 20(12), 981; https://doi.org/10.3390/e20120981
Received: 22 November 2018 / Revised: 10 December 2018 / Accepted: 16 December 2018 / Published: 17 December 2018
PDF Full-text (443 KB)
Abstract
As the complementary concept of intuitionistic fuzzy entropy, the knowledge measure of Atanassov's intuitionistic fuzzy sets (AIFSs) has attracted increasing attention and is still an open topic. The amount of knowledge is important for evaluating intuitionistic fuzzy information. An entropy-based knowledge measure for AIFSs is defined in this paper to quantify the knowledge amount conveyed by AIFSs. An intuitive analysis of the properties of the knowledge amount in AIFSs is put forward to facilitate the introduction of an axiomatic definition of the knowledge measure. Then we propose a new knowledge measure based on the entropy-based divergence measure, with respect to the difference between the membership degree, the non-membership degree, and the hesitancy degree. The properties of the new knowledge measure are investigated from a mathematical viewpoint. Several examples are given to illustrate the performance of the new knowledge measure. Comparison with several existing entropy and knowledge measures indicates that the proposed knowledge measure has a greater ability to discriminate different AIFSs and is robust in quantifying the knowledge amount of different AIFSs. Lastly, the new knowledge measure is applied to the problem of multiple attribute decision making (MADM) in an intuitionistic fuzzy environment. Two models are presented to determine attribute weights in the cases where information on attribute weights is partially known or completely unknown. After obtaining attribute weights, we develop a new method to solve intuitionistic fuzzy MADM problems. An example is employed to show the effectiveness of the new MADM method. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
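For readers unfamiliar with the setting, the standard Atanassov definitions the abstract relies on (general background, not the paper's new measure):

```latex
% An AIFS A on a universe X assigns a membership degree \mu_A and a
% non-membership degree \nu_A to each element:
A = \{\langle x, \mu_A(x), \nu_A(x)\rangle : x \in X\},
\qquad 0 \le \mu_A(x) + \nu_A(x) \le 1,
% leaving the hesitancy degree
\pi_A(x) = 1 - \mu_A(x) - \nu_A(x).
```
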
Open Access Comment: Comments on "The Principle of Least Action for Reversible Thermodynamic Processes and Cycles", Entropy 2018, 20, 542
Entropy 2018, 20(12), 980; https://doi.org/10.3390/e20120980
Received: 12 September 2018 / Revised: 15 October 2018 / Accepted: 20 November 2018 / Published: 17 December 2018
PDF Full-text (917 KB) | HTML Full-text | XML Full-text
Abstract
The goal of this comment note is to express my concerns about the recent paper by Tian Zhao et al. (Entropy 2018, 20, 542). It is foreseen that this comment will stimulate a fruitful discussion of the issues involved. The principle of least thermodynamic action is applicable to the analysis of the Carnot cycle using the entropy (not heat) generation extrema theorem. The transversality conditions of the variational problem provide the rectangular shape of the S-T diagram for the Carnot cycle. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article: Heat Transfer Performance of a Novel Multi-Baffle-Type Heat Sink
Entropy 2018, 20(12), 979; https://doi.org/10.3390/e20120979
Received: 14 November 2018 / Revised: 8 December 2018 / Accepted: 13 December 2018 / Published: 17 December 2018
PDF Full-text (7162 KB) | HTML Full-text | XML Full-text
Abstract
A new type of multi-baffle-type heat sink is proposed in this paper. The heat-transfer coefficient and pressure drop penalty of the six heat sink models employed are numerically investigated under five different inlet velocities. It is shown that Model 6 (M6) has excellent heat transfer performance, as its heat-transfer coefficient reaches a value of 1758.59 W/m²K with a pressure drop of 2.96 × 10⁴ Pa, and the difference between the maximum and minimum temperature of the heating surface is 51.7 K. The results showed that the coolant for M6 is distributed evenly to each channel to the maximal degree, and the maldistribution of temperature is effectively reduced. Moreover, the thermal resistance and thermal enhancement factor for the six models are also examined. M6 possesses the lowest total thermal resistance and the largest thermal enhancement factor compared to the other five models. Furthermore, an experimental platform is set up to verify the simulation results obtained for M6. The simulated heat-transfer coefficient and pressure drop values agree well with the experimental results. Full article
(This article belongs to the Section Thermodynamics)

Open Access Review: The Price Equation Program: Simple Invariances Unify Population Dynamics, Thermodynamics, Probability, Information and Inference
Entropy 2018, 20(12), 978; https://doi.org/10.3390/e20120978
Received: 22 October 2018 / Revised: 26 November 2018 / Accepted: 14 December 2018 / Published: 16 December 2018
PDF Full-text (429 KB)
Abstract
The fundamental equations of various disciplines often seem to share the same basic structure. Natural selection increases information in the same way that Bayesian updating increases information. Thermodynamics and the forms of common probability distributions express maximum increase in entropy, which appears mathematically as loss of information. Physical mechanics follows paths of change that maximize Fisher information. The information expressions typically have analogous interpretations as the Newtonian balance between force and acceleration, representing a partition between the direct causes of change and the opposing changes in the frame of reference. This web of vague analogies hints at a deeper common mathematical structure. I suggest that the Price equation expresses that underlying universal structure. The abstract Price equation describes dynamics as the change between two sets. One component of dynamics expresses the change in the frequency of things, holding constant the values associated with things. The other component of dynamics expresses the change in the values of things, holding constant the frequency of things. The separation of frequency from value generalizes Shannon’s separation of the frequency of symbols from the meaning of symbols in information theory. The Price equation’s generalized separation of frequency and value reveals a few simple invariances that define universal geometric aspects of change. For example, the conservation of total frequency, although a trivial invariance by itself, creates a powerful constraint on the geometry of change. That constraint plus a few others seem to explain the common structural forms of the equations in different disciplines. From that abstract perspective, interpretations such as selection, information, entropy, force, acceleration, and physical work arise from the same underlying geometry expressed by the Price equation. Full article
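For concreteness, the textbook form of the Price equation that the review generalizes, with w_i the fitness and z_i the character value of type i:

```latex
% Price equation: the change in the population mean of z between two
% sets splits into a frequency-change (selection) term and a
% value-change (transmission) term; \bar{w} is mean fitness.
\Delta\bar{z}
  = \frac{1}{\bar{w}}\,\mathrm{Cov}(w_i, z_i)
  + \frac{1}{\bar{w}}\,\mathrm{E}\!\left(w_i\,\Delta z_i\right).
```
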
Open Access Article: Bayesian 3D X-Ray Computed Tomography with a Hierarchical Prior Model for Sparsity in Haar Transform Domain
Entropy 2018, 20(12), 977; https://doi.org/10.3390/e20120977
Received: 26 October 2018 / Revised: 3 December 2018 / Accepted: 11 December 2018 / Published: 16 December 2018
PDF Full-text (1967 KB)
Abstract
In this paper, a hierarchical prior model based on the Haar transformation and an appropriate Bayesian computational method for X-ray CT reconstruction are presented. Given the piece-wise continuous property of the object, a multilevel Haar transformation is used to associate a sparse representation with the object. The sparse structure is enforced via a generalized Student-t distribution (Stg), expressed as the marginal of a normal-inverse Gamma distribution. The proposed model and corresponding algorithm are designed to adapt to specific 3D data sizes and to be used in both medical and industrial Non-Destructive Testing (NDT) applications. In the proposed Bayesian method, a hierarchical structured prior model is proposed, and the parameters are iteratively estimated. The initialization of the iterative algorithm uses the parameters of the prior distributions. A novel strategy for the initialization is presented and validated experimentally. We compare the proposed method with two state-of-the-art approaches, showing that our method has better reconstruction performance when fewer projections are considered and when projections are acquired from limited angles. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
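The scale-mixture identity behind such sparsity priors, stated as standard background (the paper's exact parametrization may differ):

```latex
% Student-t as a normal-inverse-Gamma scale mixture:
p(f) = \int_0^\infty \mathcal{N}(f \mid 0, v)\,
       \mathcal{IG}(v \mid \alpha, \beta)\, dv
     \;\propto\; \left(1 + \frac{f^2}{2\beta}\right)^{-(\alpha + 1/2)},
% a heavy-tailed density concentrating mass near zero, which favors
% sparse Haar coefficients.
```
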
Open Access Article: Entropy Measures in Analysis of Head up Tilt Test Outcome for Diagnosing Vasovagal Syncope
Entropy 2018, 20(12), 976; https://doi.org/10.3390/e20120976
Received: 21 November 2018 / Revised: 11 December 2018 / Accepted: 12 December 2018 / Published: 16 December 2018
PDF Full-text (1207 KB) | HTML Full-text | XML Full-text
Abstract
The paper presents possible applications of entropy measures in the analysis of biosignals recorded during head-up tilt testing (HUTT) in patients with suspected vasovagal syndrome. The study group comprised 80 patients who developed syncope during HUTT (57 in the passive phase of the test (HUTT(+) group) and 23 who had a negative result in the passive phase and developed syncope after provocation with nitroglycerine (HUTT(−) group)). The paper focuses on assessment of the monitored signals' complexity (heart rate expressed as R-R intervals (RRI), blood pressure (sBP, dBP) and stroke volume (SV)) using various types of entropy measures (Sample Entropy (SE), Fuzzy Entropy (FE), Shannon Entropy (Sh), Conditional Entropy (CE), Permutation Entropy (PE)). Assessment of the complexity of signals in the supine position indicated the presence of significant differences between HUTT(+) and HUTT(−) patients only for Conditional Entropy (CE(RRI)). Values of CE(RRI) higher than 0.7 indicate the likelihood of a positive HUTT result already in the passive phase. During tilting, in the pre-syncope phase, significant differences were found for SE(sBP), SE(dBP), FE(RRI), FE(sBP), FE(dBP), FE(SV), Sh(sBP), Sh(SV), CE(sBP) and CE(dBP). HUTT(+) patients demonstrated significant changes in signals' complexity more frequently than HUTT(−) patients. When comparing entropy measurements made in the supine position with those during tilting, SV assessed in HUTT(+) patients was the only parameter for which all tested measures of entropy (SE(SV), FE(SV), Sh(SV), CE(SV), PE(SV)) showed significant differences. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)

Open Access Article: Representation Lost: The Case for a Relational Interpretation of Quantum Mechanics
Entropy 2018, 20(12), 975; https://doi.org/10.3390/e20120975
Received: 26 October 2018 / Revised: 2 December 2018 / Accepted: 11 December 2018 / Published: 15 December 2018
PDF Full-text (867 KB) | HTML Full-text | XML Full-text
Abstract
Contemporary non-representationalist interpretations of the quantum state (especially QBism, neo-Copenhagen views, and the relational interpretation) maintain that quantum states codify observer-relative information. This paper provides an extensive defense of such views, while emphasizing the advantages of, specifically, the relational interpretation. The argument proceeds in three steps: (1) I present a classical example (which exemplifies the spirit of the relational interpretation) to illustrate why some of the most persistent charges against non-representationalism have been misguided. (2) The special focus is placed on dynamical evolution. Non-representationalists often motivate their views by interpreting the collapse postulate as the quantum mechanical analogue of Bayesian probability updating. However, it is not clear whether one can also interpret the Schrödinger equation as a form of rational opinion updating. Using results due to Hughes & van Fraassen as well as Lisi, I argue that unitary evolution has a counterpart in classical probability theory: in both cases (quantum and classical) probabilities relative to a non-participating observer evolve according to an entropy maximizing principle (and can be interpreted as rational opinion updating). (3) Relying on a thought-experiment by Frauchiger and Renner, I discuss the differences between quantum and classical probability models. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)

Open Access Article: An Image Encryption Algorithm Based on Time-Delay and Random Insertion
Entropy 2018, 20(12), 974; https://doi.org/10.3390/e20120974
Received: 19 November 2018 / Revised: 9 December 2018 / Accepted: 13 December 2018 / Published: 15 December 2018
PDF Full-text (2004 KB) | HTML Full-text | XML Full-text
Abstract
An image encryption algorithm based on a chaotic map is presented in this paper. Different from traditional methods based on the permutation-diffusion structure, the keystream here depends on both the secret keys and the pre-processed image. In particular, in the permutation stage, a middle parameter is designed to revise the outputs of the chaotic map, yielding a time-delay phenomenon. Then, the diffusion operation is applied after a group of random numbers is inserted into the permuted image. Therefore, the gray distribution can be changed and is different from that of the plain-image. This insertion acts as a one-time pad. Moreover, the keystream for the diffusion operation is designed to be influenced by the secret keys assigned in the permutation stage. As a result, the two stages are mixed together to strengthen the scheme as a whole. Experimental tests also suggest that our algorithm, permutation–insertion–diffusion (PID), performs well for secure image communication. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
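To make the two stages concrete, a minimal generic permutation-diffusion sketch driven by a logistic map; this is a stand-in illustration only, since the paper's scheme additionally derives its keystream from the pre-processed image, inserts random numbers before diffusion, and couples the two stages:

```python
import numpy as np

def logistic_stream(n, x0=0.6, mu=3.999):
    """Chaotic keystream from a logistic map (generic stand-in; the
    paper's keystream also depends on the pre-processed image)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.6):
    flat = img.flatten()
    n = flat.size
    ks = logistic_stream(2 * n, x0)
    perm = np.argsort(ks[:n])                  # permutation stage
    shuffled = flat[perm]
    key = (ks[n:] * 256).astype(np.uint8)      # diffusion keystream
    out = np.empty(n, np.uint8)
    prev = np.uint8(0)
    for i in range(n):
        out[i] = shuffled[i] ^ key[i] ^ prev   # chain each cipher byte
        prev = out[i]
    return out, perm

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 'image'
cipher, perm = encrypt(img)
print(cipher)
```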

Open Access Article: Revealing the Work Cost of Generalized Thermal Baths
Entropy 2018, 20(12), 973; https://doi.org/10.3390/e20120973
Received: 11 November 2018 / Revised: 1 December 2018 / Accepted: 12 December 2018 / Published: 15 December 2018
PDF Full-text (3682 KB) | HTML Full-text | XML Full-text
Abstract
We derive the work cost of using generalized thermal baths from the physical equivalence of quantum mechanics under unitary transformations. We demonstrate our method by considering a qubit extracting work from a single bath to amplify a cavity field. There, we find that only half of the work investment is converted into useful output, the rest being wasted as heat. These findings establish the method as a promising tool for studying quantum resources within the framework of classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)

Open Access Correction: Chen, W.; Huang, S.P. Evaluating Flight Crew Performance by a Bayesian Network Model. Entropy 2018, 20, 178
Entropy 2018, 20(12), 972; https://doi.org/10.3390/e20120972
Received: 26 November 2018 / Accepted: 6 December 2018 / Published: 14 December 2018
PDF Full-text (150 KB) | HTML Full-text | XML Full-text
Abstract
After publication of the research paper [...] Full article
Open Access Article: Landauer's Principle as a Special Case of Galois Connection
Entropy 2018, 20(12), 971; https://doi.org/10.3390/e20120971
Received: 14 October 2018 / Revised: 3 December 2018 / Accepted: 12 December 2018 / Published: 14 December 2018
PDF Full-text (345 KB) | HTML Full-text | XML Full-text
Abstract
It is demonstrated how to construct a Galois connection between two related systems with entropy. The construction, called the Landauer connection, describes the coupling between two systems with entropy. It is straightforward and transfers changes in one system to the other while preserving the ordering structure induced by entropy. The Landauer connection simplifies the description of the classical Landauer principle for computational systems. Categorification and generalization of the Landauer principle open up the modeling of various systems in the presence of entropy in abstract terms. Full article
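For background, the standard order-theoretic definition the paper builds on: a (monotone) Galois connection between two posets is a pair of monotone maps F: A → B and G: B → A satisfying an adjunction condition,

```latex
F(a) \le_B b \iff a \le_A G(b)
\quad\text{for all } a \in A,\; b \in B,
% i.e., F is left adjoint to G: changes pushed through F are mirrored
% by G while all order relations are preserved.
```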

Open Access Article: Complexity and Entropy Analysis of a Multi-Channel Supply Chain Considering Channel Cooperation and Service
Entropy 2018, 20(12), 970; https://doi.org/10.3390/e20120970
Received: 23 November 2018 / Revised: 9 December 2018 / Accepted: 9 December 2018 / Published: 14 December 2018
PDF Full-text (6933 KB) | HTML Full-text | XML Full-text
Abstract
Against the background of channel cooperation and service in the supply chain, this paper constructs a Nash game model and a Stackelberg game model of a multi-channel supply chain considering an online-to-store channel (OSC). Based on profit maximization and the bounded rationality expectation rule (BRE), dynamic game models are built, the stability of the equilibrium points is analyzed mathematically, and the influences of the parameters on the stability domain and entropy of the system are explored by means of bifurcation diagrams, entropy diagrams, the largest Lyapunov exponent and chaotic attractors. Besides, the influences of the service level and profit distribution rate on the system's profit are discussed. The theoretical results show that the greater the service level and profit distribution rate are, the smaller the stability domain of the system is; the system goes into a chaotic state and its entropy increases when operators adjust their price decisions too quickly; keeping the service level at an appropriate value is conducive to maximizing the manufacturer's or retailer's profit; the manufacturer should carefully set the service level of the OSC to safeguard the system's profit; and the stability of the system is weaker in the Nash game model than in the Stackelberg game model. Furthermore, this paper puts forward some suggestions to help the manufacturer and retailer in a multi-channel supply chain make better decisions. Full article
(This article belongs to the Section Complexity)
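A typical form of the bounded-rationality price adjustment used in this literature, given here as a common modeling convention rather than the exact rule of this paper: each player updates its price in the direction of its marginal profit,

```latex
% p_i(t): player i's price at period t, v_i > 0 its adjustment speed,
% \pi_i its profit function; large v_i drives the map through
% period-doubling into chaos, raising the system's entropy.
p_i(t+1) = p_i(t) + v_i\, p_i(t)\,
           \frac{\partial \pi_i\big(p_1(t),\dots,p_n(t)\big)}{\partial p_i}.
```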

Open Access Article: Application of Bayesian Networks and Information Theory to Estimate the Occurrence of Mid-Air Collisions Based on Accident Precursors
Entropy 2018, 20(12), 969; https://doi.org/10.3390/e20120969
Received: 12 November 2018 / Revised: 8 December 2018 / Accepted: 11 December 2018 / Published: 14 December 2018
PDF Full-text (5283 KB) | HTML Full-text | XML Full-text
Abstract
This paper combines Bayesian networks (BN) and information theory to model the likelihood of severe loss of separation (LOS) near accidents, which are considered mid-air collision (MAC) precursors. The BN is used to analyze LOS contributing factors and the multi-dependent relationships of causal factors, while information theory is used to identify the LOS precursors that provide the most information. The combination of the two techniques allows us to use data on LOS causes and precursors to define warning scenarios that could forecast a major LOS with severity A or a near accident, and consequently the likelihood of a MAC. The methodology is illustrated with a case study encompassing the analysis of LOS that took place within Spanish airspace over a period of four years. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
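A minimal sketch of the information-theoretic half of such a pipeline: ranking candidate precursor variables by the mutual information they share with LOS severity. The synthetic data and variable names are illustrative assumptions, not the paper's dataset:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(4)
n = 5_000
severity = rng.integers(0, 2, n)      # 1 = severity-A LOS event

# Synthetic precursor indicators (names invented for illustration):
atc_workload = (severity ^ (rng.random(n) < 0.2)).astype(int)  # informative
weather = rng.integers(0, 2, n)                                # pure noise

for name, x in [('ATC_workload', atc_workload), ('weather', weather)]:
    print(name, mutual_info_score(severity, x))   # in nats
```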

Open Access Article: Semi-Supervised Minimum Error Entropy Principle with Distributed Method
Entropy 2018, 20(12), 968; https://doi.org/10.3390/e20120968
Received: 26 October 2018 / Revised: 8 December 2018 / Accepted: 10 December 2018 / Published: 14 December 2018
PDF Full-text (795 KB) | HTML Full-text | XML Full-text
Abstract
The minimum error entropy principle (MEE) is an alternative to classical least squares, owing to its robustness to non-Gaussian noise. This paper studies the gradient descent algorithm for MEE with a semi-supervised approach and a distributed method, and shows that using the additional information of unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression if the number of local machines increases polynomially with the total data size. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
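For intuition, a single-machine sketch of MEE learning for a linear model; this is a simplified stand-in (the paper's algorithm is distributed and semi-supervised), and the kernel bandwidth, step size and data are arbitrary choices:

```python
import numpy as np

def mee_linear_fit(X, y, sigma=1.5, lr=1.0, iters=2000):
    """Gradient-ascent sketch of the minimum error entropy principle
    for a linear model: maximize the Gaussian-kernel information
    potential V = mean_ij k_sigma(e_i - e_j), which is equivalent to
    minimizing Renyi's quadratic estimate of the error entropy."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        e = y - X @ w
        diff = e[:, None] - e[None, :]            # e_i - e_j
        k = np.exp(-diff**2 / (2 * sigma**2))     # Gaussian kernel matrix
        # dV/dw = mean_ij k_ij * (e_i - e_j) * (x_i - x_j) / sigma^2
        g = (k * diff)[:, :, None] * (X[:, None, :] - X[None, :, :])
        w += lr * g.mean(axis=(0, 1)) / sigma**2
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))
y = X @ np.array([1.0, -2.0]) + rng.laplace(size=200)  # non-Gaussian noise
print(mee_linear_fit(X, y))  # should approach [1, -2]; MEE ignores offsets
```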

Open Access Article: Effect of Atomic Size Difference on the Microstructure and Mechanical Properties of High-Entropy Alloys
Entropy 2018, 20(12), 967; https://doi.org/10.3390/e20120967
Received: 29 October 2018 / Revised: 5 December 2018 / Accepted: 10 December 2018 / Published: 14 December 2018
PDF Full-text (3421 KB) | HTML Full-text | XML Full-text
Abstract
The effects of atomic size difference on the microstructure and mechanical properties of single face-centered cubic (FCC) phase high-entropy alloys are studied. Single FCC phase high-entropy alloys, namely, CoCrFeMnNi, Al0.2CoCrFeMnNi, and Al0.3CoCrCu0.3FeNi, display good workability. The recrystallization and grain growth rates are compared during annealing. Adding Al at a 0.2 molar ratio into CoCrFeMnNi retains the single FCC phase. Its atomic size difference increases from 1.18% to 2.77%, and the activation energy of grain growth becomes larger than that of CoCrFeMnNi. The as-homogenized state of the Al0.3CoCrCu0.3FeNi high-entropy alloy is a single FCC structure. Its atomic size difference is 3.65%, and its grain growth activation energy is the largest among these three single-phase high-entropy alloys. At ambient temperature, the mechanical properties of Al0.3CoCrCu0.3FeNi are better than those of CoCrFeMnNi because of high lattice distortion and high solid solution hardening. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
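The atomic size difference quoted in the abstract is conventionally defined as follows (the standard high-entropy-alloy parameter, stated for reference):

```latex
% c_i: molar fraction of element i, r_i: its atomic radius,
% \bar{r} = \sum_i c_i r_i the composition-weighted mean radius.
\delta = 100\% \times \sqrt{\sum_{i=1}^{n} c_i
         \left(1 - \frac{r_i}{\bar{r}}\right)^{2}}.
```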

Open Access Article: Likelihood Ratio Testing under Measurement Errors
Entropy 2018, 20(12), 966; https://doi.org/10.3390/e20120966
Received: 13 November 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 13 December 2018
PDF Full-text (284 KB) | HTML Full-text | XML Full-text
Abstract
We consider the likelihood ratio test of a simple null hypothesis (with density f0) against a simple alternative hypothesis (with density g0) in the situation that observations Xi are mismeasured due to the presence of measurement errors. Thus, instead of Xi for i = 1, …, n, we observe Zi = Xi + δVi with unobservable parameter δ and unobservable random variable Vi. When we ignore the presence of measurement errors and perform the original test, the probability of type I error becomes different from the nominal value, but the test is still the most powerful among all tests on the modified level. Further, we derive the minimax test of some families of misspecified hypotheses and alternatives. The test exploits the concept of pseudo-capacities elaborated by Huber and Strassen (1973) and Buja (1986). A numerical experiment illustrates the principles and performance of the novel test. Full article
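A quick simulation of the paper's first observation, the drift of the type I error under measurement errors, in a Gaussian toy case; the choice f0 = N(0,1), g0 = N(1,1) and all constants are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# H0: X ~ N(0,1) = f0;  H1: X ~ N(1,1) = g0.
# The Neyman-Pearson test rejects for large sum(X) (monotone in the LR).
rng = np.random.default_rng(1)
n, alpha, reps = 50, 0.05, 20_000
crit = stats.norm.ppf(1 - alpha) * np.sqrt(n)  # sum(X) ~ N(0, n) under H0

def type1_error(delta):
    X = rng.standard_normal((reps, n))          # true (unobserved) data
    V = rng.standard_normal((reps, n))          # unobservable error
    Z = X + delta * V                           # what we actually observe
    return (Z.sum(axis=1) > crit).mean()

print(type1_error(0.0))  # ~0.05: nominal level without measurement error
print(type1_error(0.5))  # inflated: sum(Z) ~ N(0, n(1 + delta^2)) under H0
```
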
Open Access Article: First-Principles Design of Refractory High Entropy Alloy VMoNbTaW
Entropy 2018, 20(12), 965; https://doi.org/10.3390/e20120965
Received: 28 November 2018 / Revised: 9 December 2018 / Accepted: 11 December 2018 / Published: 13 December 2018
PDF Full-text (3607 KB) | HTML Full-text | XML Full-text
Abstract
The elastic properties of seventy different compositions were calculated to optimize the composition of the V–Mo–Nb–Ta–W system. A new model, called the maximum entropy approach (MaxEnt), was adopted. The influence of each element is discussed. Molybdenum (Mo) and tungsten (W) are the key elements for maintaining the elastic properties. The V–Mo–Nb–Ta–W system has relatively high values of C44, bulk modulus (B), shear modulus (G), and Young's modulus (E) at high concentrations of Mo + W. Element W is brittle and has a high density; thus, low-density Mo can substitute for part of the W. Vanadium (V) has low density and plays an important role in decreasing the brittleness of the V–Mo–Nb–Ta–W system. Niobium (Nb) and tantalum (Ta) have relatively small influence on the elastic properties. Furthermore, the calculated results can be used as general guidance for the selection of a V–Mo–Nb–Ta–W composition. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
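For reference, the polycrystalline moduli quoted above follow from the cubic elastic constants via standard relations (textbook formulas; Voigt averages shown, while the paper may use a Voigt-Reuss-Hill scheme):

```latex
B = \frac{C_{11} + 2C_{12}}{3},\qquad
G_V = \frac{C_{11} - C_{12} + 3C_{44}}{5},\qquad
E = \frac{9BG}{3B + G}.
```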

Open Access Article: Classification of MRI Brain Images Using DNA Genetic Algorithms Optimized Tsallis Entropy and Support Vector Machine
Entropy 2018, 20(12), 964; https://doi.org/10.3390/e20120964
Received: 10 October 2018 / Revised: 19 November 2018 / Accepted: 11 December 2018 / Published: 13 December 2018
PDF Full-text (5182 KB) | HTML Full-text | XML Full-text
Abstract
As a non-invasive diagnostic tool, Magnetic Resonance Imaging (MRI) has been widely used in the field of brain imaging. The classification of MRI brain image conditions poses challenges both technically and clinically, as MRI is primarily used for soft tissue anatomy and can generate large amounts of detailed information about the brain conditions of a subject. To classify benign and malignant MRI brain images, we propose a new method. The discrete wavelet transform (DWT) is used to extract wavelet coefficients from MRI images. Then, Tsallis entropy with DNA genetic algorithm (DNA-GA) optimized parameters (called DNAGA-TE) is used to obtain entropy characteristics from the DWT coefficients. Finally, a DNA-GA-optimized support vector machine (called DNAGA-KSVM) with a radial basis function (RBF) kernel is applied as the classifier. In our experimental procedure, we use two kinds of images to validate the availability and effectiveness of the algorithm: the Simulated Brain Database, and real MRI images downloaded from the Harvard Medical School website. Experimental results demonstrate that our method (DNAGA-TE+KSVM) obtains better classification accuracy. Full article
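The entropy feature at the core of the pipeline is Tsallis entropy, whose standard definition is given below; the entropic index q is presumably among the parameters the DNA-GA optimizes:

```latex
% Tsallis entropy of a discrete distribution p_1,...,p_n with index q;
% it recovers Shannon entropy in the limit q -> 1.
S_q = \frac{1 - \sum_{i=1}^{n} p_i^{\,q}}{q - 1},
\qquad \lim_{q\to 1} S_q = -\sum_{i=1}^{n} p_i \ln p_i.
```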

Open Access Article: Entropy Churn Metrics for Fault Prediction in Software Systems
Entropy 2018, 20(12), 963; https://doi.org/10.3390/e20120963
Received: 5 November 2018 / Revised: 11 December 2018 / Accepted: 11 December 2018 / Published: 13 December 2018
PDF Full-text (1712 KB) | HTML Full-text | XML Full-text
Abstract
Fault prediction is an important research area that aids software development and the maintenance process. It is a field that has been continuously improving its approaches in order to reduce fault resolution time and effort. With an aim to contribute towards building new approaches for fault prediction, this paper proposes Entropy Churn Metrics (ECM) based on History Complexity Metrics (HCM) and Churn of Source Code Metrics (CHU). The study also compares the performance of ECM with that of HCM. The performance of both metrics is compared for 14 subsystems of 5 different software projects: Android, Eclipse, Apache Http Server, Eclipse C/C++ Development Tooling (CDT), and Mozilla Firefox. The study also analyzes the software subsystems on three parameters: (i) distribution of faults, (ii) subsystem size, and (iii) programming language, to determine which characteristics of software systems make HCM or ECM preferable. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
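HCM-style metrics rest on the Shannon entropy of how changes scatter across files in a period; a minimal sketch of that ingredient follows (the ECM churn weighting is not reproduced, and the example counts are made up):

```python
import numpy as np

def change_entropy(changes_per_file):
    """Normalized Shannon entropy of how changes scatter across the
    files of a subsystem in one period: the basic ingredient of
    History Complexity Metrics."""
    counts = np.asarray(changes_per_file, dtype=float)
    if len(counts) < 2:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    h = -(p * np.log2(p)).sum()
    return h / np.log2(len(counts))   # normalize to [0, 1]

# Changes touching four files in one release window (made-up counts):
print(change_entropy([10, 1, 1, 1]))  # concentrated -> low entropy
print(change_entropy([4, 3, 3, 3]))   # scattered    -> high entropy
```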

Open Access Article: Range Entropy: A Bridge between Signal Complexity and Self-Similarity
Entropy 2018, 20(12), 962; https://doi.org/10.3390/e20120962
Received: 1 November 2018 / Revised: 3 December 2018 / Accepted: 6 December 2018 / Published: 13 December 2018
PDF Full-text (4501 KB) | HTML Full-text | XML Full-text
Abstract
Approximate entropy (ApEn) and sample entropy (SampEn) are widely used for temporal complexity analysis of real-world phenomena. However, their relationship with the Hurst exponent as a measure of self-similarity is not widely studied. Additionally, ApEn and SampEn are susceptible to signal amplitude changes. A common practice for addressing this issue is to correct their input signal amplitude by its standard deviation. In this study, we first show, using simulations, that ApEn and SampEn are related to the Hurst exponent through their tolerance r and embedding dimension m parameters. We then propose a modification to ApEn and SampEn called range entropy or RangeEn. We show that RangeEn is more robust to nonstationary signal changes, and it has a more linear relationship with the Hurst exponent, compared to ApEn and SampEn. RangeEn is bounded in the tolerance r-plane between 0 (maximum entropy) and 1 (minimum entropy) and has no need for signal amplitude correction. Finally, we demonstrate the clinical usefulness of signal entropy measures for characterisation of epileptic EEG data as a real-world example. Full article
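For readers who want the baseline the paper modifies, a compact SampEn sketch with the usual standard-deviation amplitude correction; RangeEn itself replaces the template distance, which is not reproduced here:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn sketch. r is the tolerance as a fraction of the signal's
    standard deviation (the amplitude correction discussed above).
    Template counts are handled loosely for brevity."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(t) - 1):
            # Chebyshev distance from template i to all later templates.
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int((d <= tol).sum())
        return c

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A and B else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))  # white noise: high SampEn
```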

Open Access Article: Multifractality of Pseudo-Velocities and Seismic Quiescence Associated with the Tehuantepec M8.2 EQ
Entropy 2018, 20(12), 961; https://doi.org/10.3390/e20120961
Received: 25 November 2018 / Revised: 10 December 2018 / Accepted: 10 December 2018 / Published: 13 December 2018
PDF Full-text (2072 KB) | HTML Full-text | XML Full-text
Abstract
By using earthquake catalogs, previous studies have reported evidence that some changes in the spatial and temporal organization of earthquake activity are observed before and after a main shock. These previous studies have used different approaches for detecting clustering behavior and distance-events density in order to point out the asymmetric behavior of foreshocks and aftershocks. Here, we present a statistical analysis of the seismic activity related to the Mw = 8.2 earthquake that occurred on 7 September 2017 in Mexico. First, we calculated the inter-event time and distance between successive events for the period 1 January 1998 until 20 October 2017 in a circular region centered at the epicenter of the Mw = 8.2 EQ. Next, we introduced the concept of pseudo-velocity as the ratio between the inter-event distance and inter-event time. A sliding window is considered to estimate some statistical features of the pseudo-velocity sequence before the main shock. Specifically, we applied the multifractal method to detect changes in the spectrum of singularities for the period before the main event on 7 September. Our results point out that the multifractality associated with the pseudo-velocities exhibits noticeably narrower spectra for approximately three years, from 2013 until 2016, preceded and followed by periods with wider spectra. On the other hand, we present an analysis of patterns of seismic quiescence before the Mw = 8.2 earthquake based on the Schreider algorithm over a period of 27 years. We report the existence of an important period of seismic quietude of six to seven years, from approximately 2008 to 2015, known as the alpha stage, and a beta stage of resumption of seismic activity, with a duration of approximately three years, until the occurrence of the great earthquake of magnitude Mw = 8.2. Our results are in general concordance with previous results reported for statistics based on magnitude temporal sequences. Full article
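The pseudo-velocity introduced in the abstract is simple to compute from a catalog; a sketch using the haversine epicentral distance (units and the toy catalog values are illustrative):

```python
import numpy as np

def pseudo_velocities(t_days, lat_deg, lon_deg):
    """Pseudo-velocity sequence: inter-event epicentral distance divided
    by inter-event time (haversine distance; result in km/day)."""
    R = 6371.0  # mean Earth radius, km
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = (np.sin(dlat / 2)**2
         + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2)**2)
    dist_km = 2 * R * np.arcsin(np.sqrt(a))
    return dist_km / np.diff(t_days)

# Toy catalog (times in days, epicenters in degrees; values invented):
t = np.array([0.0, 0.5, 2.0, 2.1])
lats = np.array([16.0, 16.2, 15.9, 16.1])
lons = np.array([-95.0, -94.8, -95.3, -95.1])
print(pseudo_velocities(t, lats, lons))
```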

Open Access Article: Intermediate-Temperature Creep Deformation and Microstructural Evolution of an Equiatomic FCC-Structured CoCrFeNiMn High-Entropy Alloy
Entropy 2018, 20(12), 960; https://doi.org/10.3390/e20120960
Received: 30 November 2018 / Revised: 9 December 2018 / Accepted: 9 December 2018 / Published: 12 December 2018
PDF Full-text (5191 KB) | HTML Full-text | XML Full-text
Abstract
The tensile creep behavior of an equiatomic CoCrFeNiMn high-entropy alloy was systematically investigated over an intermediate temperature range (500–600 °C) and applied stresses (140–400 MPa). The alloy exhibited a stress-dependent transition from a low-stress region (LSR, region I) to a high-stress region (HSR, region II). The LSR was characterized by a stress exponent of 5 to 6 and an average activation energy of 268 kJ/mol, whereas the HSR showed much higher corresponding values of 8.9–14 and 380 kJ/mol. Microstructural examinations of the deformed samples revealed remarkable dynamic recrystallization at higher stress levels. Dislocation jogging and tangling configurations were frequently observed in the LSR and HSR at 550 and 600 °C, respectively. Moreover, dynamic precipitates identified as M23C6 or a Cr-rich σ phase formed along grain boundaries in the HSR. The analysis of diffusion-compensated strain rate versus modulus-compensated stress implied that the creep deformation in both stress regions was dominated by stress-assisted dislocation climb controlled by lattice diffusion. Nevertheless, the abnormally high stress exponents in the HSR were ascribed to the coordinated contributions of dynamic recrystallization and dynamic precipitation. Simultaneously, the barriers imposed by these precipitates and severe initial deformation were invoked to explain the increased activation energy for creep deformation. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

Open Access Article: Some Order Preserving Inequalities for Cross Entropy and Kullback–Leibler Divergence
Entropy 2018, 20(12), 959; https://doi.org/10.3390/e20120959
Received: 8 November 2018 / Revised: 2 December 2018 / Accepted: 8 December 2018 / Published: 12 December 2018
PDF Full-text (836 KB) | HTML Full-text | XML Full-text
Abstract
Cross entropy and Kullback–Leibler (K-L) divergence are fundamental quantities of information theory, and they are widely used in many fields. Since cross entropy is the negated logarithm of likelihood, minimizing cross entropy is equivalent to maximizing likelihood, and thus, cross entropy is applied for optimization in machine learning. K-L divergence also stands independently as a commonly used metric for measuring the difference between two distributions. In this paper, we introduce new inequalities regarding cross entropy and K-L divergence by using the fact that cross entropy is the negated logarithm of the weighted geometric mean. We first apply the well-known rearrangement inequality, followed by a recent theorem on weighted Kolmogorov means, and, finally, we introduce a new theorem that directly applies to inequalities between K-L divergences. To illustrate our results, we show numerical examples of distributions. Full article
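The identity driving the paper's approach, stated as a standard fact: cross entropy is the negated logarithm of a weighted geometric mean, and K-L divergence follows immediately:

```latex
% For distributions p, q: exp(-H(p,q)) is the p-weighted geometric
% mean of the q_i.
H(p, q) = -\sum_i p_i \log q_i = -\log \prod_i q_i^{\,p_i},
\qquad
D_{KL}(p \,\|\, q) = H(p, q) - H(p) \ge 0.
```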

Open Access Article: Spatial Heterogeneity in the Occurrence Probability of Rainstorms over China
Entropy 2018, 20(12), 958; https://doi.org/10.3390/e20120958
Received: 9 November 2018 / Revised: 4 December 2018 / Accepted: 10 December 2018 / Published: 12 December 2018
PDF Full-text (3685 KB) | HTML Full-text | XML Full-text
Abstract
Detecting the spatial heterogeneity in the potential occurrence probability of water disasters is a foremost and critical issue for the prevention and mitigation of water disasters. However, it is also a challenging task due to the lack of effective approaches. In this article, the entropy index was employed and daily rainfall data from 520 stations were used to investigate the occurrence of rainstorms in China. Results indicated that the entropy results were mainly determined by the statistical characteristics (mean value and standard deviation) of the rainfall data, and can categorically describe the spatial heterogeneity in the occurrence of rainstorms by considering both their occurrence frequencies and magnitudes. Smaller entropy values mean that rainstorm events with bigger magnitudes are more likely to occur. Moreover, the spatial distribution of entropy values corresponds well with the hydroclimatic conditions described by the aridity index. In China, rainstorms are more likely to occur in the Pearl River basin, the Southeast River basin, the lower reaches of the Yangtze River basin, the Huai River basin, and the southwest corner of China. In summary, the entropy index can be an effective alternative for quantifying the potential occurrence probability of rainstorms. Four thresholds of entropy value were given to distinguish the occurrence frequency of rainstorms into five levels: very high, high, medium, low and very low, which can be a helpful reference for the study of daily rainstorms in other basins and regions. Full article

Open Access Article: An Analysis of Deterministic Chaos as an Entropy Source for Random Number Generators
Entropy 2018, 20(12), 957; https://doi.org/10.3390/e20120957
Received: 6 October 2018 / Revised: 23 October 2018 / Accepted: 25 October 2018 / Published: 11 December 2018
PDF Full-text (2908 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an analytical study of the use of deterministic chaos as an entropy source for the generation of random numbers. The chaotic signal generated by a phase-locked loop (PLL) device is investigated using numerical simulations. Depending on the system parameters, the chaos originating from the PLL device can be either bounded or unbounded in the phase direction. Bounded and unbounded chaos differ in terms of the flatness of the power spectrum associated with the chaotic signal. Random bits are generated by regular sampling of the signal from bounded and unbounded chaos. A white Gaussian noise source is also sampled regularly to generate random bits. By varying the sampling frequency, and based on the autocorrelation and approximate entropy analysis of the resulting bit sequences, a comparison is made between bounded chaos, unbounded chaos and Gaussian white noise as entropy sources for random number generators. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)
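A toy version of the bit-generation and autocorrelation comparison, using a logistic map as a deliberate substitute for the authors' PLL model; the median threshold and sampling steps are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Chaotic entropy source: a logistic map standing in for the PLL model.
x = np.empty(N)
x[0] = 0.123
for i in range(1, N):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

def bits_by_sampling(signal, step):
    s = signal[::step]                       # regular sampling
    return (s > np.median(s)).astype(int)    # threshold to bits

def lag1_autocorr(b):
    b = b - b.mean()
    return float(b[:-1] @ b[1:] / (b @ b))

for step in (1, 5, 20):                      # vary the sampling frequency
    print('chaos, step', step, lag1_autocorr(bits_by_sampling(x, step)))
print('white noise', lag1_autocorr((rng.standard_normal(N) > 0).astype(int)))
```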
