Table of Contents

Entropy, Volume 21, Issue 1 (January 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: A key aspect of the brain activity underlying consciousness and cognition appears to be complex [...]
Displaying articles 1-99
Open AccessFeature PaperArticle Simple Stopping Criteria for Information Theoretic Feature Selection
Entropy 2019, 21(1), 99; https://doi.org/10.3390/e21010099
Received: 4 December 2018 / Revised: 21 January 2019 / Accepted: 21 January 2019 / Published: 21 January 2019
Viewed by 549 | PDF Full-text (281 KB) | HTML Full-text | XML Full-text
Abstract
Feature selection aims to select the smallest feature subset that yields the minimum generalization error. In the rich literature on feature selection, information-theoretic approaches seek a subset of features such that the mutual information between the selected features and the class labels is maximized. Despite the simplicity of this objective, several open optimization problems remain. These include, for example, automatically determining the optimal subset size (i.e., the number of features) or a stopping criterion when a greedy search strategy is adopted. In this paper, we suggest two stopping criteria based simply on monitoring the conditional mutual information (CMI) among groups of variables. Using the recently developed multivariate matrix-based Rényi's α-entropy functional, which can be estimated directly from data samples, we show that the CMI among groups of variables can be computed easily, without any decomposition or approximation, making our criteria easy to implement and to integrate seamlessly into any existing information-theoretic feature selection method with a greedy search strategy. Full article
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
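The matrix-based Rényi α-entropy functional this paper builds on can be estimated directly from a kernel Gram matrix of the samples. The following is a minimal single-variable sketch (the paper uses a multivariate extension); the Gaussian kernel, the function names, and the default α are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    # Gaussian (RBF) Gram matrix over the rows (samples) of X
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def renyi_entropy(K, alpha=1.01):
    # Matrix-based Renyi alpha-entropy (bits) of a PSD Gram matrix:
    # S_alpha(A) = 1/(1-alpha) * log2(sum_i lambda_i(A)^alpha), with A = K/tr(K)
    A = K / np.trace(K)
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]  # discard numerically-zero eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
```

Identical samples give entropy 0, while well-separated samples approach the maximum log2(n), which is the kind of quantity a CMI-based stopping rule can monitor as features are added greedily.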

Open AccessArticle Cooling Effectiveness of a Data Center Room under Overhead Airflow via Entropy Generation Assessment in Transient Scenarios
Entropy 2019, 21(1), 98; https://doi.org/10.3390/e21010098
Received: 6 December 2018 / Revised: 12 January 2019 / Accepted: 18 January 2019 / Published: 21 January 2019
Cited by 1 | Viewed by 463 | PDF Full-text (2336 KB) | HTML Full-text | XML Full-text
Abstract
Forecasting data center cooling demand remains a primary thermal management challenge in an increasingly large global energy-consuming industry. This paper proposes a dynamic modeling approach to evaluate two different strategies for delivering cold air into a data center room. The common cooling method provides air through perforated floor tiles by means of a centralized distribution system, hindering flow management at the aisle level. We propose an idealized system in which five overhead heat exchangers are located above the aisle and handle the entire server cooling demand. In one case, the overhead heat exchangers force the airflow downwards into the aisle (Overhead Downward Flow (ODF)); in the other, the flow is forced to move upwards (Overhead Upward Flow (OUF)). A complete fluid dynamic, heat transfer, and thermodynamic analysis is proposed to model the system's thermal performance under both steady-state and transient conditions. Inside the servers and heat exchangers, the flow and heat transfer processes are modeled using a set of differential equations solved in MATLAB™ 2017a. This solution is coupled with ANSYS-Fluent™ 18, which computes the three-dimensional velocity, temperature, and turbulence fields on the air side. The two proposed approaches (ODF and OUF) are evaluated and compared by estimating their cooling effectiveness and the local entropy generation. The latter identifies the zones within the room responsible for increasing the inefficiencies (irreversibilities) of the system. Both approaches showed similar performance, with a small advantage for OUF. The results of this investigation demonstrate a promising approach to on-demand data center cooling. Full article
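The entropy-generation assessment used here ranks where irreversibility is produced. As a minimal sketch of the underlying bookkeeping (the paper computes local generation from full CFD fields; this lumped form, the function name, and the example temperatures are assumptions for illustration), heat crossing a finite temperature difference generates entropy at a rate:

```python
def entropy_generation_rate(q_watts, t_hot_k, t_cold_k):
    # Entropy generated (W/K) when heat q_watts flows irreversibly from a
    # hot source at t_hot_k to a cold sink at t_cold_k (temperatures in K);
    # zero only in the reversible limit t_hot_k == t_cold_k
    return q_watts * (1.0 / t_cold_k - 1.0 / t_hot_k)
```

A larger temperature gap between server exhaust and supply air means more entropy generation, which is why mapping the generation field pinpoints the inefficient zones in the room.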

Open AccessArticle Objective 3D Printed Surface Quality Assessment Based on Entropy of Depth Maps
Entropy 2019, 21(1), 97; https://doi.org/10.3390/e21010097
Received: 20 December 2018 / Revised: 16 January 2019 / Accepted: 18 January 2019 / Published: 21 January 2019
Viewed by 488 | PDF Full-text (3497 KB) | HTML Full-text | XML Full-text
Abstract
The rapid development and growing popularity of additive manufacturing technology lead to new challenges: reliably monitoring not only the progress of the 3D printing process but also the quality of the printed objects. The automatic objective assessment of the surface quality of 3D printed objects proposed in this paper, based on the analysis of depth maps, makes it possible to determine surface quality during printing for devices equipped with built-in 3D scanners. When low quality is detected, corrections can be made or the printing process can be aborted to save filament, time, and energy. Applying entropy analysis to the 3D scans allows the surface regularity to be evaluated independently of the color of the filament, in contrast to many other possible methods based on the analysis of visible-light images. The results obtained using the proposed approach are encouraging, and a further combination of the proposed approach with camera-based methods may be possible as well. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
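The core idea, entropy of depth maps, can be sketched as the Shannon entropy of quantized scan depths: a flat, regular surface concentrates depths in few bins (low entropy), while defects spread them out. This is a simplified illustration; the bin count, depth range, and function name are assumptions, not the paper's exact pipeline.

```python
from math import log2

def depth_map_entropy(depth, bins=16, lo=0.0, hi=1.0):
    # Shannon entropy (bits) of a quantized 2D depth map in [lo, hi]
    counts = [0] * bins
    total = 0
    for row in depth:
        for d in row:
            k = min(bins - 1, max(0, int((d - lo) / (hi - lo) * bins)))
            counts[k] += 1
            total += 1
    return -sum(c / total * log2(c / total) for c in counts if c)
```

A printer with a built-in scanner could threshold this value per layer and abort or correct when the entropy exceeds what a regular surface produces.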

Open AccessArticle Fault Diagnosis of Rolling Element Bearings with a Two-Step Scheme Based on Permutation Entropy and Random Forests
Entropy 2019, 21(1), 96; https://doi.org/10.3390/e21010096
Received: 12 December 2018 / Revised: 9 January 2019 / Accepted: 16 January 2019 / Published: 21 January 2019
Viewed by 408 | PDF Full-text (2918 KB) | HTML Full-text | XML Full-text
Abstract
This study presents a two-step fault diagnosis scheme for rolling element bearings that combines statistical classification and random-forests-based classification. Considering that feature sensitivity differs between diagnosis steps, the proposed method utilizes permutation entropy and variational mode decomposition to describe vibration signals at single and multiple scales. In the first step, single-scale permutation entropy features of the original signals are extracted, and a statistical classification model based on Chebyshev's inequality is constructed to detect faults with a preliminary assessment of the bearing condition. In the second step, vibration signals with fault conditions are first decomposed into a collection of intrinsic mode functions using variational mode decomposition, and multiscale permutation entropy features derived from each mono-component are then extracted to identify the specific fault types. In order to improve the classification ability of the characteristic data, the out-of-bag estimation of random forests is first employed to reselect and refine the original multiscale permutation entropy features. The refined features are then used as input data to train the random-forests-based classification model. Finally, condition data of bearings with different fault conditions are employed to evaluate the performance of the proposed method. The results indicate that the proposed method can effectively identify the working conditions and fault types of rolling element bearings. Full article

Open AccessArticle Co-Association Matrix-Based Multi-Layer Fusion for Community Detection in Attributed Networks
Entropy 2019, 21(1), 95; https://doi.org/10.3390/e21010095
Received: 3 December 2018 / Revised: 17 January 2019 / Accepted: 17 January 2019 / Published: 20 January 2019
Viewed by 455 | PDF Full-text (710 KB) | HTML Full-text | XML Full-text
Abstract
Community detection is a challenging task in attributed networks due to the data inconsistency between network topological structure and node attributes. How to effectively and robustly fuse multi-source heterogeneous data plays an important role in community detection algorithms. Although some algorithms taking both topological structure and node attributes into account have been proposed in recent years, their fusion strategies are simple, usually adopting a linear combination; as a consequence, the detected community structure is vulnerable to small variations of the input data. To overcome this challenge, we develop a novel two-layer representation to capture the latent knowledge in both the topological structure and the node attributes of attributed networks. We then propose a weighted co-association matrix-based fusion algorithm (WCMFA) that detects the inherent community structure in attributed networks using multi-layer fusion strategies. It extends community detection from a single-view to a multi-view style, consistent with how humans integrate multiple sources of evidence. Experiments show that our method is superior to state-of-the-art community detection algorithms for attributed networks. Full article
(This article belongs to the Section Complexity)
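The unweighted building block behind a co-association matrix is straightforward: given several base partitions (e.g., one from topology, one from attributes), record how often each pair of nodes lands in the same cluster. This sketch shows only that building block; the paper's weighted, multi-layer variant is more elaborate, and the function name is illustrative.

```python
def co_association(partitions):
    # partitions: list of label lists, one label per node and partition.
    # Returns M with M[i][j] = fraction of partitions placing nodes i and j
    # in the same cluster (1.0 on the diagonal by construction).
    n = len(partitions[0])
    m = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(partitions)
    return m
```

Clustering this matrix instead of any single view is what gives ensemble methods their robustness to small perturbations of one input source.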

Open AccessReview Transients as the Basis for Information Flow in Complex Adaptive Systems
Entropy 2019, 21(1), 94; https://doi.org/10.3390/e21010094
Received: 31 December 2018 / Revised: 17 January 2019 / Accepted: 19 January 2019 / Published: 20 January 2019
Viewed by 512 | PDF Full-text (991 KB) | HTML Full-text | XML Full-text
Abstract
Information is the fundamental currency of naturally occurring complex adaptive systems, whether they are individual organisms or collective social insect colonies. Information appears to be more important than energy in determining the behavior of these systems. However, it is not the quantity of information but rather its salience or meaning which is significant. Salience is not, in general, associated with instantaneous events but rather with spatio-temporal transients of events. This requires a shift in theoretical focus from instantaneous states towards spatio-temporal transients as the proper object for studying information flow in naturally occurring complex adaptive systems. A primitive form of salience appears in simple complex systems models in the form of transient induced global response synchronization (TIGoRS). Sparse random samplings of spatio-temporal transients may induce stable collective responses from the system, establishing a stimulus–response relationship between the system and its environment, with the system parsing its environment into salient and non-salient stimuli. In the presence of TIGoRS, an embedded complex dynamical system becomes a primitive automaton, modeled as a Sulis machine. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)

Open AccessArticle Bayesian Analysis of Femtosecond Pump-Probe Photoelectron-Photoion Coincidence Spectra with Fluctuating Laser Intensities
Entropy 2019, 21(1), 93; https://doi.org/10.3390/e21010093
Received: 6 December 2018 / Revised: 10 January 2019 / Accepted: 16 January 2019 / Published: 19 January 2019
Viewed by 517 | PDF Full-text (1206 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper employs Bayesian probability theory for analyzing data generated in femtosecond pump-probe photoelectron-photoion coincidence (PEPICO) experiments. These experiments allow investigating ultrafast dynamical processes in photoexcited molecules. Bayesian probability theory is consistently applied to data analysis problems occurring in these types of experiments, such as background subtraction and false coincidences. We previously demonstrated that the Bayesian formalism has many advantages, among which are compensation of false coincidences, no overestimation of pump-only contributions, significantly increased signal-to-noise ratio, and applicability to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, our approach allows running experiments at higher ionization rates, resulting in an appreciable reduction of data acquisition times. Extending our previous work, we here include fluctuating laser intensities, whose straightforward implementation highlights yet another advantage of the Bayesian formalism. Our method is thoroughly scrutinized with challenging mock data, where we find a minor impact of laser fluctuations on false coincidences, yet a noteworthy influence on background subtraction. We apply our algorithm to experimental data and discuss the impact of laser fluctuations on the data analysis. Full article

Open AccessFeature PaperArticle Remote Sampling with Applications to General Entanglement Simulation
Entropy 2019, 21(1), 92; https://doi.org/10.3390/e21010092
Received: 13 June 2018 / Revised: 12 January 2019 / Accepted: 15 January 2019 / Published: 19 January 2019
Viewed by 490 | PDF Full-text (375 KB) | HTML Full-text | XML Full-text
Abstract
We show how to sample exactly discrete probability distributions whose defining parameters are distributed among remote parties. For this purpose, von Neumann's rejection algorithm is turned into a distributed sampling communication protocol. We study the expected number of bits communicated among the parties and also exhibit a trade-off between the number of rounds of the rejection algorithm and the number of bits transmitted in the initial phase. Finally, we apply remote sampling to the simulation of quantum entanglement in essentially its most general form possible, when an arbitrary finite number m of parties share systems of arbitrary finite dimensions on which they apply arbitrary measurements (not restricted to being projective measurements, but restricted to finitely many possible outcomes). In case the dimension of the systems and the number of possible outcomes per party are bounded by a constant, it suffices to communicate an expected O(m²) bits in order to simulate exactly the outcomes that these measurements would have produced on those systems. Full article
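The von Neumann rejection algorithm that the protocol distributes can be sketched in a few lines for a single party: draw from an easy proposal distribution and accept with probability target(x) / (c · proposal(x)), where c bounds the ratio. The function names and the toy distributions in the test are illustrative; the paper's contribution is distributing this loop among remote parties, which this local sketch does not capture.

```python
import random

def von_neumann_sample(target_pmf, proposal_pmf, proposal_draw, c, rng=random):
    # Rejection sampling: repeat until a proposal draw x is accepted with
    # probability target_pmf(x) / (c * proposal_pmf(x)); requires
    # target_pmf(x) <= c * proposal_pmf(x) for all x.
    while True:
        x = proposal_draw()
        if rng.random() < target_pmf(x) / (c * proposal_pmf(x)):
            return x
```

The expected number of proposal draws per accepted sample is c, which is why the communication analysis in the paper revolves around how tightly the parties can bound this constant.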

Open AccessArticle Quantitative Quality Evaluation of Software Products by Considering Summary and Comments Entropy of a Reported Bug
Entropy 2019, 21(1), 91; https://doi.org/10.3390/e21010091
Received: 18 October 2018 / Revised: 20 December 2018 / Accepted: 6 January 2019 / Published: 19 January 2019
Viewed by 472 | PDF Full-text (4657 KB) | HTML Full-text | XML Full-text
Abstract
A software bug is characterized by its attributes. Various prediction models have been developed using these attributes to enhance the quality of software products. The reporting of bugs leads to highly irregular patterns, and the repository size is increasing at an enormous rate, resulting in uncertainty and irregularities. This uncertainty and these irregularities are termed veracity in the context of big data. In order to quantify these irregular and uncertain patterns, the authors have applied entropy-based measures to the terms reported in the summary and the comments submitted by the users. Both the uncertainties and the irregular patterns are captured by the entropy-based measures. In this paper, the authors consider that the bug fixing process depends not only on calendar time, testing effort, and testing coverage, but also on the bug summary description and comments. The paper proposes bug dependency-based mathematical models that account for the summary description of bugs and the comments submitted by users in terms of entropy-based measures. The models were validated on different Eclipse project products. The models proposed in the literature have different types of growth curves, mainly following exponential, S-shaped, or mixed curves; the proposed models were compared with models following exponential, S-shaped, and mixtures of both types of curves. Full article
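An entropy measure over the terms of a bug summary or comment, the basic quantity these models consume, can be sketched as the Shannon entropy of the term-frequency distribution. The tokenization here is deliberately naive and the function name is an assumption; the paper's exact preprocessing is not specified in the abstract.

```python
from collections import Counter
from math import log2

def term_entropy(text):
    # Shannon entropy (bits) of the term-frequency distribution of a bug
    # summary or comment; repetitive wording -> low entropy, varied -> high
    terms = text.lower().split()
    counts = Counter(terms)
    n = len(terms)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

Tracking such entropies over a repository's reporting history is one way to quantify the "veracity" (irregularity) that the proposed growth models take as input.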

Open AccessArticle Using Multiscale Entropy to Assess the Efficacy of Local Cooling on Reactive Hyperemia in People with a Spinal Cord Injury
Entropy 2019, 21(1), 90; https://doi.org/10.3390/e21010090
Received: 8 December 2018 / Revised: 15 January 2019 / Accepted: 15 January 2019 / Published: 18 January 2019
Viewed by 479 | PDF Full-text (2789 KB) | HTML Full-text | XML Full-text
Abstract
Pressure ulcers are one of the most common complications of a spinal cord injury (SCI). Prolonged unrelieved pressure is thought to be the primary causative factor, resulting in tissue ischemia and eventually pressure ulcers. Previous studies suggested that local cooling reduces skin ischemia of compressed soft tissues, based on smaller hyperemic responses. However, the effect of local cooling on nonlinear properties of skin blood flow (SBF) during hyperemia is unknown. In this study, 10 wheelchair users with SCI and 10 able-bodied (AB) controls underwent three experimental protocols, each of which included a 10-min baseline period, a 20-min intervention period, and a 20-min period for recovering SBF. SBF was measured using laser Doppler flowmetry. During the intervention period, a pressure of 60 mmHg was applied to the sacral skin under three skin temperature settings: no temperature change, a decrease of 10 °C, and an increase of 10 °C. A multiscale entropy (MSE) method was employed to quantify the degree of regularity of blood flow oscillations (BFO) associated with the SBF control mechanisms during baseline and reactive hyperemia. The results showed that under pressure with cooling, skin BFO in both people with SCI and AB controls were more regular at multiple time scales during hyperemia compared to baseline, whereas under pressure with no temperature change, and particularly pressure with heating, BFO were more irregular during hyperemia compared to baseline. Moreover, the results of surrogate tests indicated that changes in the degree of regularity of BFO from baseline to hyperemia were only partially attributable to changes in the relative amplitudes of the endothelial, neurogenic, and myogenic components of BFO. These findings support the use of MSE to assess the efficacy of local cooling on reactive hyperemia and to assess the degree of skin ischemia in people with SCI. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
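Multiscale entropy combines two standard ingredients: coarse-graining the series at each scale, then computing sample entropy of the coarse-grained series. A minimal sketch follows (textbook definitions; the parameter defaults and function names are common conventions, not necessarily the study's exact settings, and r is given here in the signal's own units rather than as a fraction of its standard deviation).

```python
from math import log

def coarse_grain(x, scale):
    # Average consecutive, non-overlapping windows of length `scale`
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B), where B and A count template pairs of length m and
    # m+1 (self-matches excluded) within Chebyshev distance r
    def match_count(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1
                   for i in range(len(t))
                   for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = match_count(m), match_count(m + 1)
    return -log(a / b) if a and b else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

Regular oscillations give low values across scales, irregular ones give high values, which is exactly the regularity contrast the study reads off between cooling and heating conditions.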

Open AccessArticle Poincaré and Log–Sobolev Inequalities for Mixtures
Entropy 2019, 21(1), 89; https://doi.org/10.3390/e21010089
Received: 30 November 2018 / Revised: 30 December 2018 / Accepted: 11 January 2019 / Published: 18 January 2019
Viewed by 435 | PDF Full-text (325 KB) | HTML Full-text | XML Full-text
Abstract
This work studies mixtures of probability measures on ℝⁿ and gives bounds on the Poincaré and the log–Sobolev constants of two-component mixtures, provided that each component satisfies the functional inequality and both components are close in the χ²-distance. The estimation of those constants for a mixture can be far more subtle than it is for its parts. Even mixing Gaussian measures may produce a measure with a Hamiltonian potential possessing multiple wells, leading to metastability and large constants in Sobolev-type inequalities. In particular, the Poincaré constant stays bounded in the mixture parameter, whereas the log–Sobolev constant may blow up as the mixture ratio goes to 0 or 1. This observation generalizes the one by Chafaï and Malrieu to the multidimensional case. For a class of examples, this behavior is shown to be not merely an artifact of the method. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
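For readers less familiar with the two inequalities being compared, the standard definitions (these are textbook forms, written here for a two-component mixture μ_p = p μ₀ + (1−p) μ₁) are:

```latex
% Functional inequalities for the mixture \mu_p = p\,\mu_0 + (1-p)\,\mu_1
\begin{align}
  \operatorname{Var}_{\mu_p}(f)
    &\le C_{\mathrm{P}}(\mu_p) \int |\nabla f|^2 \, d\mu_p
    && \text{(Poincar\'e)} \\
  \operatorname{Ent}_{\mu_p}\!\left(f^2\right)
    &\le C_{\mathrm{LS}}(\mu_p) \int |\nabla f|^2 \, d\mu_p
    && \text{(log--Sobolev)}
\end{align}
```

The paper's result is that C_P(μ_p) can be bounded uniformly in p from the components' constants and their χ²-closeness, whereas C_LS(μ_p) may blow up as p → 0 or p → 1.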
Open AccessArticle Symmetries among Multivariate Information Measures Explored Using Möbius Operators
Entropy 2019, 21(1), 88; https://doi.org/10.3390/e21010088
Received: 7 November 2018 / Revised: 9 January 2019 / Accepted: 16 January 2019 / Published: 18 January 2019
Viewed by 413 | PDF Full-text (2879 KB) | HTML Full-text | XML Full-text
Abstract
Relations between common information measures include the duality relations based on Möbius inversion on lattices, which are the direct consequence of the symmetries of the lattices of the sets of variables (subsets ordered by inclusion). In this paper, we use the lattice and functional symmetries to provide a unifying formalism that reveals some new relations and systematizes the symmetries of the information functions. To our knowledge, this is the first systematic examination of the full range of relationships of this class of functions. We define operators on functions on these lattices based on the Möbius inversions that map functions into one another, which we call Möbius operators, and show that they form a simple group isomorphic to the symmetric group S3. Relations among the set of functions on the lattice are transparently expressed in terms of the operator algebra, and, when applied to the information measures, can be used to derive a wide range of relationships among diverse information measures. The Möbius operator algebra is then naturally generalized, which yields an even wider range of new relationships. Full article
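The Möbius inversion on the subset lattice that underlies these operators is easy to state concretely: if f is the zeta transform of g (summing g over all subsets), then g is recovered by an alternating-sign sum. This sketch shows only that textbook inversion on small lattices, not the paper's operator-group construction; the function names are illustrative.

```python
from itertools import combinations

def subsets(s):
    # All subsets of s, as frozensets
    s = tuple(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def zeta_transform(g, universe):
    # f(S) = sum_{T subseteq S} g(T)
    return {S: sum(g[T] for T in subsets(S)) for S in subsets(universe)}

def mobius_transform(f, universe):
    # Inverse on the subset lattice: g(S) = sum_{T subseteq S} (-1)^{|S|-|T|} f(T)
    return {S: sum((-1) ** (len(S) - len(T)) * f[T] for T in subsets(S))
            for S in subsets(universe)}
```

Applying these two transforms in sequence returns the original function, which is the lattice symmetry the paper packages into its operator algebra.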

Open AccessArticle Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087
Received: 1 July 2018 / Revised: 10 January 2019 / Accepted: 11 January 2019 / Published: 18 January 2019
Viewed by 475 | PDF Full-text (6061 KB) | HTML Full-text | XML Full-text
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk, in the simplest possible manner, the myth that equates local realism with local hidden variables. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right to question the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader. Full article
(This article belongs to the Special Issue Quantum Nonlocality)

Open AccessArticle Information Entropy of Tight-Binding Random Networks with Losses and Gain: Scaling and Universality
Entropy 2019, 21(1), 86; https://doi.org/10.3390/e21010086
Received: 12 October 2018 / Revised: 1 January 2019 / Accepted: 15 January 2019 / Published: 18 January 2019
Viewed by 410 | PDF Full-text (584 KB) | HTML Full-text | XML Full-text
Abstract
We study the localization properties of the eigenvectors, characterized by their information entropy, of tight-binding random networks with balanced losses and gain. The random network model, which is based on Erdős–Rényi (ER) graphs, is defined by three parameters: the network size N, the network connectivity α, and the losses-and-gain strength γ. Here, N and α are the standard parameters of ER graphs, while we introduce losses and gain by including complex self-loops on all vertices with the imaginary amplitude iγ and random balanced signs, thus breaking the Hermiticity of the corresponding adjacency matrices and inducing complex spectra. By the use of extensive numerical simulations, we define a scaling parameter ξ ≡ ξ(N, α, γ) that fixes the localization properties of the eigenvectors of our random network model: when ξ < 0.1 (ξ > 10), the eigenvectors are localized (extended), while the localization-to-delocalization transition occurs for 0.1 < ξ < 10. Moreover, to extend the applicability of our findings, we demonstrate that for fixed ξ, the spectral properties (characterized by the position of the eigenvalues on the complex plane) of our network model are also universal; i.e., they do not depend on the specific values of the network parameters. Full article
(This article belongs to the Special Issue Complex Networks from Information Measures)
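The model's adjacency matrices can be constructed in a few lines: a symmetric ER adjacency plus balanced ±iγ self-loops on the diagonal, together with the Shannon information entropy of an eigenvector used to characterize localization. The function names and parameter defaults are illustrative assumptions, and n is taken even so the signs balance exactly.

```python
import numpy as np

def tight_binding_network(n, alpha, gamma, rng):
    # Symmetric 0/1 ER adjacency (connection probability alpha) with balanced
    # +/- i*gamma self-loops that break Hermiticity; n assumed even
    upper = np.triu(rng.random((n, n)) < alpha, 1)
    A = (upper + upper.T).astype(complex)
    signs = rng.permutation([1.0] * (n // 2) + [-1.0] * (n // 2))
    return A + 1j * gamma * np.diag(signs)

def eigenvector_entropy(v):
    # Shannon information entropy of an eigenvector's probability profile:
    # 0 for a vector localized on one vertex, up to ln(n) when fully extended
    p = np.abs(v) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```

Sweeping N, α, and γ, computing these entropies over the eigenvectors of `np.linalg.eig(A)`, and plotting against ξ(N, α, γ) reproduces the kind of scaling analysis the paper performs.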

Open AccessArticle Performance Analysis of a Proton Exchange Membrane Fuel Cell Based Syngas
Entropy 2019, 21(1), 85; https://doi.org/10.3390/e21010085
Received: 31 October 2018 / Revised: 24 December 2018 / Accepted: 5 January 2019 / Published: 18 January 2019
Viewed by 404 | PDF Full-text (1601 KB) | HTML Full-text | XML Full-text
Abstract
External chemical reactors for steam reforming and water gas shift reactions are needed for a proton exchange membrane (PEM) fuel cell system using syngas fuel. For preheating the syngas and maintaining a stable steam reforming reaction at 600 °C, residual hydrogen from the fuel cell and a certain amount of additional syngas are burned. The combustion temperature is calculated, and the molar ratio of syngas fed to the burner and the steam reformer is determined. Based on thermodynamics and electrochemistry, the electric power density and energy conversion efficiency of a syngas-based PEM fuel cell are expressed. The effects of the temperature, the hydrogen utilization factor at the anode, and the molar ratio of syngas fed to the burner and steam reformer on the performance of the PEM fuel cell are discussed. The key parameters for achieving the maximum power density or efficiency are determined. This paper presents the detailed operating process of a PEM fuel cell, the allocation of syngas between combustion and electric generation, and the feasibility of a PEM fuel cell using syngas. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics III)

Open AccessArticle Desalination Processes’ Efficiency and Future Roadmap
Entropy 2019, 21(1), 84; https://doi.org/10.3390/e21010084
Received: 6 December 2018 / Revised: 29 December 2018 / Accepted: 14 January 2019 / Published: 18 January 2019
Viewed by 444 | PDF Full-text (3363 KB) | HTML Full-text | XML Full-text
Abstract
For future sustainable seawater desalination, the importance of achieving better energy efficiency in the existing 19,500 commercial-scale desalination plants cannot be overemphasized. The major concern of the desalination industry is an inadequate approach to energy efficiency evaluation of diverse seawater desalination processes, one that omits the grade of energy supplied. These conventional approaches would suffice if the efficacy comparison were conducted for processes with the same energy input. The misconception of considering all derived energies as equivalent in the desalination industry has severe economic and environmental consequences. In the realms of energy and desalination system planning, serious judgmental errors in the process selection of green installations are made unconsciously because the efficacy data are either flawed or inaccurate. Decisions to implement technologies of inferior efficacy have been observed in many water-stressed countries; these can immediately burden a country's economy with higher unit energy costs and cause more undesirable environmental effects on the surroundings. In this article, a standard primary energy-based thermodynamic framework is presented that addresses energy efficacy fairly and accurately. It shows clearly that a thermally driven process consumes 2.5–3% of standard primary energy (SPE) when combined with power plants. A standard universal performance ratio-based evaluation method is proposed, which shows that the performance of all desalination processes lies within 10–14% of the thermodynamic limit. To achieve the 2030 sustainability goals, innovative processes are required that reach 25–30% of the thermodynamic limit. Full article
Open AccessArticle Reconstruction of PET Images Using Cross-Entropy and Field of Experts
Entropy 2019, 21(1), 83; https://doi.org/10.3390/e21010083
Received: 17 December 2018 / Revised: 7 January 2019 / Accepted: 14 January 2019 / Published: 18 January 2019
Viewed by 390 | PDF Full-text (598 KB) | HTML Full-text | XML Full-text
Abstract
The reconstruction of positron emission tomography data is a difficult task, particularly at low count rates because Poisson noise has a significant influence on the statistical uncertainty of positron emission tomography (PET) measurements. Prior information is frequently used to improve image quality. In [...] Read more.
The reconstruction of positron emission tomography data is a difficult task, particularly at low count rates, because Poisson noise has a significant influence on the statistical uncertainty of positron emission tomography (PET) measurements. Prior information is frequently used to improve image quality. In this paper, we propose the use of a field of experts to model the a priori structure and capture the anatomical spatial dependencies of PET images, addressing the problems of noise and low count data that make image reconstruction difficult. We reconstruct PET images using a modified MXE algorithm, which minimizes an objective function with the cross-entropy as a fidelity term, while the field of experts model is incorporated as a regularizing term. Comparisons with the expectation maximization algorithm and an iterative method with a prior penalizing relative differences showed that the proposed method can lead to accurate estimation of the image, especially for acquisitions at low count rates. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
Open AccessArticle Approximating Ground States by Neural Network Quantum States
Entropy 2019, 21(1), 82; https://doi.org/10.3390/e21010082
Received: 16 December 2018 / Revised: 6 January 2019 / Accepted: 16 January 2019 / Published: 17 January 2019
Viewed by 429 | PDF Full-text (403 KB) | HTML Full-text | XML Full-text
Abstract
Motivated by Carleo’s work (Science, 2017, 355: 602), we focus on finding the neural network quantum states approximation of the unknown ground state of a given Hamiltonian H in terms of the best relative error and explore the influences of sum, tensor product, [...] Read more.
Motivated by Carleo’s work (Science, 2017, 355: 602), we focus on finding the neural network quantum states approximation of the unknown ground state of a given Hamiltonian H in terms of the best relative error, and we explore the influence of the sum, tensor product, and local unitary transformations of Hamiltonians on the best relative error. We also illustrate our method with some examples. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle Partial Discharge Fault Diagnosis Based on Multi-Scale Dispersion Entropy and a Hypersphere Multiclass Support Vector Machine
Entropy 2019, 21(1), 81; https://doi.org/10.3390/e21010081
Received: 24 December 2018 / Revised: 9 January 2019 / Accepted: 15 January 2019 / Published: 17 January 2019
Viewed by 401 | PDF Full-text (5969 KB) | HTML Full-text | XML Full-text
Abstract
Partial discharge (PD) fault analysis is an important tool for insulation condition diagnosis of electrical equipment. In order to overcome the limitations of traditional PD fault diagnosis, a novel feature extraction approach based on variational mode decomposition (VMD) and multi-scale dispersion entropy (MDE) [...] Read more.
Partial discharge (PD) fault analysis is an important tool for diagnosing the insulation condition of electrical equipment. In order to overcome the limitations of traditional PD fault diagnosis, a novel feature extraction approach based on variational mode decomposition (VMD) and multi-scale dispersion entropy (MDE) is proposed, and a hypersphere multiclass support vector machine (HMSVM) is used for PD pattern recognition with the extracted PD features. First, the original PD signal is decomposed with VMD to obtain intrinsic mode functions (IMFs). Second, proper IMFs are selected according to the observed central frequencies, and the MDE values of each selected IMF are calculated. Principal component analysis (PCA) is then introduced to extract the effective principal components of the MDE values. Finally, the extracted principal factors are used as PD features and sent to the HMSVM classifier. Experimental results demonstrate that the VMD-MDE feature extraction method can extract effective characteristic parameters representing the dominant PD features, and recognition results verify the effectiveness and superiority of the proposed PD fault diagnosis method. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
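The dispersion entropy underlying the MDE features above maps each sample to one of c classes through the normal cumulative distribution function and measures the Shannon entropy of the resulting class patterns; MDE repeats this after coarse-graining the signal at several scales. A minimal single-scale sketch under illustrative parameter defaults (not the authors’ implementation):

```python
import math
from collections import Counter

def dispersion_entropy(x, m=2, c=3):
    """Single-scale dispersion entropy of a 1-D signal (sketch).

    x : sequence of floats, m : embedding dimension, c : number of classes.
    Returns a value normalized to [0, 1] by log(c**m).
    """
    n = len(x)
    mu = sum(x) / n
    sigma = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    # Map samples into (0, 1) through the normal CDF of the signal's own
    # mean and standard deviation, then quantize into classes 1..c.
    y = [0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2)))) for v in x]
    z = [min(c, max(1, round(c * yi + 0.5))) for yi in y]
    # Count dispersion patterns (length-m class sequences).
    patterns = Counter(tuple(z[i:i + m]) for i in range(n - m + 1))
    total = sum(patterns.values())
    de = -sum((cnt / total) * math.log(cnt / total)
              for cnt in patterns.values())
    return de / math.log(c ** m)
```

A smooth signal visits few patterns and scores low, while broadband noise visits nearly all c^m patterns and approaches the maximum of 1.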
Open AccessFeature PaperArticle Efficient High-Dimensional Quantum Key Distribution with Hybrid Encoding
Entropy 2019, 21(1), 80; https://doi.org/10.3390/e21010080
Received: 26 December 2018 / Revised: 11 January 2019 / Accepted: 14 January 2019 / Published: 17 January 2019
Viewed by 517 | PDF Full-text (626 KB) | HTML Full-text | XML Full-text
Abstract
We propose a schematic setup of quantum key distribution (QKD) with an improved secret key rate based on high-dimensional quantum states. Two degrees-of-freedom of a single photon, orbital angular momentum modes, and multi-path modes, are used to encode secret key information. Its practical [...] Read more.
We propose a schematic setup of quantum key distribution (QKD) with an improved secret key rate based on high-dimensional quantum states. Two degrees of freedom of a single photon, orbital angular momentum modes and multi-path modes, are used to encode the secret key information. Its practical implementation consists of optical elements that are within the reach of current technologies, such as a multiport interferometer. We show that the proposed protocol is feasible and offers an improved secret key rate compared with the previous two-dimensional protocol known as detector-device-independent QKD. Full article
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)
Open AccessArticle Quaternion Entropy for Analysis of Gait Data
Entropy 2019, 21(1), 79; https://doi.org/10.3390/e21010079
Received: 9 December 2018 / Revised: 9 January 2019 / Accepted: 15 January 2019 / Published: 17 January 2019
Viewed by 482 | PDF Full-text (918 KB) | HTML Full-text | XML Full-text
Abstract
Nonlinear dynamical analysis is a powerful approach to understanding biological systems. One of the most widely used metrics of system complexity is the Kolmogorov entropy. Its calculation requires long input signals without noise, which are very hard to obtain in real situations. [...] Read more.
Nonlinear dynamical analysis is a powerful approach to understanding biological systems. One of the most widely used metrics of system complexity is the Kolmogorov entropy. Its calculation requires long input signals without noise, which are very hard to obtain in real situations. Approximate entropy and sample entropy are statistics that allow entropy to be estimated directly from time signals. Building on these, a new measure for quaternion signals is introduced. This work presents an example of nonlinear time series analysis using the new quaternion approximate entropy to analyse human gait kinematic data. The quaternion entropy was applied to the quaternion signal that represents the segment orientations over time during human gait. The research was aimed at assessing the influence of both walking speed and ground slope on gait control during treadmill walking. Gait data were obtained with an optical motion capture system. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
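The quaternion entropy above extends scalar approximate/sample entropy to quaternion-valued orientation signals, essentially by replacing the scalar distance with a quaternion distance. The underlying scalar statistic can be sketched as follows (a simplified sample entropy, not the authors’ quaternion variant):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional
    probability that sequences matching for m points (Chebyshev distance
    within r times the signal's standard deviation) also match for m + 1."""
    n = len(x)
    mean = sum(x) / n
    tol = r * (sum((v - mean) ** 2 for v in x) / n) ** 0.5

    def matches(mm):
        # Pairs of length-mm templates within tolerance; the same n - m
        # starting points are used for both template lengths.
        tpl = [x[i:i + mm] for i in range(n - m)]
        return sum(
            1
            for i in range(len(tpl))
            for j in range(i + 1, len(tpl))
            if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= tol
        )

    a, b = matches(m + 1), matches(m)
    return -math.log(a / b) if a and b else float("inf")
```

A regular signal (e.g., a sine) scores low because matching templates tend to keep matching; noise scores high.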
Open AccessArticle Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest
Entropy 2019, 21(1), 78; https://doi.org/10.3390/e21010078
Received: 16 December 2018 / Revised: 10 January 2019 / Accepted: 10 January 2019 / Published: 16 January 2019
Viewed by 542 | PDF Full-text (3969 KB) | HTML Full-text | XML Full-text
Abstract
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses with traditional approaches. Traditional approaches to mapping accuracy assessment have been based on a confusion matrix, and hence are not only dependent on the availability of test [...] Read more.
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment are based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation in classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on the availability of test data and that enable the spatial characterisation of classification accuracy before the validation phase, promoting the assessment of error propagation within the classified imagery products. We compared the performance of the emerging deep neural network (DNN) technique with the popular random forest (RF) technique. Uncertainty assessment was implemented by calculating the Shannon entropy of the class probabilities predicted by DNN and RF for every pixel. The classification uncertainties of DNN and RF were quantified for two different hyperspectral image datasets—Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of the techniques, represented by a modified root mean square error (RMSE). The results indicate that, considering the modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN algorithm is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
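The uncertainty measure described above is simply the Shannon entropy of each pixel’s predicted class-probability vector, whether the probabilities come from a DNN softmax or from RF vote fractions. A minimal sketch (plain nested lists stand in for the image arrays):

```python
import math

def pixel_uncertainty(prob_maps):
    """Per-pixel Shannon entropy of predicted class probabilities.

    prob_maps : H x W x K nested lists, where prob_maps[i][j] is the K-class
    probability vector a classifier predicts for pixel (i, j).
    Returns an H x W entropy map (in bits); higher means less confident.
    """
    return [[-sum(p * math.log2(p) for p in pix if p > 0) for pix in row]
            for row in prob_maps]
```

A uniform four-class prediction yields the maximum of 2 bits, while a one-hot prediction yields 0 bits.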
Open AccessArticle Logical Structures Underlying Quantum Computing
Entropy 2019, 21(1), 77; https://doi.org/10.3390/e21010077
Received: 24 November 2018 / Revised: 12 January 2019 / Accepted: 12 January 2019 / Published: 16 January 2019
Viewed by 478 | PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
In this work we advance a generalization of quantum computational logics capable of dealing with some important examples of quantum algorithms. We outline an algebraic axiomatization of these structures. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
Open AccessArticle Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis
Entropy 2019, 21(1), 76; https://doi.org/10.3390/e21010076
Received: 5 November 2018 / Revised: 16 December 2018 / Accepted: 9 January 2019 / Published: 16 January 2019
Viewed by 407 | PDF Full-text (892 KB) | HTML Full-text | XML Full-text
Abstract
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we found a relation between the dynamical structure, i.e., effective interactions [...] Read more.
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we establish a relation between the dynamical structure, i.e., the effective interactions in MEP analysis, and the anatomical coupling structure of pulse-coupled networks. This relation quantitatively shows how closely the dynamical structure follows the anatomical coupling structure, and it helps to explain how a sparse coupling structure can lead to a sparse coding by effective interactions. Full article
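In MEP analysis, the effective interactions are the pairwise couplings of a maximum entropy (Ising-like) model fitted to the observed distribution of binary network states. For just two binary units the fit is exact and has a closed form, which the following toy sketch illustrates (a two-unit case only, not the paper’s network-scale analysis):

```python
import math
from collections import Counter

def pairwise_effective_interaction(states):
    """Effective coupling J of the two-unit maximum entropy model
    P(s1, s2) ~ exp(h1*s1 + h2*s2 + J*s1*s2), with s1, s2 in {0, 1}.

    For two binary units the maximum entropy fit is exact and gives
    J = ln( p11 * p00 / (p10 * p01) ), the log odds-ratio of the
    observed joint distribution.
    """
    counts = Counter(states)
    total = sum(counts.values())
    p = {k: counts[k] / total for k in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    return math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))
```

Independent units give J close to 0; positively correlated firing gives J > 0, mirroring how a coupling in the network shows up as a non-zero effective interaction.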
Open AccessArticle Effects of Silicon Content on the Microstructures and Mechanical Properties of (AlCrTiZrV)-Six-N High-Entropy Alloy Films
Entropy 2019, 21(1), 75; https://doi.org/10.3390/e21010075
Received: 4 December 2018 / Revised: 4 January 2019 / Accepted: 11 January 2019 / Published: 16 January 2019
Viewed by 454 | PDF Full-text (4743 KB) | HTML Full-text | XML Full-text
Abstract
A series of (AlCrTiZrV)-Six-N films with different silicon contents were deposited on monocrystalline silicon substrates by direct-current (DC) magnetron sputtering. The films were characterized by X-ray diffractometry (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), and nano-indentation techniques. [...] Read more.
A series of (AlCrTiZrV)-Six-N films with different silicon contents were deposited on monocrystalline silicon substrates by direct-current (DC) magnetron sputtering. The films were characterized by X-ray diffractometry (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HRTEM), and nano-indentation techniques. The effects of the silicon content on the microstructures and mechanical properties of the films were investigated. The experimental results show that the (AlCrTiZrV)N films grow in columnar grains and present a (200) preferential growth orientation. The addition of silicon leads to the disappearance of the (200) peak and to grain refinement of the (AlCrTiZrV)-Six-N films. Meanwhile, a reticular amorphous phase is formed, developing a nanocomposite structure in which the nanocrystalline structures are encapsulated by the amorphous phase. With increasing silicon content, the mechanical properties first improve and then deteriorate; the maximal hardness and modulus of the film reach 34.3 GPa and 301.5 GPa, respectively, at a silicon content (x) of 8% (volume percent). The strengthening effect of the (AlCrTiZrV)-Six-N film can be mainly attributed to the formation of the nanocomposite structure. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle Thermodynamic Analysis of Entropy Generation Minimization in Thermally Dissipating Flow Over a Thin Needle Moving in a Parallel Free Stream of Two Newtonian Fluids
Entropy 2019, 21(1), 74; https://doi.org/10.3390/e21010074
Received: 21 November 2018 / Revised: 25 December 2018 / Accepted: 28 December 2018 / Published: 16 January 2019
Viewed by 415 | PDF Full-text (2022 KB) | HTML Full-text | XML Full-text
Abstract
This article is devoted to studying the sustainability of entropy generation in an incompressible thermal flow of Newtonian fluids over a thin needle that is moving in a parallel stream. Two types of Newtonian fluids (water and air) are considered in this work. The [...] Read more.
This article is devoted to studying the sustainability of entropy generation in an incompressible thermal flow of Newtonian fluids over a thin needle that is moving in a parallel stream. Two types of Newtonian fluids (water and air) are considered in this work. The energy dissipation term is included in the energy equation. It is presumed that the free stream velocity u is in the positive axial direction (x-axis) and that the thin needle moves in the same or the opposite direction as the free stream. The reduced self-similar governing equations are solved numerically with the shooting technique combined with the fourth-order Runge-Kutta method. Using similarity transformations, expressions for the dimensionless volumetric entropy generation rate and the Bejan number are obtained. The effects of the Prandtl number, the Eckert number, and the dimensionless temperature parameter are discussed graphically in detail for water and air as the Newtonian fluids. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)
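The numerical scheme mentioned above, shooting combined with fourth-order Runge-Kutta, guesses the unknown initial slope, integrates the ODE forward, and adjusts the guess until the far boundary condition is met. A minimal sketch on a toy linear boundary-value problem y'' = -y, y(0) = 0, y(1) = 1 (not the paper’s similarity equations; bisection stands in for the slope search):

```python
import math

def rk4_terminal(f, y0, slope, x0, x1, steps=200):
    """Integrate y'' = f(x, y, y') from x0 to x1 with classical fourth-order
    Runge-Kutta on the first-order system (y, v = y'); returns y(x1) for the
    trial initial slope y'(x0) = slope."""
    h = (x1 - x0) / steps
    x, y, v = x0, y0, slope
    for _ in range(steps):
        k1y, k1v = v, f(x, y, v)
        k2y, k2v = v + h / 2 * k1v, f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = v + h / 2 * k2v, f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = v + h * k3v, f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y

def shoot(f, y0, y1, x0, x1, lo=-10.0, hi=10.0):
    """Shooting method: bisect on the unknown initial slope until the
    trajectory hits y(x1) = y1 (assumes the miss changes sign on [lo, hi])."""
    miss = lambda s: rk4_terminal(f, y0, s, x0, x1) - y1
    for _ in range(60):
        mid = (lo + hi) / 2
        if miss(lo) * miss(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For y'' = -y the exact solution is y = sin(x)/sin(1), so the recovered slope approaches 1/sin(1) ≈ 1.1884.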
Open AccessArticle Negation of Belief Function Based on the Total Uncertainty Measure
Entropy 2019, 21(1), 73; https://doi.org/10.3390/e21010073
Received: 9 November 2018 / Revised: 5 January 2019 / Accepted: 11 January 2019 / Published: 15 January 2019
Viewed by 435 | PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
The negation of probability provides a new way of looking at information representation. However, the negation of basic probability assignment (BPA) is still an open issue. To address this issue, a novel negation method of basic probability assignment based on total uncertainty measure [...] Read more.
The negation of probability provides a new way of looking at information representation. However, the negation of basic probability assignment (BPA) is still an open issue. To address this issue, a novel negation method for basic probability assignment based on the total uncertainty measure is proposed in this paper. The uncertainty of non-singleton elements in the power set is taken into account. The proposed negation method of BPA differs from the negation of a probability distribution in that the BPA of a certain element is reassigned to the other elements in the power set, where the weight of reassignment is proportional to the cardinality of the intersection of the element and each remaining element in the power set. Notably, the proposed negation method of BPA reduces to the negation of a probability distribution when the BPA reduces to a classical probability. Furthermore, it is proved mathematically that the proposed negation method of BPA is indeed based on maximum uncertainty. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
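The baseline that the proposed BPA negation generalizes is the negation of an ordinary probability distribution, in which each element receives an equal share of the probability mass of all the other elements. A minimal sketch of that baseline only (the paper’s cardinality-weighted BPA reassignment is not reproduced here):

```python
def negate_probability(p):
    """Negation of a probability distribution: each element receives an
    equal share of the mass assigned to the other elements,
    p_bar_i = (1 - p_i) / (n - 1), which again sums to 1."""
    n = len(p)
    return [(1 - pi) / (n - 1) for pi in p]
```

Repeated negation drives any distribution toward the uniform, maximum-entropy distribution, which is the sense in which negation moves toward maximum uncertainty.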
Open AccessArticle A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio
Entropy 2019, 21(1), 72; https://doi.org/10.3390/e21010072
Received: 10 December 2018 / Revised: 9 January 2019 / Accepted: 14 January 2019 / Published: 15 January 2019
Viewed by 383 | PDF Full-text (321 KB) | HTML Full-text | XML Full-text
Abstract
In the literature, we can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), involving the maximum entropy density approximation technique. The main drawback of [...] Read more.
In the literature, we can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolved output) that involve the maximum entropy density approximation technique. The main drawback of these algorithms is the heavy computational burden involved in calculating the expression for the conditional expectation. In addition, none of these techniques is applicable at signal-to-noise ratios lower than 7 dB. In this paper, I propose a new closed-form approximated expression for the conditional expectation based on a previously obtained expression in which the equalized output probability density function is calculated via the approximated input probability density function, which is itself approximated with the maximum entropy density approximation technique. This newly proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation based on the maximum entropy approximation technique. The simulation results indicate that the newly proposed algorithm with the newly proposed Lagrange multipliers is suitable for signal-to-noise ratio values down to 0 dB and has improved equalization performance, in terms of residual inter-symbol interference, compared with the previously obtained algorithms based on the conditional expectation obtained via the maximum entropy technique. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
Open AccessEditorial Acknowledgement to Reviewers of Entropy in 2018
Entropy 2019, 21(1), 71; https://doi.org/10.3390/e21010071
Published: 15 January 2019
Viewed by 480 | PDF Full-text (308 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
Open AccessArticle The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070
Received: 14 December 2018 / Revised: 4 January 2019 / Accepted: 8 January 2019 / Published: 15 January 2019
Viewed by 480 | PDF Full-text (1460 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to study the control of movement components (PM) and n-shapes have been found [...] Read more.
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to studying the control of movement components (PMs) and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs, their respective control, and the control tightness would display the n-shape. Furthermore, according to the minimal intervention principle, (H2) different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80 s tandem stances in five conditions (single-task and auditory n-back task; n = 1–4), and computed sample entropies on PM time series and two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily and, in contrast to H3, decreased further for the younger group. Nevertheless, we found n-shapes in most variables, with differing magnitudes, supporting H1 and H2. These results suggest that control tightness might deteriorate steadily with increased cognitive load in critical movements, despite the otherwise eminent n-shaped relationship. Full article
(This article belongs to the Section Complexity)
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert
Back to Top