
Table of Contents

Entropy, Volume 22, Issue 6 (June 2020) – 117 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The Coding Theorem Method (CTM) and Block Decomposition Method (BDM) have been serving the [...]
Open Access Article
Generalized Entropies, Variance and Applications
Entropy 2020, 22(6), 709; https://doi.org/10.3390/e22060709 - 26 Jun 2020
Abstract
The generalized cumulative residual entropy is a recently defined dispersion measure. In this paper, we obtain some further results for such a measure, in relation to the generalized cumulative residual entropy and the variance of random lifetimes. We show that it has an intimate connection with the non-homogeneous Poisson process. We also get new expressions, bounds and stochastic comparisons involving such measures. Moreover, the dynamic version of the mentioned notions is studied through the residual lifetimes and suitable aging notions. In this framework we achieve some findings of interest in reliability theory, such as a characterization for the exponential distribution, various results on k-out-of-n systems, and a connection to the excess wealth order. We also obtain similar results for the generalized cumulative entropy, which is a dual measure to the generalized cumulative residual entropy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
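The exponential characterization mentioned in the abstract can be checked numerically. A minimal sketch, assuming the common definition of the generalized cumulative residual entropy of order n, E_n(X) = (1/n!) ∫ F̄(x)(−log F̄(x))^n dx; the grid and function names are illustrative, not from the paper. For an Exp(λ) lifetime, every order gives 1/λ:

```python
import math
import numpy as np

def gcre(sf, n, grid):
    """Generalized cumulative residual entropy of order n:
    (1/n!) * integral of sf(x) * (-log sf(x))**n over the grid,
    where sf is the survival function of a nonnegative lifetime."""
    s = np.clip(sf(grid), 1e-300, 1.0)        # guard against log(0)
    integrand = s * (-np.log(s)) ** n / math.factorial(n)
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(grid)) / 2)

# For an Exp(lam) lifetime every order yields 1/lam, in line with the
# exponential characterization alluded to in the abstract.
lam = 2.0
x = np.linspace(0.0, 50.0, 200_001)
for n in (1, 2, 3):
    print(n, gcre(lambda t: np.exp(-lam * t), n, x))
```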
Open Access Article
A Novel Technique for Achieving the Approximated ISI at the Receiver for a 16QAM Signal Sent via a FIR Channel Based Only on the Received Information and Statistical Techniques
Entropy 2020, 22(6), 708; https://doi.org/10.3390/e22060708 - 26 Jun 2020
Abstract
A single-input-multiple-output (SIMO) channel is obtained from the use of an array of antennas in the receiver, where the same information is transmitted through different sub-channels and all received sequences are distinctly distorted versions of the same message. The inter-symbol interference (ISI) level of each sub-channel is unknown to the receiver. Thus, even when one or more sub-channels cause heavy ISI, all the information from all the sub-channels is still considered in the receiver. Obviously, if we knew the approximated ISI of each sub-channel, we could use in the receiver only those sub-channels with the lowest ISI levels to obtain improved system performance. In this paper, we present a systematic way of obtaining the approximated ISI of each sub-channel, modelled as a finite-impulse-response (FIR) channel with real-valued coefficients, for a 16QAM (16 quadrature amplitude modulation) source signal transmission. The approximated ISI is based on the maximum entropy density approximation technique, on the Edgeworth expansion up to order six, on the Laplace integral method and on the generalized Gaussian distribution (GGD). Although the approximated ISI was derived for the noiseless case, it was successfully tested for signal to noise ratios (SNR) down to 20 dB. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
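For context on the quantity being approximated above: when the FIR coefficients are known, the residual ISI has a standard closed form (the paper's contribution is estimating it blindly from the received data alone). A minimal sketch under that common definition; the example channel coefficients are made up, not taken from the paper:

```python
import numpy as np

def residual_isi_db(h):
    """Residual ISI of a real-valued FIR channel, in dB:
    ISI = (sum_k h_k^2 - max_k h_k^2) / max_k h_k^2,
    i.e., the power of all taps except the strongest, relative to it."""
    p = np.asarray(h, dtype=float) ** 2
    isi = (p.sum() - p.max()) / p.max()
    return 10.0 * np.log10(isi) if isi > 0 else float('-inf')

# Illustrative channel (coefficients are not from the paper):
h = [0.04, -0.05, 0.07, -0.21, 0.72, 0.36, 0.21, 0.03, 0.07]
print(round(residual_isi_db(h), 2))
```

A distortion-free channel (a single nonzero tap) gives an ISI of zero, i.e., minus infinity in dB.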
Open Access Article
Some Useful Integral Representations for Information-Theoretic Analyses
Entropy 2020, 22(6), 707; https://doi.org/10.3390/e22060707 - 26 Jun 2020
Abstract
This work is an extension of our earlier article, where a well-known integral representation of the logarithmic function was explored and was accompanied with demonstrations of its usefulness in obtaining compact, easily-calculable, exact formulas for quantities that involve expectations of the logarithm of a positive random variable. Here, in the same spirit, we derive an exact integral representation (in one or two dimensions) of the moment of a nonnegative random variable, or the sum of such independent random variables, where the moment order is a general positive non-integer real (also known as fractional moments). The proposed formula is applied to a variety of examples with an information-theoretic motivation, and it is shown how it facilitates their numerical evaluations. In particular, when applied to the calculation of a moment of the sum of a large number, n, of nonnegative random variables, it is clear that integration over one or two dimensions, as suggested by our proposed integral representation, is significantly easier than the alternative of integrating over n dimensions, as needed in the direct calculation of the desired moment. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
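The "one integral instead of n" point above can be made concrete. The representation below is a known identity of this general type for moment orders 0 < ρ < 1 (the paper's exact formulas may differ); the test case assumes i.i.d. Exp(1) summands, where S is Gamma(n, 1) and the fractional moment is known in closed form:

```python
import math
import numpy as np

def frac_moment_sum_exp(n, rho, tmin=-30.0, tmax=30.0, m=400_001):
    """E[S**rho] for S = X_1 + ... + X_n with X_i i.i.d. Exp(1), 0 < rho < 1,
    via a one-dimensional integral representation of the fractional moment:
        E[S**rho] = rho / Gamma(1-rho) * int_0^inf (1 - E[exp(-u S)]) u**(-rho-1) du,
    where E[exp(-u S)] = (1 + u)**(-n) factorizes over the independent summands.
    The integral is evaluated on a logarithmic grid (u = exp(t), du = u dt)."""
    t = np.linspace(tmin, tmax, m)
    u = np.exp(t)
    integrand = (1.0 - (1.0 + u) ** (-n)) * u ** (-rho)   # u**(-rho-1) * u
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)
    return rho / math.gamma(1.0 - rho) * integral

# S ~ Gamma(n, 1), so the exact value is Gamma(n + rho) / Gamma(n):
# a single 1-D integral replaces an n-dimensional one.
print(frac_moment_sum_exp(5, 0.5), math.gamma(5.5) / math.gamma(5.0))
```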
Open Access Article
Optimization of the Casualties’ Treatment Process: Blended Military Experiment
Entropy 2020, 22(6), 706; https://doi.org/10.3390/e22060706 - 25 Jun 2020
Abstract
At the battalion level, NATO ROLE1 medical treatment command focuses on the provision of primary health care, being the very first physician and higher medical equipment intervention for casualty treatment. ROLE1 has paramount importance in casualty reduction, representing a complex system in current operations. This study deals with an experiment on the optimization of ROLE1 according to three key parameters: the number of physicians, the number of ambulances and the distance between ROLE1 and the current battlefield. The very first step in this study is to design and implement a model of current battlefield casualties. The model uses friction data generated from an already executed computer-assisted exercise (CAX) while employing a constructive simulation to produce offense and defense scenarios on the flow of casualties. The next step in the study is to design and implement a model representing the transportation to ROLE1, its structure and behavior. The deterministic model of ROLE1, employing a system dynamics simulation paradigm, uses the previously generated casualty flows as inputs, representing human decision-making processes through the recorded CAX events. A factorial experimental design for the ROLE1 model revealed the recommended variants of the ROLE1 structure for both offensive and defensive operations. The overall recommendation is for the internal structure of ROLE1 to have three ambulances and three physicians for any kind of current operation and any distance between ROLE1 and the current battlefield within the limit of 20 min. This study provides novelty in the methodology of casualty estimation involving human decision-making factors as well as the optimization of medical treatment processes through experimentation with the process model. Full article
Open Access Article
Sharp Second-Order Pointwise Asymptotics for Lossless Compression with Side Information
Entropy 2020, 22(6), 705; https://doi.org/10.3390/e22060705 - 25 Jun 2020
Viewed by 325
Abstract
The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic [...] Read more.
The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic lower bound for the description lengths achieved by an arbitrary sequence of compressors. This implies that for ergodic source-side information pairs, the conditional entropy rate is the best achievable asymptotic lower bound to the rate, not just in expectation but with probability one. Under appropriate mixing conditions, a central limit theorem and a law of the iterated logarithm are proved, describing the inevitable fluctuations of the second-order asymptotically best possible rate. An idealised version of Lempel-Ziv coding with side information is shown to be universally first- and second-order asymptotically optimal, under the same conditions. These results are in part based on a new almost-sure invariance principle for the conditional information density, which may be of independent interest. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
Vibrations in Continuous Drive Friction Welding (CDFW)
Entropy 2020, 22(6), 704; https://doi.org/10.3390/e22060704 - 24 Jun 2020
Abstract
Continuous drive friction welding is a solid-state welding process that has been experimentally proven to be a fast and reliable method. This is a complex process; deformations in the viscosity of a material alter the friction between the surfaces of the pieces. All these dynamics cause changes in the vibration signals; the interpretation of these signals can reveal important information. The vibration signals generated during the friction and forging stages are measured on the stationary part of the structure to determine the influence of the manipulated variables on the time domain statistical characteristics (root mean square, peak value, crest factor, and kurtosis). In the frequency domain, empirical mode decomposition is used to characterize frequencies. It was observed that it is possible to identify the effects of the manipulated variables on the calculated statistical characteristics. The results also indicate that the effect of manipulated variables is stronger on low-frequency signals. Full article
(This article belongs to the Section Multidisciplinary Applications)
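The four time-domain features named above have standard definitions and can be computed directly. A minimal sketch with the textbook sinusoid check (the non-excess kurtosis convention E[x⁴]/E[x²]² is assumed here; note that scipy.stats.kurtosis subtracts 3 by default):

```python
import numpy as np

def time_features(x):
    """Time-domain statistics: RMS, peak value, crest factor, and
    kurtosis (non-excess convention, for a zero-mean signal)."""
    x = np.asarray(x, dtype=float)
    rms = float(np.sqrt(np.mean(x ** 2)))
    peak = float(np.max(np.abs(x)))
    crest = peak / rms
    kurt = float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)
    return rms, peak, crest, kurt

# A pure sinusoid sampled over whole periods recovers the textbook values:
# RMS = A/sqrt(2), crest factor = sqrt(2), kurtosis = 1.5.
t = np.arange(1000)
rms, peak, crest, kurt = time_features(np.sin(2 * np.pi * t / 1000))
print(rms, peak, crest, kurt)
```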
Open Access Article
Entropy-Based Estimation of Event-Related De/Synchronization in Motor Imagery Using Vector-Quantized Patterns
Entropy 2020, 22(6), 703; https://doi.org/10.3390/e22060703 - 24 Jun 2020
Abstract
Assessment of brain dynamics elicited by motor imagery (MI) tasks contributes to clinical and learning applications. In this regard, Event-Related Desynchronization/Synchronization (ERD/S) is computed from electroencephalographic signals, which show considerable variations in complexity. We present an entropy-based method, termed VQEnt, for estimation of ERD/S using quantized stochastic patterns as a symbolic space, aiming to improve their discriminability and physiological interpretability. The proposed method builds the probabilistic priors by assessing the Gaussian similarity between the input measured data and their reduced vector-quantized representation. The validation results on a bi-class motor imagery task database (left and right hand) prove that VQEnt holds symbols that encode several neighboring samples, providing similar or even better accuracy than the other baseline sample-based algorithms of entropy estimation. Besides, the estimated ERD/S time-series are close enough to the trajectories extracted by the variational percentage of EEG signal power and fulfill the physiological MI paradigm. In BCI literate individuals, the VQEnt estimator presents the most accurate outcomes at a lower number of electrodes placed in the sensorimotor cortex, so that a reduced channel set directly involved with the MI paradigm is enough to discriminate between tasks, providing an accuracy similar to that of the whole electrode set. Full article
Open Access Feature Paper Article
Exergy and Exergoeconomic Analysis of a Cogeneration Hybrid Solar Organic Rankine Cycle with Ejector
Entropy 2020, 22(6), 702; https://doi.org/10.3390/e22060702 - 24 Jun 2020
Abstract
Solar energy is utilized in a combined ejector refrigeration system with an organic Rankine cycle (ORC) to produce a cooling effect and generate electrical power. This study aims at increasing the utilized share of the collected solar thermal energy by inserting an ORC into the system. As the ejector refrigeration cycle reaches its maximum coefficient of performance (COP), the ORC starts working and generating electrical power. This electricity is used to run the circulating pumps and the control system, which makes the system autonomous. For the ejector refrigeration system, R134a refrigerant is selected as the working fluid for its performance characteristics and environmentally friendly nature. A COP of 0.53 was obtained for the ejector refrigeration cycle. The combined cycle of the solar ejector refrigeration and ORC is modeled in EBSILON Professional. Different parameters, such as the generator temperature and pressure, condenser temperature and pressure, and entrainment ratio, are studied, and the effect of these parameters on the cycle COP is investigated. Exergy, economic, and exergoeconomic analyses of the hybrid system are carried out to identify the thermodynamic and cost inefficiencies present in various components of the system. Full article
(This article belongs to the Special Issue Energy Technology and Thermodynamics)
Open Access Article
Entropy-Based Strategies for Rapid Pre-Processing and Classification of Time Series Data from Single-Molecule Force Experiments
Entropy 2020, 22(6), 701; https://doi.org/10.3390/e22060701 - 23 Jun 2020
Abstract
Recent advances in single-molecule science have revealed an astonishing number of details on the microscopic states of molecules, which in turn defined the need for simple, automated processing of numerous time-series data. In particular, large datasets of time series of single protein molecules have been obtained using laser optical tweezers. In this system, each molecular state has a separate time series with a relatively uneven composition from the viewpoint of local descriptive statistics. In the past, the handling of uncertain data quality and the heterogeneity of molecular states was left to human experience. Because such data-processing knowledge is not directly transferable to a black-box framework for efficient classification, the rapid evaluation of a large number of simultaneously measured time series samples may constitute a serious obstacle. To solve this particular problem, we have implemented a supervised learning method that combines local entropic models with the global Lehmer average. We find that this methodological combination is suitable for fast and simple categorization, which enables rapid pre-processing of the data with minimal optimization and user intervention. Full article
(This article belongs to the Section Entropy and Biology)
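The global Lehmer average mentioned in the abstract has a standard definition; a minimal sketch (the sample values and parameter choices are illustrative, not the paper's):

```python
import numpy as np

def lehmer_mean(x, p):
    """Lehmer mean L_p(x) = sum(x_i**p) / sum(x_i**(p-1)) for positive x.
    p = 1 gives the arithmetic mean, p = 0 the harmonic mean,
    p = 2 the contraharmonic mean; p tunes sensitivity to large values."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** p) / np.sum(x ** (p - 1)))

vals = np.array([1.0, 2.0, 4.0])
print(lehmer_mean(vals, 1))   # arithmetic mean, 7/3
print(lehmer_mean(vals, 0))   # harmonic mean, 12/7
print(lehmer_mean(vals, 2))   # contraharmonic mean, 21/7
```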
Open Access Feature Paper Article
Optimized Piston Motion for an Alpha-Type Stirling Engine
Entropy 2020, 22(6), 700; https://doi.org/10.3390/e22060700 - 23 Jun 2020
Abstract
The Stirling engine is one of the most promising devices for the recovery of waste heat. Its power output can be optimized by several means, in particular by an optimized piston motion. Here, we investigate its potential performance improvements in the presence of dissipative processes. In order to ensure the possibility of a technical implementation and the simplicity of the optimization, we restrict the possible piston movements to a parametrized class of smooth piston motions. In this theoretical study the engine model is based on endoreversible thermodynamics, which allows us to incorporate non-equilibrium heat and mass transfer as well as the friction of the piston motion. The regenerator of the Stirling engine is modeled as ideal. An investigation of the impact of the individual loss mechanisms on the resulting optimized motion is carried out for a wide range of parameter values. We find that an optimization within our restricted piston motion class leads to a power gain of about 50% on average. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Open Access Article
Risk Evaluation for a Manufacturing Process Based on a Directed Weighted Network
Entropy 2020, 22(6), 699; https://doi.org/10.3390/e22060699 - 23 Jun 2020
Abstract
The quality of a manufacturing process can be represented by the complex coupling relationship between quality characteristics, which is defined by a directed weighted network to evaluate the risk of the manufacturing process. A multistage manufacturing process model is established to extract the quality information, and the quality characteristics of each process are mapped to nodes of the network. Partial mutual information from mixed embedding (PMIME) is used to analyze the causal effect between quality characteristics, wherein the causal relationships are mapped as the directed edges, while the magnitudes of the causal effects are defined as the weights of the edges. The node centrality is measured based on information entropy theory, and the influence of a node is divided into two parts, namely local and indirect effects. Moreover, the entropy value of the directed weighted network is determined as the weighted average of the centrality of the nodes, and this value is defined as the risk of the manufacturing process. Finally, the method is verified on a public dataset. Full article
Open Access Article
Modelling Consciousness within Mental Monism: An Automata-Theoretic Approach
Entropy 2020, 22(6), 698; https://doi.org/10.3390/e22060698 - 22 Jun 2020
Abstract
Models of consciousness are usually developed within physical monist or dualistic frameworks, in which the structure and dynamics of the mind are derived from the workings of the physical brain. Little attention has been given to modelling consciousness within a mental monist framework, deriving the structure and dynamics of the mental world from primitive mental constituents only—with no neural substrate. Mental monism is gaining attention as a candidate solution to Chalmers’ Hard Problem on philosophical grounds, and it is therefore timely to examine possible formal models of consciousness within it. Here, I argue that the austere ontology of mental monism places certain constraints on possible models of consciousness, and propose a minimal set of hypotheses that a model of consciousness (within mental monism) should respect. From those hypotheses, it would be possible to construct many formal models that permit universal computation in the mental world, through cellular automata. We need further hypotheses to define transition rules for particular models, and I propose a transition rule with the unusual property of deep copying in the time dimension. Full article
(This article belongs to the Special Issue Models of Consciousness)
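Cellular automata capable of universal computation, as invoked above, have a canonical example in elementary Rule 110; a generic sketch of its update rule (an illustration of CA universality in general, not the author's transition rule, which involves deep copying in time):

```python
def step_rule110(cells):
    """One synchronous update of elementary cellular automaton Rule 110
    on a finite row of 0/1 cells with periodic boundary conditions."""
    n = len(cells)
    rule = 110  # bit k of 110 is the output for the 3-cell neighborhood pattern k
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# A single live cell grows a characteristic pattern toward its left.
row = [0] * 15 + [1]
for _ in range(6):
    row = step_rule110(row)
print(''.join(map(str, row)))
```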
Open Access Feature Paper Article
Large Deviations for Continuous Time Random Walks
Entropy 2020, 22(6), 697; https://doi.org/10.3390/e22060697 - 22 Jun 2020
Abstract
Recent observations of random walks in complex environments, such as the cell and other glassy systems, have revealed that the spreading of particles, at its tails, follows a spatial exponential decay instead of the canonical Gaussian. We use the widely applicable continuous time random walk model and obtain the large deviation description of the propagator. Under mild conditions, namely that the distribution of microscopic jump lengths decays exponentially or faster (i.e., Lévy-like power-law distributed jump lengths are excluded) and that the distribution of waiting times is analytical for short waiting times, the spreading of particles follows an exponential decay at large distances, with a logarithmic correction. We show how anti-bunching of jump events reduces the effect, while bunching and intermittency enhance it. We employ exact solutions of the continuous time random walk model to test the large deviation theory. Full article
(This article belongs to the Special Issue New Trends in Random Walks)
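A minimal simulation sketch of the continuous time random walk model analyzed above; the exponential waiting times and Gaussian jump lengths are illustrative choices satisfying the quoted decay conditions, not the paper's specific model:

```python
import numpy as np

def ctrw_positions(n_particles, t_max, seed=0):
    """Positions at time t_max of independent continuous-time random walkers:
    i.i.d. Exp(1) waiting times between jumps and i.i.d. standard-normal
    jump lengths. A walker whose next waiting time would pass t_max stops."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(n_particles)
    t = np.zeros(n_particles)
    active = np.ones(n_particles, dtype=bool)
    while active.any():
        t[active] += rng.exponential(1.0, size=int(active.sum()))
        jump_now = active & (t <= t_max)          # jump happens before t_max
        pos[jump_now] += rng.standard_normal(int(jump_now.sum()))
        active = jump_now
    return pos

pos_final = ctrw_positions(20_000, t_max=10.0)
print(pos_final.mean(), pos_final.std())  # mean ≈ 0, std ≈ sqrt(t_max)
```

With Exp(1) waiting times the jump count is Poisson(t_max), so the bulk of the distribution is diffusive; probing the exponential far tails requires far larger samples than this sketch draws.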
Open Access Feature Paper Article
Quantum-Gravity Stochastic Effects on the de Sitter Event Horizon
Entropy 2020, 22(6), 696; https://doi.org/10.3390/e22060696 - 22 Jun 2020
Abstract
The stochastic character of the cosmological constant arising from the non-linear quantum-vacuum Bohm interaction in the framework of the manifestly-covariant theory of quantum gravity (CQG theory) is pointed out. This feature is shown to be consistent with the axiomatic formulation of quantum gravity based on the hydrodynamic representation of the same CQG theory developed recently. The conclusion follows by investigating the indeterminacy properties of the probability density function and its representation associated with the quantum gravity state, which corresponds to a hydrodynamic continuity equation that satisfies the unitarity principle. As a result, the corresponding form of stochastic quantum-modified Einstein field equations is obtained and shown to admit a stochastic cosmological de Sitter solution for the space-time metric tensor. The analytical calculation of the stochastic averages of relevant physical observables is obtained. These include in particular the radius of the de Sitter sphere fixing the location of the event horizon and the expression of the Hawking temperature associated with the related particle tunneling effect. Theoretical implications for cosmology and field theories are pointed out. Full article
(This article belongs to the Special Issue Stochastic and Hydrodynamic Approaches to Quantum Mechanics)
Open Access Article
Gait Recognition Method of Underground Coal Mine Personnel Based on Densely Connected Convolution Network and Stacked Convolutional Autoencoder
Entropy 2020, 22(6), 695; https://doi.org/10.3390/e22060695 - 21 Jun 2020
Abstract
Biological recognition methods often use biological characteristics such as the human face, iris, fingerprint, and palm print; however, such images often become blurred under the limitation of the complex environment of the underground, which leads to low identification rates of underground coal mine personnel. A gait recognition method via similarity learning named Two-Stream neural network (TS-Net) is proposed based on a densely connected convolution network (DenseNet) and stacked convolutional autoencoder (SCAE). The mainstream network based on DenseNet is mainly used to learn the similarity of dynamic deep features containing spatiotemporal information in the gait pattern. The auxiliary stream network based on SCAE is used to learn the similarity of static invariant features containing physiological information. Moreover, a novel feature fusion method is adopted to achieve the fusion and representation of dynamic and static features. The extracted features are robust to angle, clothing, miner hats, waterproof shoes, and carrying conditions. The method was evaluated on the challenging CASIA-B gait dataset and the collected gait dataset of underground coal mine personnel (UCMP-GAIT). Experimental results show that the method is effective and feasible for the gait recognition of underground coal mine personnel. Besides, compared with other gait recognition methods, the recognition accuracy has been significantly improved. Full article
Open Access Article
Relative Consistency of Sample Entropy Is Not Preserved in MIX Processes
Entropy 2020, 22(6), 694; https://doi.org/10.3390/e22060694 - 21 Jun 2020
Abstract
Relative consistency is a notion related to entropic parameters, most notably to Approximate Entropy and Sample Entropy. It is a central characteristic assumed for, e.g., biomedical and economic time series, since it allows the comparison between different time series at a single value of the threshold parameter r. There is no formal proof for this property, yet it is generally accepted that it is true. Relative consistency in both Approximate Entropy and Sample Entropy was first tested with the MIX process. In the seminal paper by Richman and Moorman, it was shown that Approximate Entropy lacked the property in cases in which Sample Entropy did not. In the present paper, we show that relative consistency is not preserved for MIX processes if enough noise is added, yet it is preserved for another process, which we define as the sum of a sinusoidal and a stochastic element, no matter how much noise is present. The analysis presented in this paper is only possible because of the existence of the very fast NCM algorithm for calculating correlation sums and thus also Sample Entropy. Full article
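Sample Entropy and the MIX process both have standard definitions (Richman and Moorman); a compact sketch assuming the usual conventions (the NCM algorithm the abstract mentions is a fast alternative for the same quantity and is not reproduced here):

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -ln(A/B), where B counts template pairs of length m
    and A pairs of length m+1 within Chebyshev distance r (the same n-m
    starting points are used for both template lengths)."""
    n = len(x)
    def matches(length):
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b)

def mix(p, n, seed=0):
    """MIX(p) process: each sample is, with probability p, uniform noise on
    [-sqrt(3), sqrt(3)], and otherwise the sinusoid sqrt(2)*sin(2*pi*j/12)."""
    rng = random.Random(seed)
    return [rng.uniform(-math.sqrt(3), math.sqrt(3)) if rng.random() < p
            else math.sqrt(2) * math.sin(2 * math.pi * j / 12)
            for j in range(n)]

# A perfectly regular series is maximally predictable, so its SampEn is zero.
print(sample_entropy([5.0] * 30))
```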
Open Access Article
Quantum Photovoltaic Cells Driven by Photon Pulses
Entropy 2020, 22(6), 693; https://doi.org/10.3390/e22060693 - 20 Jun 2020
Abstract
We investigate the quantum thermodynamics of two quantum systems, a two-level system and a four-level quantum photocell, each driven by photon pulses as a quantum heat engine. We set these systems to be in thermal contact only with a cold reservoir, while the heat (energy) source, conventionally given by a hot thermal reservoir, is supplied by a sequence of photon pulses. The dynamics of each system is governed by a coherent interaction due to photon pulses in terms of the Jaynes-Cummings Hamiltonian, together with the system-bath interaction described by the Lindblad master equation. We calculate the thermodynamic quantities for the two-level system and the quantum photocell, including the change in system energy, the power delivered by photon pulses, the power output to an external load, the heat dissipated to a cold bath, and the entropy production. We thereby demonstrate how a quantum photocell in the cold bath can operate as a continuum quantum heat engine with a sequence of photon pulses continuously applied. We specifically introduce the power efficiency of the quantum photocell as the ratio of the output power delivered to an external load, with its current and voltage, to the input power delivered by the photon pulses. Our study indicates the possibility that a quantum system driven by external fields can act as an efficient quantum heat engine under non-equilibrium thermodynamics. Full article
Open Access Article
Electro-Osmotic Behavior of Polymeric Cation-Exchange Membranes in Ethanol-Water Solutions
Entropy 2020, 22(6), 692; https://doi.org/10.3390/e22060692 - 20 Jun 2020
Abstract
The aim of this work is to apply linear non-equilibrium thermodynamics to study the electrokinetic properties of three cation-exchange membranes of different structures in ethanol–water electrolyte solutions. To this end, liquid uptake and electro-osmotic permeability were estimated with potassium chloride ethanol–water solutions of different ethanol proportions in the solvent. Current–voltage curves were also measured for each membrane system to estimate the energy dissipation due to the Joule effect. Considering the Onsager reciprocity relations, the streaming potential coefficient was discussed in terms of the ethanol content of the solutions and the membrane structure. The results showed that the more porous, heterogeneous membrane presented lower values of liquid uptake and the streaming potential coefficient with increasing ethanol content, whereas the denser, homogeneous membrane showed higher values of both solvent uptake and the streaming coefficient at intermediate ethanol contents. Full article
(This article belongs to the Special Issue Thermodynamics of Materials)
Open Access Article
A New Belief Entropy in Dempster–Shafer Theory Based on Basic Probability Assignment and the Frame of Discernment
Entropy 2020, 22(6), 691; https://doi.org/10.3390/e22060691 - 20 Jun 2020
Viewed by 448
Abstract
Dempster–Shafer (D-S) theory has been widely used in many applications, especially in the measurement of information uncertainty. However, within D-S theory, how to use belief entropy to measure uncertainty is still an open issue. In this paper, we list some significant properties of belief entropies. The main contribution of this paper is to propose a new entropy, for which several properties are discussed. Our new model has two components. The first is Nguyen entropy. The second is the product of the cardinality of the frame of discernment (FOD) and Dubois entropy. In addition, under certain conditions, the new belief entropy reduces to Shannon entropy. Compared with existing entropies, the new entropy accounts for the impact of the FOD. Through numerical examples and simulation, the proposed belief entropy is shown to measure uncertainty accurately. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
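The two named components can be computed directly from a basic probability assignment. The sketch below uses an illustrative BPA (the authors' exact combination of the components is not reproduced here) and checks the reduction property: when every focal element is a singleton, Nguyen entropy coincides with Shannon entropy and Dubois entropy vanishes.

```python
import math

def nguyen_entropy(bpa):
    """Nguyen entropy: -sum m(A) * log2 m(A) over focal elements A."""
    return -sum(m * math.log2(m) for m in bpa.values() if m > 0)

def dubois_entropy(bpa):
    """Dubois-Prade (non-specificity) entropy: sum m(A) * log2 |A|."""
    return sum(m * math.log2(len(A)) for A, m in bpa.items())

# Illustrative BPA over the FOD {a, b, c}; focal elements are frozensets.
bpa = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
print(nguyen_entropy(bpa), dubois_entropy(bpa))

# When every focal element is a singleton, the BPA is an ordinary probability
# distribution: Nguyen entropy reduces to Shannon entropy, Dubois entropy is 0.
singleton = {frozenset("a"): 0.5, frozenset("b"): 0.25, frozenset("c"): 0.25}
print(nguyen_entropy(singleton), dubois_entropy(singleton))  # 1.5 0.0
```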
Open Access Article
An Upper Bound on the Error Induced by Saddlepoint Approximations—Applications to Information Theory
Entropy 2020, 22(6), 690; https://doi.org/10.3390/e22060690 - 20 Jun 2020
Viewed by 420
Abstract
This paper introduces an upper bound on the absolute difference between: (a) the cumulative distribution function (CDF) of the sum of a finite number of independent and identically distributed random variables with finite absolute third moment; and (b) a saddlepoint approximation of such a CDF. This upper bound, which is particularly precise in the regime of large deviations, is used to study the dependence testing (DT) bound and the meta-converse (MC) bound on the decoding error probability (DEP) in point-to-point memoryless channels. Often, these bounds cannot be calculated analytically, and thus lower and upper bounds become particularly useful. Within this context, the main results include new upper and lower bounds on the DT and MC bounds, respectively. Numerical experiments with these bounds are presented for the binary symmetric channel, the additive white Gaussian noise channel, and the additive symmetric α-stable noise channel. Full article
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives)
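The large-deviation regime the bound targets can be illustrated with a quick Monte Carlo check of how the plain Gaussian (CLT) approximation of a sum's CDF degrades in the tail. The exponential summands and every constant below are illustrative assumptions, not the channel models studied in the paper.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 200_000

# Sum of n iid Exponential(1) variables (finite absolute third moment);
# an illustrative choice of distribution, not taken from the paper.
sums = rng.exponential(1.0, size=(trials, n)).sum(axis=1)

# Gaussian (CLT) approximation of the sum's CDF, evaluated about four
# standard deviations above the mean -- a large-deviation point.
t = n + 4 * sqrt(n)
gauss_tail = 1 - 0.5 * (1 + erf((t - n) / sqrt(2 * n)))

emp_tail = np.mean(sums > t)
# The empirical tail exceeds the Gaussian one by a large relative factor:
# exactly the regime where saddlepoint approximations are preferred.
print(emp_tail, gauss_tail)
```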
Open Access Article
Exergoeconomic Analysis of Corn Drying in a Novel Industrial Drying System
Entropy 2020, 22(6), 689; https://doi.org/10.3390/e22060689 - 20 Jun 2020
Viewed by 397
Abstract
The improvement of the design and operation of energy conversion systems is a theme of global concern. As an energy-intensive operation, industrial drying of agricultural products has also attracted significant attention in recent years. Taking a novel industrial corn drying system with a drying capacity of 5.5 t/h as a study case, and based on existing exergoeconomic and exergetic analysis methodology, the present work investigated the exergetic and economic performance of the drying system and identified its energy use deficiencies. The results showed that the average drying rate for corn drying in the system is 1.98 g water/(g dry matter·h). The average exergy rate for dehydrating the moisture from the corn kernels is 345.22 kW, and the exergy efficiency of the drying chamber ranges from 14.81% to 40.10%. The average cost of producing 1 GJ of exergy for removing water from wet corn kernels is USD 25.971, while the average cost of removing 1 kg of water is USD 0.159. The results also indicated that, from an energy perspective, the combustion chamber should be optimized first, while the drying chamber should be given priority from the exergoeconomic perspective. These results help to further understand and optimize the drying process from both energetic and exergoeconomic perspectives and aid the formulation of a scientific index for industrial drying of agricultural products. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article
Automatic Detection of Depression in Speech Using Ensemble Convolutional Neural Networks
Entropy 2020, 22(6), 688; https://doi.org/10.3390/e22060688 - 20 Jun 2020
Viewed by 441
Abstract
This paper proposes a speech-based method for automatic depression classification. The system is based on ensemble learning for Convolutional Neural Networks (CNNs) and is evaluated using the data and the experimental protocol provided in the Depression Classification Sub-Challenge (DCC) at the 2016 Audio–Visual Emotion Challenge (AVEC-2016). In the pre-processing phase, speech files are represented as sequences of log-spectrograms and randomly sampled to balance positive and negative samples. For the classification task itself, an architecture based on One-Dimensional Convolutional Neural Networks suited to this task is first built. Then, several of these CNN-based models are trained with different initializations, and the corresponding individual predictions are fused by an ensemble-averaging algorithm and combined per speaker to obtain the final decision. The proposed ensemble system achieves satisfactory results on the DCC at AVEC-2016 in comparison with a reference system based on Support Vector Machines and hand-crafted features, with a CNN+LSTM-based system called DepAudionet, and with a single CNN-based classifier. Full article
(This article belongs to the Section Multidisciplinary Applications)
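The fusion steps described above, averaging model outputs and then combining them per speaker, can be sketched as follows. The scores below are mock stand-ins for trained CNN outputs, and the per-speaker combination rule (a simple mean with a 0.5 threshold) is an illustrative assumption.

```python
import numpy as np

# Mock per-segment depression scores from three independently initialized
# models (stand-ins for trained CNNs; values are illustrative only).
# Shape: (n_models, n_segments) -- probability of the "depressed" class.
model_scores = np.array([
    [0.62, 0.55, 0.48, 0.71],
    [0.58, 0.49, 0.52, 0.66],
    [0.65, 0.57, 0.45, 0.69],
])

# Ensemble averaging: mean of the individual model outputs per segment.
ensemble = model_scores.mean(axis=0)

# Per-speaker decision: segments are grouped by speaker and their averaged
# scores combined (here by a simple mean) before thresholding at 0.5.
speaker_of_segment = np.array([0, 0, 1, 1])
decisions = {}
for spk in np.unique(speaker_of_segment):
    score = ensemble[speaker_of_segment == spk].mean()
    decisions[spk] = int(score >= 0.5)

print(ensemble, decisions)
```

Averaging over differently initialized models reduces the variance of any single classifier, which is the motivation for the ensemble design.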
Open Access Article
Groupwise Non-Rigid Registration with Deep Learning: An Affordable Solution Applied to 2D Cardiac Cine MRI Reconstruction
Entropy 2020, 22(6), 687; https://doi.org/10.3390/e22060687 - 19 Jun 2020
Viewed by 460
Abstract
Groupwise (GW) image registration is customarily used for subsequent processing in medical imaging. However, it is computationally expensive due to the repeated calculation of transformations and gradients. In this paper, we propose a deep learning (DL) architecture that achieves GW elastic registration of a 2D dynamic sequence on an affordable average GPU. Our solution, referred to as dGW, is a simplified version of the well-known U-net. In our GW solution, the image that the other images are registered to, referred to in the paper as the template image, is obtained iteratively together with the registered images. Design and evaluation have been carried out using 2D cine cardiac MR slices from two databases consisting of 89 and 41 subjects, respectively. The first database was used for training and validation with a 66.6–33.3% split. The second one was used for validation (50%) and testing (50%). Additional network hyperparameters, which are, in essence, those that control the degree of transformation smoothness, are obtained by means of a forward selection procedure. Our results show a 9-fold runtime reduction with respect to an optimization-based implementation; in addition, using the well-known structural similarity (SSIM) index, we have obtained significant differences between dGW and an alternative DL solution based on Voxelmorph. Full article
Open Access Article
Exchange-Traded Funds on European Markets: Has Critical Mass Been Reached? Implications for Financial Systems
Entropy 2020, 22(6), 686; https://doi.org/10.3390/e22060686 - 19 Jun 2020
Viewed by 386
Abstract
Exchange-traded funds (ETFs) are one of the most rapidly expanding categories of financial products in Europe. One of the key yet still unanswered questions is whether European ETF markets have reached a size at which they could affect financial systems. In our study, we examine 13 European countries over the period 2004–2017 in order to trace whether the share of ETFs in the total assets of investment funds has reached the ‘critical’ level that makes their further growth possible and can be associated with an influence on the financial system. We use a novel methodological approach that identifies the ‘critical mass’ along diffusion trajectories. Our results show that, in 10 countries, the share of ETFs in the assets of investment funds increased. Still, in most countries, the share of ETFs did not exceed 1%. Estimates of the diffusion models indicate that the growth of ETF shares was most dynamic and relatively most stable in Switzerland and the United Kingdom. Results of the critical mass analysis imply that its achievement may be forecast exclusively in these two cases. However, even in these cases there is no substantial evidence for a possible significant influence of ETFs on the local financial systems. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
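Diffusion trajectories of the kind fitted in such studies are commonly modeled by a logistic curve. The sketch below uses invented parameters (not fitted to the paper's data) and marks the inflection point, one simple proxy for where self-sustaining growth, i.e. "critical mass", could be placed; the authors' identification procedure is more elaborate.

```python
import numpy as np

def logistic_share(t, ceiling, rate, midpoint):
    """Logistic diffusion trajectory often used to model product adoption."""
    return ceiling / (1 + np.exp(-rate * (t - midpoint)))

# Illustrative parameters (not fitted to the paper's data): the ETF share of
# investment-fund assets saturates at 4% with its inflection in year 10.
ceiling, rate, midpoint = 4.0, 0.5, 10.0
t = np.arange(0, 21)
share = logistic_share(t, ceiling, rate, midpoint)

# Growth speed of the logistic: d(share)/dt = rate * share * (1 - share/ceiling),
# maximized at half the ceiling -- the inflection point of the trajectory.
speed = rate * share * (1 - share / ceiling)
t_star = int(t[np.argmax(speed)])
print(t_star, share[np.argmax(speed)])
```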
Open Access Article
Multiscale Entropy Feature Extraction Method of Running Power Equipment Sound
Entropy 2020, 22(6), 685; https://doi.org/10.3390/e22060685 - 19 Jun 2020
Viewed by 365
Abstract
Equipment condition monitoring based on computer hearing is a new pattern recognition approach, and systems built on it have the advantages of being non-contact and having strong early-warning abilities. Extracting effective features from the sound data of running power equipment helps to improve the equipment monitoring accuracy. However, the sound of running equipment often has the characteristics of strong noise, non-linearity and non-stationarity, which makes it difficult to extract features. To solve this problem, a feature extraction method based on the improved complementary ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and multiscale improved permutation entropy (MIPE) is proposed. Firstly, ICEEMDAN is utilized to obtain a group of intrinsic mode functions (IMFs) from the sound of running power equipment. The noise IMFs are then identified and eliminated through the mutual information (MI) and mean mutual information (meanMI) of the IMFs. Next, the normalized mutual information (norMI) and MIPE are calculated, and norMI is utilized to weight the corresponding MIPE result. Finally, based on the separability criterion, the dimensionality of the weighted MIPE results is reduced to obtain the multiscale entropy feature of the sound. The experimental results show that the classification accuracies of the method under no-noise and 5 dB conditions reach 96.7% and 89.9%, respectively. In practice, the proposed method has higher reliability and stability for sound feature extraction from running power equipment. Full article
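The multiscale entropy idea behind MIPE can be sketched with the basic multiscale permutation entropy: coarse-grain the signal at several scales and histogram the ordinal patterns. This is the standard variant, not the authors' improved one, and the signals below are synthetic stand-ins for equipment sound.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3):
    """Normalized permutation entropy from the ordinal-pattern histogram."""
    counts = {}
    for i in range(len(x) - order + 1):
        key = tuple(np.argsort(x[i:i + order]))    # ordinal pattern of the window
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

def coarse_grain(x, scale):
    """Non-overlapping window means: the standard multiscale coarse-graining step."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)             # white noise: near-maximal entropy
tone = np.sin(0.2 * np.arange(2000))      # regular signal: much lower entropy

for scale in (1, 2, 4):
    print(scale,
          round(permutation_entropy(coarse_grain(noise, scale)), 3),
          round(permutation_entropy(coarse_grain(tone, scale)), 3))
```

The entropy gap between irregular and regular signals across scales is what makes such features discriminative for condition monitoring.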
Open Access Feature Paper Article
Generic Entanglement Entropy for Quantum States with Symmetry
Entropy 2020, 22(6), 684; https://doi.org/10.3390/e22060684 - 19 Jun 2020
Viewed by 408
Abstract
When a quantum pure state is drawn uniformly at random from a Hilbert space, the state is typically highly entangled. This property of a random state is known as generic entanglement of quantum states and has long been investigated from many perspectives, ranging from black hole physics to quantum information science. In this paper, we address the question of how the symmetry of quantum states changes the properties of generic entanglement. More specifically, we study the bipartite entanglement entropy of a quantum state drawn uniformly at random from an invariant subspace of a given symmetry. We first extend the well-known concentration formula to one applicable to any subspace and then show that (1) quantum states in the subspaces associated with an axial symmetry are still highly entangled, though less so than quantum states without symmetry, (2) quantum states associated with the permutation symmetry are significantly less entangled, and (3) quantum states with translation symmetry are as entangled as generic ones. We also numerically investigate the phase-transition behavior of the distribution of generic entanglement, which indicates that the phase transition seems to persist even when random states have symmetry. Full article
(This article belongs to the Special Issue Quantum Probability, Statistics and Control)
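The baseline quantity, the bipartite entanglement entropy of a symmetry-free random state, can be computed in a few lines via the Schmidt decomposition. The subsystem sizes and the complex-Gaussian sampling below are illustrative assumptions for a single draw, not the paper's subspace constructions.

```python
import numpy as np

rng = np.random.default_rng(7)
dA, dB = 8, 8                     # two subsystems of three qubits each (assumed sizes)

# A Haar-like random pure state: complex Gaussian amplitudes, normalized.
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)

# Entanglement entropy from the Schmidt (singular) values of the dA x dB
# coefficient matrix of the state.
s = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
p = s**2                          # Schmidt probabilities, summing to 1
entropy = float(-np.sum(p * np.log2(p)))

# A generic (symmetry-free) state concentrates near a sizable fraction of
# the maximal entropy log2(dA), which is the concentration phenomenon the
# paper extends to invariant subspaces.
print(entropy, np.log2(dA))
```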
Open Access Article
Information Flow Analysis between EPU and Other Financial Time Series
Entropy 2020, 22(6), 683; https://doi.org/10.3390/e22060683 - 18 Jun 2020
Viewed by 459
Abstract
We investigate the strength and direction of information flow among economic policy uncertainty (EPU), US imports from and exports to China, and the CNY/USD exchange rate by using the novel concept of effective transfer entropy (ETE) with a sliding-window methodology. We verify that this new method can capture the dynamics effectively by validating it against the linear transfer entropy (TE) and Granger causality methods. The analysis shows that, since 2016, US economic policy has contributed substantially to China–US bilateral trade and that China is making passive adjustments based on this trade volume. Unlike trade market conditions, China’s economic policy has significantly influenced the exchange rate fluctuations since 2016, which have, in turn, affected US economic policy. Full article
(This article belongs to the Special Issue Processes with Memory in Natural and Social Sciences)
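The directional quantity underlying ETE can be illustrated with a plain transfer entropy estimator on synthetic binary series (the paper's *effective* TE additionally subtracts a shuffled-series baseline, which this sketch omits; the coupled series below are toy data, not the economic series studied).

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plain transfer entropy (bits) from x to y with history length 1:
    sum over (y_t, y_{t-1}, x_{t-1}) of
    p(triple) * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pair_hx = Counter(zip(y[:-1], x[:-1]))       # (y_{t-1}, x_{t-1}) marginal
    pair_yy = Counter(zip(y[1:], y[:-1]))        # (y_t, y_{t-1}) marginal
    single = Counter(y[:-1])                     # y_{t-1} marginal
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pair_hx[(y0, x0)]
        p_cond_hist = pair_yy[(y1, y0)] / single[y0]
        te += (c / n) * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
for t in range(1, len(x)):    # y copies x with a one-step lag plus 10% noise
    y[t] = x[t - 1] if rng.random() < 0.9 else rng.integers(0, 2)

# Information flows from x to y, but essentially none flows back.
print(transfer_entropy(x, y), transfer_entropy(y, x))
```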
Open Access Article
Research on Extraction of Compound Fault Characteristics for Rolling Bearings in Wind Turbines
Entropy 2020, 22(6), 682; https://doi.org/10.3390/e22060682 - 18 Jun 2020
Viewed by 422
Abstract
Wind turbines work in strong background noise, and multiple faults often occur whose features are mixed together and easily misjudged. To extract composite faults of rolling bearings in wind turbines, a new hybrid approach was proposed based on multi-point optimal minimum entropy deconvolution adjusted (MOMEDA) and the 1.5-dimensional Teager kurtosis spectrum. The composite fault signal is first deconvolved using the MOMEDA method, and the deconvolved signal is then analyzed by applying the 1.5-dimensional Teager kurtosis spectrum. Finally, the frequency characteristics of the bearing fault are extracted. A bearing composite fault signal with strong background noise was utilized to prove the validity of the method, and two actual cases of bearing fault detection in wind turbines were analyzed. The results show that the method is suitable for the diagnosis of wind turbine compound faults and can be applied to research on the health behavior of wind turbines. Full article
Open Access Review
Evolutionary Processes in Quantum Decision Theory
Entropy 2020, 22(6), 681; https://doi.org/10.3390/e22060681 - 18 Jun 2020
Viewed by 409
Abstract
The review presents the basics of quantum decision theory, with an emphasis on temporal processes in decision making. The aim is to explain the principal points of the theory. How an operationally testable, rational choice between alternatives differs from a choice decorated by irrational feelings is elucidated. Quantum–classical correspondence is emphasized. A model of a quantum intelligence network is described. Dynamic inconsistencies are shown to be resolved within the framework of quantum decision theory. Full article
(This article belongs to the Special Issue Quantum Models of Cognition and Decision-Making)
Open Access Feature Paper Article
The Role of Gravity in the Evolution of the Concentration Field in the Electrochemical Membrane Cell
Entropy 2020, 22(6), 680; https://doi.org/10.3390/e22060680 - 18 Jun 2020
Viewed by 391
Abstract
The subject of the study was the osmotic volume transport of aqueous CuSO4 and/or ethanol solutions through a selective cellulose acetate membrane (Nephrophan). The effects of the concentration of the solution components, the concentration polarization of the solutions, and the configuration of the membrane system on the value of the volume osmotic flux (J_v^ir) were examined in a single-membrane system in which the polymer membrane was located in the horizontal plane. The investigations were carried out under mechanical stirring of the solutions and after stirring was turned off. Based on the measured J_v^ir, the effects of concentration polarization, convection polarization, asymmetry and amplification of the volume osmotic flux, as well as the thickness of the concentration boundary layers, were calculated. Osmotic entropy production was also calculated for the conditions of solution homogeneity and of concentration polarization. Using the thickness of the concentration boundary layers, critical values of the concentration Rayleigh number (R_Cr) were estimated; these act as a switch between two states: convective (with higher J_v^ir) and non-convective (with lower J_v^ir). The operation of this switch indicates the regulatory role of terrestrial gravity in membrane transport. Full article