
Entropy, Volume 18, Issue 3 (March 2016) – 38 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
System Entropy Measurement of Stochastic Partial Differential Systems
Entropy 2016, 18(3), 99; https://doi.org/10.3390/e18030099 - 18 Mar 2016
Cited by 3 | Viewed by 2343
Abstract
System entropy describes the dispersal of a system’s energy and is an indication of the disorder of a physical system. Several system entropy measurement methods have been developed for dynamic systems. However, most real physical systems are modeled using stochastic partial differential dynamic equations in the spatio-temporal domain. No efficient method currently exists that can calculate the system entropy of stochastic partial differential systems (SPDSs) in consideration of the effects of intrinsic random fluctuation and compartment diffusion. In this study, a novel indirect measurement method is proposed for calculating the system entropy of SPDSs using a Hamilton–Jacobi integral inequality (HJII)-constrained optimization method. In other words, we solve a nonlinear HJII-constrained optimization problem to measure the system entropy of nonlinear stochastic partial differential systems (NSPDSs). To simplify the system entropy measurement of NSPDSs, the global linearization technique and a finite difference scheme were employed to approximate the nonlinear stochastic spatial state space system. This allows the nonlinear HJII-constrained optimization problem for the system entropy measurement to be transformed into an equivalent linear matrix inequality (LMI)-constrained optimization problem, which can be easily solved using the MATLAB LMI Toolbox (MATLAB R2014a, version 8.3). Finally, several examples are presented to illustrate the system entropy measurement of SPDSs.

Article
Assessment of Nociceptive Responsiveness Levels during Sedation-Analgesia by Entropy Analysis of EEG
Entropy 2016, 18(3), 103; https://doi.org/10.3390/e18030103 - 18 Mar 2016
Cited by 6 | Viewed by 2374
Abstract
The level of sedation in patients undergoing medical procedures is chosen to ensure unconsciousness and prevent pain. Monitors of depth of anesthesia, based on analysis of the electroencephalogram (EEG), have been progressively introduced into daily practice to provide additional information about the state of the patient. However, the quantification of analgesia still remains an open problem. The purpose of this work was to analyze the capability of refined multiscale entropy (RMSE) and the auto mutual information function (AMIF), applied to EEG signals recorded in 378 patients scheduled to undergo ultrasonographic endoscopy under sedation-analgesia, to predict nociceptive responses. Two observed categorical responses after the application of painful stimulation were analyzed: the evaluation of the Ramsay Sedation Scale (RSS) after nail bed compression and the presence of the gag reflex (GAG) during endoscopy tube insertion. In addition, the bispectrum (BIS), heart rate (HR), and predicted concentrations of propofol (CeProp) and remifentanil (CeRemi) were annotated with a resolution of 1 s. Results showed that functions based on RMSE, AMIF, HR and CeRemi predicted the different stimulation responses during sedation better than BIS did.

Article
A Complexity-Based Approach for the Detection of Weak Signals in Ocean Ambient Noise
Entropy 2016, 18(3), 101; https://doi.org/10.3390/e18030101 - 18 Mar 2016
Cited by 28 | Viewed by 3076
Abstract
Numerous studies show a constant increase in ocean ambient noise levels and an ever-growing demand for algorithms that detect weak signals in ambient noise. In this study, we utilize dynamical and statistical complexity to detect the presence of weak ship noise embedded in ambient noise. The ambient noise and ship noise were recorded in the South China Sea. The multiscale entropy (MSE) method and the complexity-entropy causality plane (C-H plane) were used to quantify the dynamical and statistical complexity of the measured time series, respectively. We generated signals with varying signal-to-noise ratio (SNR) by varying the amplification of a ship signal. The simulation results indicate that complexity is sensitive to changes in the information content of the ambient noise and to changes in SNR, a finding that enables the detection of weak ship signals in strong background ambient noise. The simulation results also illustrate that complexity outperforms the traditional spectrogram method and is particularly effective for detecting low-SNR signals in ambient noise. In addition, the complexity-based MSE and C-H plane methods are simple, robust and do not assume any underlying dynamics in the time series. Hence, complexity should be used in practical situations.
(This article belongs to the Special Issue Computational Complexity)
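The coarse-graining step at the heart of multiscale entropy can be sketched in a few lines. The following is an illustrative pure-Python sketch, not the authors' implementation; the white-noise test signal and the parameters m = 2, r = 0.2 are conventional choices invented for the example, not taken from the paper:

```python
import math
import random

def coarse_grain(signal, scale):
    """MSE coarse-graining: average consecutive non-overlapping windows."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy -ln(A/B): B counts pairs of m-point templates within
    Chebyshev distance r, A counts pairs of (m+1)-point templates."""
    def matches(length):
        t = [signal[i:i + length] for i in range(len(signal) - length + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(0)
white_noise = [random.gauss(0.0, 1.0) for _ in range(200)]
for scale in (1, 2, 4):
    print(scale, round(sample_entropy(coarse_grain(white_noise, scale)), 3))
```

Plotting sample entropy against the scale factor gives the MSE curve; how that curve changes when a weak ship signal is mixed into the noise is the kind of sensitivity the abstract describes.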

Article
Development of a Refractory High Entropy Superalloy
Entropy 2016, 18(3), 102; https://doi.org/10.3390/e18030102 - 17 Mar 2016
Cited by 91 | Viewed by 6171
Abstract
The microstructure, phase composition and mechanical properties of a refractory high entropy superalloy, AlMo0.5NbTa0.5TiZr, are reported in this work. The alloy consists of a nano-scale mixture of two phases produced by the decomposition of a high-temperature body-centered cubic (BCC) phase. The first phase is present in the form of cuboidal-shaped nano-precipitates aligned in rows along <100>-type directions, has a disordered BCC crystal structure with the lattice parameter a1 = 326.9 ± 0.5 pm and is rich in Mo, Nb and Ta. The second phase is present in the form of channels between the cuboidal nano-precipitates, has an ordered B2 crystal structure with the lattice parameter a2 = 330.4 ± 0.5 pm and is rich in Al, Ti and Zr. Both phases are coherent and have the same crystallographic orientation within the former grains. The formation of this modulated nano-phase structure is discussed in the framework of nucleation-and-growth and spinodal decomposition mechanisms. The yield strength of this refractory high entropy superalloy is superior to that of Ni-based superalloys in the temperature range of 20 °C to 1200 °C.
(This article belongs to the Special Issue High-Entropy Alloys and High-Entropy-Related Materials)

Article
Riemannian Laplace Distribution on the Space of Symmetric Positive Definite Matrices
Entropy 2016, 18(3), 98; https://doi.org/10.3390/e18030098 - 16 Mar 2016
Cited by 5 | Viewed by 2214
Abstract
The Riemannian geometry of the space Pm of m × m symmetric positive definite matrices has provided effective tools to the fields of medical imaging, computer vision and radar signal processing. Still, an open challenge remains: extending these tools to correctly handle the presence of outliers (or abnormal data) arising from excessive noise or faulty measurements. The present paper tackles this challenge by introducing new probability distributions, called Riemannian Laplace distributions, on the space Pm. First, it shows that these distributions provide a statistical foundation for the concept of the Riemannian median, which offers improved robustness to outliers in comparison to the more popular concept of the Riemannian center of mass. Second, it describes an original expectation-maximization algorithm for estimating mixtures of Riemannian Laplace distributions. This algorithm is applied to the problem of texture classification in computer vision, considered in the presence of outliers, and is shown to give significantly better performance than other recently proposed approaches.
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)

Article
Constrained Inference When the Sampled and Target Populations Differ
Entropy 2016, 18(3), 97; https://doi.org/10.3390/e18030097 - 16 Mar 2016
Viewed by 1428
Abstract
In the analysis of contingency tables, one often faces two difficulties: the sampled and target populations are not identical, and prior information translates into general linear inequality restrictions. For these situations, we present new models for estimating cell probabilities related to four well-known methods of estimation. We prove that each model yields maximum likelihood estimators under those restrictions. The performance ranking of these methods under equality restrictions is known. We compare these methods under inequality restrictions in a simulation study, which reveals that the methods may rank differently under inequality restrictions than under equality restrictions. The four methods are also compared in an analysis of US census data.
(This article belongs to the Section Information Theory, Probability and Statistics)

Article
Preference Inconsistence-Based Entropy
Entropy 2016, 18(3), 96; https://doi.org/10.3390/e18030096 - 15 Mar 2016
Cited by 1 | Viewed by 1881
Abstract
Preference analysis is a class of important issues in ordinal decision making. As available information is usually obtained from different evaluation criteria or experts, the derived preference decisions may be inconsistent and uncertain. Shannon entropy is a suitable measurement of uncertainty. This work proposes the concepts of the preference inconsistence set and the preference inconsistence degree. Preference inconsistence entropy is then introduced by combining the preference inconsistence degree with Shannon entropy. A number of properties and theorems, as well as two applications, are discussed: feature selection is used for attribute reduction, while sample condensation aims to obtain a consistent preference system. A forward feature selection algorithm, a backward feature selection algorithm and a sample condensation algorithm are developed. The experimental results show that the proposed model represents an effective solution for preference analysis.
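Since the construction rests on Shannon entropy as the uncertainty measure, a minimal sketch may help; the normalized inconsistence degrees below are invented for illustration and are not taken from the paper:

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum_i p_i * log2(p_i), with 0 * log 0 taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical normalized preference inconsistence degrees for four samples:
dist = [0.0, 0.25, 0.5, 0.25]
print(shannon_entropy(dist))  # 1.5 bits
```

The entropy is largest when inconsistence is spread evenly across samples and zero when it is concentrated on one sample, which is what makes it a natural uncertainty measure here.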

Article
A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds
Entropy 2016, 18(3), 95; https://doi.org/10.3390/e18030095 - 15 Mar 2016
Cited by 5 | Viewed by 1892
Abstract
Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements of multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, we first formulate this revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard. We then propose a cross-entropy-based optimization approach that obtains a near-optimal set of requests, from those waiting in the system, for the provider to accept into its data centers. Finally, through extensive experiments and measurements in a simulated environment, with VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing a cloud provider’s revenue in a public cloud computing environment.
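The cross-entropy method the paper builds on can be illustrated on a toy single-constraint knapsack (the paper's problem is multi-dimensional; the values, weights and CE hyper-parameters below are invented for the sketch):

```python
import random

def ce_knapsack(values, weights, capacity, n_samples=200, elite_frac=0.1,
                smoothing=0.7, iters=30, seed=1):
    """Cross-entropy method sketch for a 0/1 knapsack: sample selection
    vectors from independent Bernoulli parameters, rank them by value
    (infeasible samples score 0), and move the parameters toward the
    mean of the elite samples."""
    rng = random.Random(seed)
    n = len(values)
    p = [0.5] * n                      # Bernoulli parameter per item
    best, best_val = None, -1
    for _ in range(iters):
        scored = []
        for _ in range(n_samples):
            x = [1 if rng.random() < p[i] else 0 for i in range(n)]
            w = sum(wi * xi for wi, xi in zip(weights, x))
            v = sum(vi * xi for vi, xi in zip(values, x)) if w <= capacity else 0
            scored.append((v, x))
        scored.sort(key=lambda s: s[0], reverse=True)
        elite = scored[:max(1, int(elite_frac * n_samples))]
        if elite[0][0] > best_val:
            best_val, best = elite[0]
        for i in range(n):             # analytic CE update for Bernoulli sampling
            mean_i = sum(x[i] for _, x in elite) / len(elite)
            p[i] = smoothing * mean_i + (1 - smoothing) * p[i]
    return best, best_val

vals, wts = [10, 7, 4, 9, 3], [5, 4, 2, 6, 1]
best, val = ce_knapsack(vals, wts, capacity=10)
print(best, val)
```

Each iteration tilts the sampling distribution toward the elite samples, which is the cross-entropy minimization step; a multi-dimensional variant only changes the feasibility check.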

Review
Dark Energy: The Shadowy Reflection of Dark Matter?
Entropy 2016, 18(3), 94; https://doi.org/10.3390/e18030094 - 12 Mar 2016
Cited by 16 | Viewed by 3178
Abstract
In this article, we review a series of recent theoretical results regarding a conventional approach to the dark energy (DE) concept. This approach is distinguished among others by its simplicity and its physical relevance. By reconciling General Relativity (GR) and Thermodynamics at cosmological scale, we end up with a model without DE. Instead, the Universe we are proposing is filled with a perfect fluid of self-interacting dark matter (DM), the volume elements of which perform hydrodynamic flows. To the best of our knowledge, this is the first time in a cosmological framework that the energy of the cosmic fluid’s internal motions is taken into account as a source of the universal gravitational field. As we demonstrate, this form of energy may compensate for the DE needed to account for spatial flatness, while, depending on the particular type of thermodynamic process occurring in the interior of the DM fluid (isothermal or polytropic), the Universe appears to be either decelerating or accelerating, respectively. In both cases, there is no disagreement between observations and the theoretical prediction of the distribution of distant Type Ia supernovae (SNe). In fact, the cosmological model with matter content in the form of a thermodynamically-involved DM fluid not only interprets the observational data associated with the recent history of Universe expansion, but also confronts successfully every major cosmological issue (such as the age and the coincidence problems). In this way, depending on the type of thermodynamic processes in it, such a model may serve either as a conventional DE cosmology or as a viable alternative.
(This article belongs to the Special Issue Selected Papers from 13th Joint European Thermodynamics Conference)

Article
Selected Remarks about Computer Processing in Terms of Flow Control and Statistical Mechanics
Entropy 2016, 18(3), 93; https://doi.org/10.3390/e18030093 - 12 Mar 2016
Cited by 3 | Viewed by 2438
Abstract
Despite the fact that much has been said about processing in computer science, it seems that there is still much to do. A classical approach assumes that the computations done by computers are a kind of mathematical operation (the calculation of function values) and have no special relation to energy transformation and flow. However, it is possible to take a new view on selected topics, and the sorting problem is presented as a special case. We know many different sorting algorithms, including those with complexity O(n lg(n)), which means that the problem is algorithmically closed; but it is also possible to examine sorting in terms of flow control, entropy and statistical mechanics. This is done in relation to the existing definitions of sorting, the connections between sorting and ordering, and some important aspects of computer processing understood as a flow that are not taken into account in many theoretical considerations in computer science. The proposed new view is an attempt to change the paradigm in the description of algorithms’ performance in terms of computational complexity and processing, taking into account the existing references between the idea of Turing machines and their physical implementations. This proposal can be expressed as a physics of computer processing: a reference point for further analysis of algorithmic and interactive processing in computer systems.
(This article belongs to the Special Issue Computational Complexity)

Article
Long-Range Electron Transport Donor-Acceptor in Nonlinear Lattices
Entropy 2016, 18(3), 92; https://doi.org/10.3390/e18030092 - 11 Mar 2016
Cited by 4 | Viewed by 2234
Abstract
We study here several simple models of electron transfer (ET) in a one-dimensional nonlinear lattice between a donor and an acceptor and propose a new fast mechanism of electron surfing on soliton-like excitations along the lattice. The nonlinear lattice is modeled as a classical one-dimensional Morse chain, and the dynamics of the electrons are considered in the tight-binding approximation. This model is applied to the processes along a covalent bridge connecting donors and acceptors. First, it is shown that the electron forms bound states with the solitonic excitations in the lattice. These so-called solectrons may move with supersonic speed. In a heated system, the electron transfer between a donor and an acceptor is modeled as a diffusion-like process. We study in detail the role of thermal factors in the electron transfer. Then, we develop a simple model based on the classical Smoluchowski–Chandrasekhar picture of diffusion-controlled reactions as stochastic processes with emitters and absorbers. Acceptors are modeled by an absorbing boundary. Finally, we compare the new ET mechanisms described here with known ET data. We conclude that electron surfing on solitons could be a fast mechanism for ET over quite long distances.
(This article belongs to the Special Issue Non-Linear Lattice)

Article
Analysis of Entropy Generation in the Flow of Peristaltic Nanofluids in Channels with Compliant Walls
Entropy 2016, 18(3), 90; https://doi.org/10.3390/e18030090 - 11 Mar 2016
Cited by 63 | Viewed by 2666
Abstract
Entropy generation during peristaltic flow of nanofluids in a non-uniform two-dimensional channel with compliant walls has been studied. The mathematical model of the governing flow problem is obtained under the approximations of long wavelength and zero Reynolds number (creeping flow regime). The resulting non-linear partial differential equations are solved with the help of a perturbation method. The analytic and numerical results for different parameters are demonstrated mathematically and graphically. The present analysis provides a theoretical model for estimating the characteristics of several Newtonian and non-Newtonian fluid flows, such as the peristaltic transport of blood.
(This article belongs to the Special Issue Entropy in Nanofluids)

Article
A Novel Weak Fuzzy Solution for Fuzzy Linear System
Entropy 2016, 18(3), 68; https://doi.org/10.3390/e18030068 - 11 Mar 2016
Cited by 7 | Viewed by 2041
Abstract
This article proposes a novel weak fuzzy solution for the fuzzy linear system. Specifically, we define the right-hand side column of the fuzzy linear system as a piecewise fuzzy function to overcome a shortcoming of previous findings. The strong point of this proposal is that the weak fuzzy solution is always a fuzzy number vector. Two linear systems under uncertainty, one complex and one non-complex, are tested to validate the effectiveness and correctness of the presented method.
(This article belongs to the Special Issue Complex and Fractional Dynamics)

Article
Two Universality Properties Associated with the Monkey Model of Zipf’s Law
Entropy 2016, 18(3), 89; https://doi.org/10.3390/e18030089 - 09 Mar 2016
Cited by 3 | Viewed by 2245
Abstract
The distribution of word probabilities in the monkey model of Zipf’s law is associated with two universality properties: (1) the exponent in the approximate power law approaches −1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval, for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem of Shao and Hahn for the logarithm of sample spacings constructed on [0,1], and the second property follows from Anscombe’s central limit theorem for a random number of independent and identically distributed (i.i.d.) random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
(This article belongs to the Section Information Theory, Probability and Statistics)
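The monkey model itself is easy to simulate. In the sketch below, word probabilities are enumerated directly for a small four-letter alphabet; the letter probabilities and the space probability are invented for illustration. With so few letters the rank-probability slope comes out somewhat steeper than −1, consistent with property (1) holding only in the large-alphabet limit:

```python
import math
from itertools import product

# Monkey model: keystrokes are i.i.d.; a space (probability q) ends the word,
# otherwise a letter is typed with probability (1 - q) * letters[ch].
q = 0.2
letters = {"a": 0.3, "b": 0.25, "c": 0.25, "d": 0.2}

probs = []
for length in range(1, 8):
    for word in product(letters, repeat=length):
        p = q  # the terminating space
        for ch in word:
            p *= (1 - q) * letters[ch]
        probs.append(p)
probs.sort(reverse=True)

# Two-point estimate of the power-law exponent away from the head of the curve.
lo, hi = 10, 1000
slope = (math.log(probs[hi]) - math.log(probs[lo])) / (math.log(hi + 1) - math.log(lo + 1))
print(round(slope, 2))  # about -1.12 here; it tends to -1 as the alphabet grows
```

The step structure of the curve (all words of one length have comparable probability) is also what produces the lognormal body of the finite-cutoff model in property (2).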

Article
Maximizing Diversity in Biology and Beyond
Entropy 2016, 18(3), 88; https://doi.org/10.3390/e18030088 - 09 Mar 2016
Cited by 10 | Viewed by 3541
Abstract
Entropy, under a variety of names, has long been used as a measure of diversity in ecology, as well as in genetics, economics and other fields. There is a spectrum of viewpoints on diversity, indexed by a real parameter q giving greater or lesser importance to rare species. Leinster and Cobbold (2012) proposed a one-parameter family of diversity measures taking into account both this variation and the varying similarities between species. Because of this latter feature, diversity is not maximized by the uniform distribution on species. So it is natural to ask: which distributions maximize diversity, and what is its maximum value? In principle, both answers depend on q, but our main theorem is that neither does. Thus, there is a single distribution that maximizes diversity from all viewpoints simultaneously, and any list of species has an unambiguous maximum diversity value. Furthermore, the maximizing distribution(s) can be computed in finite time, and any distribution maximizing diversity from some particular viewpoint q > 0 actually maximizes diversity for all q. Although we phrase our results in ecological terms, they apply very widely, with applications in graph theory and metric geometry.
(This article belongs to the Special Issue Information and Entropy in Biological Systems)
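The Leinster–Cobbold measure referred to above has, for q != 1, the closed form D = (sum_i p_i (Zp)_i^(q-1))^(1/(1-q)), where Z is the species-similarity matrix. A small sketch (similarity values invented for illustration) shows why the uniform distribution need not maximize diversity:

```python
def diversity(p, Z, q):
    """Leinster-Cobbold diversity of order q (q != 1):
    D = (sum_i p_i * (Zp)_i**(q - 1)) ** (1 / (1 - q))."""
    zp = [sum(Z[i][j] * p[j] for j in range(len(p))) for i in range(len(p))]
    s = sum(pi * zpi ** (q - 1) for pi, zpi in zip(p, zp) if pi > 0)
    return s ** (1 / (1 - q))

# Identity similarity: species are completely dissimilar, and order-2
# diversity of the uniform distribution over n species is n (inverse Simpson).
n = 4
identity = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
print(diversity([1 / n] * n, identity, q=2))  # 4.0

# Two nearly identical species (similarity 0.95) plus one distinct species:
# down-weighting the similar pair beats the uniform distribution.
Z = [[1.0, 0.95, 0.0],
     [0.95, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(diversity([1 / 3, 1 / 3, 1 / 3], Z, q=2))  # about 1.84
print(diversity([0.25, 0.25, 0.5], Z, q=2))      # about 2.03, higher
```

With the identity matrix the family reduces to the classical Hill numbers; with nontrivial similarities, shifting weight toward distinct species raises diversity, which is exactly the situation the paper's maximum-diversity theorem addresses.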

Article
Selecting Video Key Frames Based on Relative Entropy and the Extreme Studentized Deviate Test
Entropy 2016, 18(3), 73; https://doi.org/10.3390/e18030073 - 09 Mar 2016
Cited by 9 | Viewed by 2749
Abstract
This paper studies the relative entropy and its square root as distance measures between neighboring video frames for video key frame extraction. We develop a novel approach, handling both common and wavelet video sequences, in which the extreme Studentized deviate test is exploited to identify shot boundaries for segmenting a video sequence into shots. Video shots are then divided into sub-shots, according to whether the video content change is large or not, and key frames are extracted from the sub-shots. The proposed technique is general, effective and efficient in dealing with video sequences of any kind. Our new approach can also offer optional multiscale summarizations of video data, achieving a balance between retaining more detail and maintaining less redundancy. Extensive experimental results show that the new scheme obtains very encouraging results in video key frame extraction, in terms of both objective evaluation metrics and subjective visual perception.
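A minimal sketch of the underlying frame distance, with hypothetical 4-bin gray-level histograms standing in for real frames; the paper's exact construction may differ, and the symmetrization and smoothing below are common conveniences rather than details taken from the paper:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(P||Q) in bits; assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def frame_distance(hist_a, hist_b, eps=1e-6):
    """Square root of the symmetrized KL divergence between two smoothed,
    normalized histograms, used here as a frame-to-frame distance."""
    p = [(h + eps) / (sum(hist_a) + eps * len(hist_a)) for h in hist_a]
    q = [(h + eps) / (sum(hist_b) + eps * len(hist_b)) for h in hist_b]
    return math.sqrt(kl_divergence(p, q) + kl_divergence(q, p))

# Hypothetical 4-bin gray-level histograms for three frames:
f1 = [10, 20, 40, 30]
f2 = [11, 19, 41, 29]   # nearly the same frame: small distance
f3 = [40, 30, 10, 20]   # shot change: large distance
print(frame_distance(f1, f2))
print(frame_distance(f1, f3))
```

A shot boundary then shows up as an outlier in the sequence of frame-to-frame distances, which is the kind of anomaly the extreme Studentized deviate test is designed to flag.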

Article
Entropy Production in the Theory of Heat Conduction in Solids
Entropy 2016, 18(3), 87; https://doi.org/10.3390/e18030087 - 08 Mar 2016
Cited by 10 | Viewed by 2277
Abstract
The evolution of the entropy production in solids due to heat transfer is usually associated with Prigogine’s minimum entropy production principle. In this paper, we propose a critical review of the results of Prigogine and some comments on the succeeding literature. We suggest a characterization of the evolution of the entropy production of the system through the generalized Fourier modes, showing that they are the only states with a time-independent entropy production. The variational approach and a Lyapunov functional of the temperature, monotonically decreasing with time, are discussed. We describe the analytic properties of the entropy production as a function of time in terms of the generalized Fourier coefficients of the system. Analytical tools are used throughout the paper, and numerical examples support the statements.

Article
Exergy and Thermoeconomic Analysis for an Underground Train Station Air-Conditioning Cooling System
Entropy 2016, 18(3), 86; https://doi.org/10.3390/e18030086 - 07 Mar 2016
Cited by 1 | Viewed by 3443
Abstract
The necessity of air-conditioning accounts for the enormous energy use of underground train stations. Exergy and thermoeconomic analysis is applied to the annual operation of the air-conditioning system of a large underground train station in Taiwan. The current installation and the monitored data are taken as the base case, which is then compared to three different optimized designs. The total revenue requirement levelized cost rate and the total exergy destruction rate are used to evaluate the merits of each design. The results show that cost optimization obtains a lower total revenue requirement levelized cost rate, but at the expense of a higher total exergy destruction rate. Optimization of thermodynamic efficiency, in contrast, leads to a lower total exergy destruction rate, but increases the total revenue requirement levelized cost rate significantly. Multi-objective optimization results in a small marginal increase in the total revenue requirement levelized cost rate, but achieves a significantly lower total exergy destruction rate. Results in terms of the normalized total revenue requirement levelized cost rate and the normalized total exergy destruction rate are also presented. Second-law analysis, when applied to underground train stations, shows that lower annual energy use and lower CO2 emissions can be achieved.
(This article belongs to the Special Issue Entropy and the Economy)

Article
iDoRNA: An Interacting Domain-based Tool for Designing RNA-RNA Interaction Systems
Entropy 2016, 18(3), 83; https://doi.org/10.3390/e18030083 - 07 Mar 2016
Viewed by 2285
Abstract
RNA-RNA interactions play a crucial role in gene regulation in living organisms. They have gained increasing interest in the field of synthetic biology because of their potential applications in medicine and biotechnology. However, few novel regulators based on RNA-RNA interactions with desired structures and functions have been developed, owing to the challenges of developing design tools. Recently, we proposed a tool, called iDoDe, for designing RNA-RNA interacting sequences by first decomposing RNA structures into interacting domains and then designing each domain using a stochastic algorithm. However, iDoDe did not provide an optimal solution because it lacked a mechanism to optimize the design. In this work, we have further developed the tool by incorporating a genetic algorithm (GA) to find an RNA solution with maximized structural similarity and minimized hybridized RNA energy, and renamed the tool iDoRNA. A set of suitable parameters for the genetic algorithm was determined: a weighting factor of 0.7, a crossover rate of 0.9, a mutation rate of 0.1, and eight individuals per population. We demonstrated the performance of iDoRNA in comparison with iDoDe using six RNA-RNA interaction models. iDoRNA generated all models of interacting RNAs with far greater accuracy and far less computation time than iDoDe. Moreover, we compared the design performance of our tool against existing design tools using forty-four RNA-RNA interaction models. The results showed that iDoRNA outperforms RiboMaker in terms of the ensemble defect, the fitness score, and computation time, although it is outperformed by NUPACK and RNAiFold 2.0 in terms of the ensemble defect. Nevertheless, iDoRNA can still be a useful alternative tool for designing novel RNA-RNA interactions in synthetic biology research. 
The source code of iDoRNA can be downloaded from the site http://synbio.sbi.kmutt.ac.th. Full article
(This article belongs to the Special Issue Entropy and RNA Structure, Folding and Mechanics)
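The reported GA settings — a crossover rate of 0.9, a mutation rate of 0.1, and eight individuals per population — can be illustrated with a minimal sketch. The fitness function below (GC content of a random RNA string) is a hypothetical stand-in, not iDoRNA's actual objective, which combines structural similarity and hybridization energy:

```python
import random

POP_SIZE = 8          # individuals per population, as reported for iDoRNA
CROSSOVER_RATE = 0.9  # crossover rate, as reported
MUTATION_RATE = 0.1   # per-base mutation rate, as reported
GENOME_LEN = 20
BASES = "ACGU"

def fitness(seq):
    # Hypothetical stand-in objective: reward G/C content.
    return sum(1 for b in seq if b in "GC") / len(seq)

def crossover(a, b):
    # Single-point crossover between two sequences.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(seq):
    # Replace each base with a random one with probability MUTATION_RATE.
    return "".join(random.choice(BASES) if random.random() < MUTATION_RATE else b
                   for b in seq)

def evolve(generations=50, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(BASES) for _ in range(GENOME_LEN))
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:2]                       # keep the two best unchanged
        children = []
        while len(children) < POP_SIZE - len(elite):
            a, b = random.sample(pop[:4], 2)  # select parents among the best
            child = crossover(a, b) if random.random() < CROSSOVER_RATE else a
            children.append(mutate(child))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The elitism and parent-selection details here are illustrative choices; the abstract does not specify iDoRNA's selection scheme.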
Article
Assessing the Robustness of Thermoeconomic Diagnosis of Fouled Evaporators: Sensitivity Analysis of the Exergetic Performance of Direct Expansion Coils
Entropy 2016, 18(3), 85; https://doi.org/10.3390/e18030085 - 05 Mar 2016
Cited by 10 | Viewed by 2911
Abstract
Thermoeconomic diagnosis of refrigeration systems is a pioneering approach to the diagnosis of malfunctions, which has recently been proven to achieve good performance in the detection of specific faults. Being an exergy-based diagnostic technique, its performance is influenced by the trends of the exergy functions in the “design” and “abnormal” conditions. In this paper, the sensitivity of the performance of thermoeconomic diagnosis in detecting a fouled direct expansion coil, and in quantifying the additional consumption it induces, is investigated. This fault is critical because of the simultaneous air cooling and dehumidification occurring in the coil, which induce variations in both the chemical and thermal fractions of the air exergy. The examined parameters are the temperature and humidity of the inlet air, the humidity of the reference state, and the sensible/latent heat ratio (varied by considering different coil depths). The exergy analysis reveals that, due to the more intense dehumidification occurring in the presence of fouling, the exergy efficiency of the evaporator coil eventually increases. When the diagnostic technique is based only on the thermal fraction of the air exergy, the results suggest that its performance increases when the inlet air has a lower absolute humidity, as evident from the “optimal performance” regions identified on a psychrometric chart. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Article
Entropy and Fractal Antennas
Entropy 2016, 18(3), 84; https://doi.org/10.3390/e18030084 - 04 Mar 2016
Cited by 124 | Viewed by 5883
Abstract
The entropies of Shannon, Rényi and Kolmogorov are analyzed and compared together with their main properties. The entropy of some particular antennas with a pre-fractal shape, also called fractal antennas, is studied. In particular, their entropy is linked with the fractal geometrical shape and the physical performance. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory I)
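For readers unfamiliar with how the compared entropies relate, a minimal numerical sketch (independent of the paper's antenna analysis) shows the Shannon entropy as the α → 1 limit of the Rényi entropy:

```python
import math

def shannon(p):
    """Shannon entropy H = -sum p_i log2 p_i (zero-probability terms skipped)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    """Renyi entropy H_a = log2(sum p_i^a) / (1 - a), defined for a != 1."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
h_shannon = shannon(p)               # 1.75 bits for this distribution
h_renyi_near_1 = renyi(p, 1.000001)  # approaches the Shannon value as a -> 1
h_renyi_2 = renyi(p, 2.0)            # collision entropy, never above Shannon
```

The α = 2 (collision) entropy is bounded above by the Shannon entropy, a standard monotonicity property of the Rényi family.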
Article
Hierarchical Decomposition Thermodynamic Approach for the Study of Solar Absorption Refrigerator Performance
Entropy 2016, 18(3), 82; https://doi.org/10.3390/e18030082 - 04 Mar 2016
Cited by 1 | Viewed by 2382
Abstract
A thermodynamic approach based on hierarchical decomposition, which is usually used in mechanical structure engineering, is proposed. The methodology is applied to an absorption refrigeration cycle, and a thermodynamic analysis of the performance of solar absorption refrigerators is presented. Under the hypothesis of an endoreversible model, the effects of the generator, solar concentrator and solar converter temperatures on the coefficient of performance (COP) are presented and discussed. In particular, the variation of the COP with the ratio of the heat-transfer area of the high-temperature part (thermal engine 2), Ah, to that of the low-temperature part (the thermal receptor), Ar, is studied in this paper. For low heat-transfer areas of the high-temperature part relative to the low-temperature part, for example Ah equal to 30% of Ar, the COP is relatively high (approximately 65%). For an equal-area distribution, corresponding to an area ratio Ah/Ar of 50%, the COP is approximately 35%. The originality of this approach is that it allows a conceptual study of the solar absorption cycle. Full article
(This article belongs to the Special Issue Entropy Generation in Thermal Systems and Processes 2015)
Article
Phase Transitions in Equilibrium and Non-Equilibrium Models on Some Topologies
Entropy 2016, 18(3), 81; https://doi.org/10.3390/e18030081 - 03 Mar 2016
Cited by 1 | Viewed by 2156
Abstract
On some regular and non-regular topologies, we studied the critical properties of models that present up-down symmetry, like the equilibrium Ising model and the nonequilibrium majority vote model. These are investigated on networks such as Apollonian (AN), Barabási–Albert (BA), small-world (SW), Voronoi–Delaunay (VD) and Erdős–Rényi (ER) random graphs. The review here focuses on phase transitions, critical points, exponents and universality classes, which are compared to the results obtained for these models on regular square lattices (SL). Full article
Article
Minimal Length, Measurability and Gravity
Entropy 2016, 18(3), 80; https://doi.org/10.3390/e18030080 - 02 Mar 2016
Cited by 8 | Viewed by 2070
Abstract
The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
Article
Entanglement Entropy in a Triangular Billiard
Entropy 2016, 18(3), 79; https://doi.org/10.3390/e18030079 - 01 Mar 2016
Cited by 3 | Viewed by 2255
Abstract
The Schrödinger equation for a quantum particle in a two-dimensional triangular billiard can be written as the Helmholtz equation with a Dirichlet boundary condition. We numerically explore the quantum entanglement of the eigenfunctions of the triangular billiard and its relation to the irrationality of the triangular geometry. We also study the entanglement dynamics of a coherent state with its center chosen at the centroid of different triangle configurations. Using the von Neumann entropy of entanglement, we quantify the quantum entanglement appearing in the eigenfunctions of the triangular domain. We see a clear correspondence between the irrationality of the triangle and the average entanglement of the eigenfunctions. The entanglement dynamics of the coherent state show a dependence on the geometry of the triangle. The effect of quantum squeezing on the coherent state is analyzed, and it can be utilized to enhance or reduce the entanglement entropy in a triangular billiard. Full article
(This article belongs to the Special Issue Entanglement Entropy)
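The von Neumann entropy used above to quantify entanglement can be illustrated on a toy bipartite system. This sketch uses a two-qubit Bell state rather than the billiard eigenfunctions themselves, and computes the entropy from the Schmidt coefficients of the state:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S = -Tr(rho_A log2 rho_A) of subsystem A.
    Reshaping the state vector into a dim_a x dim_b matrix, the squared
    singular values are the eigenvalues of the reduced density matrix."""
    m = psi.reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(m, compute_uv=False) ** 2
    schmidt = schmidt[schmidt > 1e-12]  # drop numerically zero coefficients
    return float(-np.sum(schmidt * np.log2(schmidt)))

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): entropy = 1 bit.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
s_bell = entanglement_entropy(bell, 2, 2)

# Product state |00>: no entanglement, entropy = 0.
product = np.array([1.0, 0.0, 0.0, 0.0])
s_product = entanglement_entropy(product, 2, 2)
```

For billiard eigenfunctions, the same routine applies after discretizing the wavefunction on a grid and treating the two coordinates as the bipartition.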
Article
Wavelet Entropy-Based Traction Inverter Open Switch Fault Diagnosis in High-Speed Railways
Entropy 2016, 18(3), 78; https://doi.org/10.3390/e18030078 - 01 Mar 2016
Cited by 15 | Viewed by 2891
Abstract
In this paper, a diagnosis plan is proposed to solve the detection and isolation problem of open switch faults in the traction inverters of high-speed railway traction systems. Five entropy forms are discussed and compared with the traditional fault detection methods, namely, the discrete wavelet transform and the discrete wavelet packet transform. The traditional fault detection methods cannot efficiently detect open switch faults in traction inverters because of the low resolution or the sudden change of the current. The performances of Wavelet Packet Energy Shannon Entropy (WPESE), Wavelet Packet Energy Tsallis Entropy (WPETE) with different non-extensive parameters, Wavelet Packet Energy Shannon Entropy of a specific sub-band (WPESE3,6), Empirical Mode Decomposition Shannon Entropy (EMDESE), and Empirical Mode Decomposition Tsallis Entropy (EMDETE) with non-extensive parameters in detecting the open switch fault are evaluated by an evaluation parameter. Comparison experiments are carried out to select the best entropy form for traction inverter open switch fault detection. In addition, the DC component is adopted to isolate the faulty Insulated Gate Bipolar Transistor (IGBT). The simulation experiments show that the proposed plan can diagnose single and simultaneous open switch faults correctly and in a timely manner. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory I)
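The WPESE idea — decompose the current signal into sub-bands, compute each sub-band's energy fraction, and take the Shannon entropy of those fractions — can be sketched as follows. A hand-rolled Haar filter pair stands in for the full wavelet packet transform used in the paper, and the "faulty" signal is an illustrative distorted current, not real inverter data:

```python
import numpy as np

def haar_packet(signal, levels):
    """Recursively split a signal into 2**levels sub-bands with the Haar
    filter pair (a simplified stand-in for a wavelet packet transform)."""
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        next_bands = []
        for x in bands:
            next_bands.append((x[0::2] + x[1::2]) / np.sqrt(2.0))  # low-pass
            next_bands.append((x[0::2] - x[1::2]) / np.sqrt(2.0))  # high-pass
        bands = next_bands
    return bands

def wpese(signal, levels=3):
    """Shannon entropy of the normalized sub-band energies."""
    energies = np.array([np.sum(b ** 2) for b in haar_packet(signal, levels)])
    p = energies / energies.sum()
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

t = np.arange(256) / 256.0
healthy = np.sin(2 * np.pi * 8 * t)  # narrow-band current: energy concentrated
rng = np.random.default_rng(0)
faulty = healthy + 0.5 * rng.standard_normal(256)  # distorted current: energy spread

e_healthy = wpese(healthy)
e_faulty = wpese(faulty)  # higher entropy: energy spread over more sub-bands
```

The entropy rises when fault-induced distortion spreads energy across sub-bands, which is the property the paper's comparison of entropy forms exploits.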
Article
Tea Category Identification Using a Novel Fractional Fourier Entropy and Jaya Algorithm
Entropy 2016, 18(3), 77; https://doi.org/10.3390/e18030077 - 27 Feb 2016
Cited by 79 | Viewed by 4045
Abstract
This work proposes a tea-category identification (TCI) system that can automatically determine the tea category from images captured by a three charge-coupled device (3-CCD) digital camera. Three hundred tea images were acquired as the dataset. Apart from 64 traditional color histogram features, we also introduced a relatively new feature, the fractional Fourier entropy (FRFE), and extracted 25 FRFE features from each tea image. Furthermore, kernel principal component analysis (KPCA) was harnessed to reduce the 64 + 25 = 89 features to four. The four reduced features were fed into a feedforward neural network (FNN), whose optimal weights were obtained by the Jaya algorithm. The 10 × 10-fold stratified cross-validation (SCV) showed that our TCI system obtains an overall average sensitivity of 97.9%, higher than seven existing approaches, while using only four features, no more than the state-of-the-art approaches use. Our proposed system is therefore efficient for tea-category identification. Full article
(This article belongs to the Special Issue Computational Complexity)
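The fractional Fourier entropy combines a fractional Fourier transform with the Shannon entropy of the resulting spectrum. A full fractional transform is beyond a short example, so the sketch below illustrates only the α = 1 special case, where the transform reduces to the ordinary FFT:

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum -- the alpha = 1
    special case of the fractional Fourier entropy (FRFE) family."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 1e-12]  # skip numerically empty bins
    return float(-np.sum(p * np.log2(p)))

t = np.arange(512) / 512.0
tone = np.sin(2 * np.pi * 10 * t)       # energy in a single frequency bin
rng = np.random.default_rng(1)
texture = rng.standard_normal(512)      # energy spread over all bins

h_tone = spectral_entropy(tone)         # near 0: spectrum fully concentrated
h_texture = spectral_entropy(texture)   # large: spectrum spread out
```

Varying α between 0 and 1 yields the family of 25 FRFE features the paper extracts; each value measures how concentrated the image content is in a different time-frequency rotation.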
Article
Multi-Agent System Supporting Automated Large-Scale Photometric Computations
Entropy 2016, 18(3), 76; https://doi.org/10.3390/e18030076 - 27 Feb 2016
Cited by 6 | Viewed by 1907
Abstract
The technologies related to green energy, smart cities and similar areas, dynamically developed in recent years, frequently face problems of a computational rather than a technological nature. One example is accurately predicting the weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy. It also allows optimization to be terminated when the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach. Full article
(This article belongs to the Section Complexity)
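The use of entropy as a progress indicator can be illustrated abstractly: as the search concentrates on a few good configurations, the Shannon entropy of the configuration distribution drops, and a threshold on it can serve as a termination rule. This is a generic sketch, not the paper's actual agent system, and the setup names are hypothetical:

```python
import math
from collections import Counter

def config_entropy(configs):
    """Shannon entropy (bits) of the empirical distribution of configurations."""
    counts = Counter(configs)
    total = len(configs)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Early in the search: agents hold many different candidate dimming setups.
early = ["setup-%d" % i for i in range(16)]
# Late in the search: most agents have converged on one setup.
late = ["setup-3"] * 14 + ["setup-7", "setup-9"]

h_early = config_entropy(early)  # 4 bits: 16 equally likely setups
h_late = config_entropy(late)    # much lower: distribution has collapsed

# A simple termination rule: stop when entropy falls below a threshold.
converged = h_late < 1.0
```

The appeal of such a measure is that it summarizes the whole population's state in one number, independent of what the configurations actually encode.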
Article
The Impact of Entropy Production and Emission Mitigation on Economic Growth
Entropy 2016, 18(3), 75; https://doi.org/10.3390/e18030075 - 27 Feb 2016
Cited by 5 | Viewed by 2162
Abstract
Entropy production in industrial economies involves heat currents, driven by gradients of temperature, and particle currents, driven by specific external forces and gradients of temperature and chemical potentials. Pollution functions are constructed for the associated emissions. They reduce the output elasticities of the production factors capital, labor, and energy in the growth equation of the capital-labor-energy-creativity model when the emissions approach their critical limits. These limits are set by, e.g., health hazards or threats to ecological and climate stability. By definition, the limits oblige the economic actors to dedicate shares of the available production factors to emission mitigation, or to adjustments to the emission-induced changes in the biosphere. Since these shares are missing from the production of the quantity of goods and services that would be available to consumers and investors without emission mitigation, the “conventional” output of the economy shrinks. The resulting losses of conventional output are estimated for two classes of scenarios: (1) energy conservation; and (2) nuclear exit and subsidies to photovoltaics. The data of the scenarios refer to Germany in the 1980s and after 11 March 2011. For the energy-conservation scenarios, a method of computing the reduction of output elasticities by emission abatement is proposed. Full article
(This article belongs to the Special Issue Entropy and the Economy)
Article
Self-Replicating Spots in the Brusselator Model and Extreme Events in the One-Dimensional Case with Delay
Entropy 2016, 18(3), 64; https://doi.org/10.3390/e18030064 - 27 Feb 2016
Cited by 13 | Viewed by 2381
Abstract
We consider the paradigmatic Brusselator model for the study of dissipative structures in far-from-equilibrium systems. In two dimensions, we show the occurrence of a self-replication phenomenon leading to the fragmentation of a single localized spot into four daughter spots. This instability affects the new spots and leads to splitting behavior until the system reaches a hexagonal stationary pattern. This phenomenon occurs in the absence of delay feedback. In addition, we incorporate a time-delayed feedback loop in the Brusselator model. In one dimension, we show that the delay feedback induces extreme events in a chemical reaction-diffusion system. We characterize their formation by computing the probability distribution of the pulse height. The long-tailed statistical distribution, which is often considered a signature of the presence of rogue waves, appears for sufficiently strong feedback intensity. The generality of our analysis suggests that the feedback-induced instability leading to the spontaneous formation of rogue waves in a controllable way is a universal phenomenon. Full article
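The delay-free one-dimensional Brusselator, u_t = a − (b+1)u + u²v + D_u u_xx, v_t = bu − u²v + D_v v_xx, can be integrated with a simple explicit finite-difference scheme. The sketch below (parameter values chosen for illustration, not taken from the paper) verifies that the homogeneous steady state u* = a, v* = b/a is preserved by the scheme:

```python
import numpy as np

def brusselator_step(u, v, a, b, du, dv, dx, dt):
    """One explicit Euler step of the 1D Brusselator with periodic boundaries."""
    lap_u = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    lap_v = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx ** 2
    u_new = u + dt * (a - (b + 1) * u + u ** 2 * v + du * lap_u)
    v_new = v + dt * (b * u - u ** 2 * v + dv * lap_v)
    return u_new, v_new

a, b = 1.0, 2.0
n = 128
u = np.full(n, a)      # homogeneous steady state u* = a
v = np.full(n, b / a)  # homogeneous steady state v* = b / a
for _ in range(100):
    u, v = brusselator_step(u, v, a, b, du=1.0, dv=8.0, dx=0.5, dt=0.001)
```

At the steady state both reaction terms and the Laplacians vanish, so the fields remain constant; perturbing the initial condition (or adding the paper's time-delayed feedback term) is what drives pattern formation and, in the delayed case, extreme events.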