Table of Contents

Entropy, Volume 20, Issue 4 (April 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on its "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: What are the conceptual foundations of thermodynamics? Mathematicians have explored this question [...]
Open Access Editorial: Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work
Entropy 2018, 20(4), 307; https://doi.org/10.3390/e20040307
Received: 19 April 2018 / Revised: 19 April 2018 / Accepted: 19 April 2018 / Published: 23 April 2018
PDF Full-text (386 KB) | HTML Full-text | XML Full-text
Abstract
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of the mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted, and how they have been applied in empirical investigations. We then introduce the articles included in the Special Issue one by one, categorising them in a similar fashion into: i. proposals of new measures; ii. theoretical investigations into the properties and interpretations of such approaches; and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
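As a concrete illustration of the quantities being decomposed, the sketch below (a toy example assumed for this listing, not taken from the editorial) computes the classical mutual-information terms for the XOR gate, the canonical purely synergistic case: each source alone carries zero information about the target, yet the pair together carries one full bit.

```python
import numpy as np
from itertools import product

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Joint distribution p(x1, x2, y) for Y = X1 XOR X2 with uniform inputs.
p = np.zeros((2, 2, 2))
for x1, x2 in product(range(2), repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

i1 = mutual_information(p.sum(axis=1))      # I(X1;Y)
i2 = mutual_information(p.sum(axis=0))      # I(X2;Y)
i12 = mutual_information(p.reshape(4, 2))   # I(X1,X2;Y)
print(i1, i2, i12)  # 0.0, 0.0, 1.0 bits: the joint information here is purely synergistic
```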

Open Access Article: Balancing Non-Equilibrium Driving with Nucleotide Selectivity at Kinetic Checkpoints in Polymerase Fidelity Control
Entropy 2018, 20(4), 306; https://doi.org/10.3390/e20040306
Received: 27 February 2018 / Revised: 17 April 2018 / Accepted: 21 April 2018 / Published: 23 April 2018
PDF Full-text (20711 KB) | HTML Full-text | XML Full-text
Abstract
High-fidelity gene transcription and replication require kinetic discrimination of nucleotide substrate species by RNA and DNA polymerases under chemical non-equilibrium conditions. It is known that a sufficiently large free-energy driving force is needed in each polymerization or elongation cycle to keep the system far from equilibrium and achieve low error rates. Considering that each cycle consists of multiple kinetic steps with different transition rates, one expects that polymerases do not modulate the kinetics evenly across these steps. We show that accelerating different kinetic steps affects the overall elongation characteristics quite differently. In particular, for the forward transitions that discriminate cognate from non-cognate nucleotide species and thus serve as kinetic selection checkpoints, the transition can be neither too fast nor too slow if low error rates are to be obtained, since a balance is needed between nucleotide selectivity and the non-equilibrium driving. Such a balance is not the same as the speed-accuracy tradeoff, in which high accuracy is always obtained at the sacrifice of speed. For illustration purposes, we use three-state and five-state models of nucleotide addition in polymerase elongation and show how the non-equilibrium steady-state characteristics change under variations of the stepwise forward or backward kinetics. Notably, using multi-step elongation schemes and parameters from T7 RNA polymerase transcription elongation, we demonstrate that individual transitions serving as selection checkpoints need to proceed at moderate rates in order to sustain the necessary non-equilibrium drive while still allowing nucleotide selection for optimal error control. We also illustrate why rate-limiting conformational transitions of the enzyme likely play a significant role in error reduction.
(This article belongs to the Section Statistical Mechanics)
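The flavour of such stepwise kinetic models can be sketched numerically. The snippet below uses a hypothetical unicyclic three-state addition scheme with made-up rates (not the paper's T7 RNA polymerase parameters): it computes the steady-state elongation flux for right and wrong nucleotides and the resulting error fraction, and slowing or speeding the "checkpoint" step changes both.

```python
import numpy as np

def cycle_flux(fwd, bwd):
    """Steady-state cycle flux of a unicyclic N-state scheme.
    fwd[i]: rate i -> i+1 (mod N);  bwd[i]: rate i+1 -> i (mod N)."""
    n = len(fwd)
    K = np.zeros((n, n))                       # master-equation generator, dp/dt = K p
    for i in range(n):
        j = (i + 1) % n
        K[j, i] += fwd[i]; K[i, i] -= fwd[i]   # transition i -> j
        K[i, j] += bwd[i]; K[j, j] -= bwd[i]   # transition j -> i
    w, v = np.linalg.eig(K)                    # steady state = null vector of K
    p = np.real(v[:, np.argmin(np.abs(w))])
    p /= p.sum()
    return p[0] * fwd[0] - p[1] * bwd[0]       # net flux around the cycle

# Hypothetical rates (binding, selection checkpoint, catalysis), in arbitrary units.
right = cycle_flux([100.0, 50.0, 20.0], [10.0, 5.0, 1e-3])
wrong = cycle_flux([100.0, 50.0 / 500, 20.0], [10.0, 5.0 * 10, 1e-3])  # discriminated at step 2
print("error rate ~", wrong / (right + wrong))
```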

Open Access Article: On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
Entropy 2018, 20(4), 305; https://doi.org/10.3390/e20040305
Received: 22 January 2018 / Revised: 5 April 2018 / Accepted: 17 April 2018 / Published: 23 April 2018
PDF Full-text (574 KB) | HTML Full-text | XML Full-text
Abstract
Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, this accuracy often comes with significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in both academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a large impact on the overall speed of a ConvNet, achieving a ten-fold speedup over the baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded and robust, and does not require any time-consuming retraining, while still achieving speedups solely from the convolutional layers with no loss in baseline accuracy.
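For readers unfamiliar with Toom-Cook-style fast convolution, the sketch below shows the widely used F(2,3) minimal-filtering form (the standard Winograd/Toom-Cook construction, which may differ in detail from the scheme proposed in the paper): two outputs of a 3-tap 1D correlation are produced with four multiplications in the transform domain instead of six.

```python
import numpy as np

# Winograd/Toom-Cook F(2,3): fixed transform matrices for 2 outputs of a 3-tap filter.
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    return AT @ ((G @ g) * (BT @ d))   # only 4 elementwise multiplies in the transform domain

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
print(f23(d, g))                                 # fast path
print([d[i:i + 3] @ g for i in range(2)])        # direct correlation, for comparison
```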

Open Access Article: The Power Law Characteristics of Stock Price Jump Intervals: An Empirical and Computational Experimental Study
Entropy 2018, 20(4), 304; https://doi.org/10.3390/e20040304
Received: 22 March 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 21 April 2018
PDF Full-text (6106 KB) | HTML Full-text | XML Full-text
Abstract
For the first time, power-law characteristics of stock price jump intervals are found empirically to hold generally across stock markets. The classical jump-diffusion model is generalized into the jump-diffusion model with power law (JDMPL). An artificial stock market (ASM) is designed in which agents’ investment strategies, risk appetite, learning ability, adaptability, and dynamic changes are considered to create a dynamically changing environment. An analysis of the data packets from the ASM simulation indicates that, with the learning mechanism, the ASM reproduces the high-kurtosis, fat-tailed distribution characteristics commonly observed in real markets. Data packets obtained from simulating the ASM for 5010 periods are incorporated into a regression analysis. The results indicate that the JDMPL effectively characterizes the stock price jumps in the market. They also support the hypothesis that the time intervals of stock price jumps follow a power law, and indicate that the diversity and dynamic changes of agents’ investment strategies are the reasons for the discontinuity in the changes of stock prices.
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
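A minimal sketch of the kind of tail-exponent estimation involved, using synthetic Pareto-distributed intervals as stand-in data (not the ASM output) and the standard continuous power-law maximum-likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: Pareto-distributed intervals between detected price jumps.
alpha_true, x_min = 2.5, 1.0
intervals = x_min * (1.0 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))

# Continuous power-law MLE (Clauset-Shalizi-Newman): alpha_hat = 1 + n / sum(ln(x / x_min)).
tail = intervals[intervals >= x_min]
alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / x_min))
print(f"estimated tail exponent: {alpha_hat:.2f} (true value {alpha_true})")
```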

Open Access Article: The Conservation of Average Entropy Production Rate in a Model of Signal Transduction: Information Thermodynamics Based on the Fluctuation Theorem
Entropy 2018, 20(4), 303; https://doi.org/10.3390/e20040303
Received: 17 March 2018 / Revised: 18 April 2018 / Accepted: 19 April 2018 / Published: 21 April 2018
Cited by 1 | PDF Full-text (871 KB) | HTML Full-text | XML Full-text
Abstract
Cell signal transduction is a non-equilibrium process characterized by a reaction cascade. This study aims to quantify and compare signal transduction cascades using a model of signal transduction. The signal duration was found to be linked to the step-by-step transition probability, which was determined using information theory. By applying the fluctuation theorem to reversible signal steps, the transition probability was described in terms of the average entropy production rate. Specifically, when the number of signal events during the cascade was maximized, the average entropy production rate was found to be conserved over the entire cascade. This approach provides a quantitative means of analyzing signal transduction and identifies an effective cascade for a signaling network.
(This article belongs to the Section Information Theory)
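A minimal sketch of the kind of relation invoked, under the assumption of a two-outcome reversible step whose forward/backward statistics satisfy a detailed fluctuation theorem; the per-step average entropy production then depends only on the transition probability, so a cascade with equal step probabilities keeps it constant along the cascade. This is illustrative only, not the paper's model.

```python
import numpy as np

def step_entropy_production(p_forward):
    """Average entropy production (units of k_B) of a two-outcome reversible step whose
    outcomes obey a detailed fluctuation theorem P(+s)/P(-s) = exp(s), i.e. s = ln(p/(1-p))."""
    p = float(p_forward)
    s = np.log(p / (1.0 - p))
    return p * s + (1.0 - p) * (-s)     # <sigma> = (2p - 1) * ln(p / (1 - p))

# Hypothetical cascade: equal transition probability at every step gives a conserved
# per-step average entropy production along the cascade.
cascade = [0.8, 0.8, 0.8, 0.8]
print([round(step_entropy_production(p), 3) for p in cascade])
```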

Open Access Article: Location-Aware Incentive Mechanism for Traffic Offloading in Heterogeneous Networks: A Stackelberg Game Approach
Entropy 2018, 20(4), 302; https://doi.org/10.3390/e20040302
Received: 28 February 2018 / Revised: 1 April 2018 / Accepted: 4 April 2018 / Published: 20 April 2018
PDF Full-text (1230 KB) | HTML Full-text | XML Full-text
Abstract
This article investigates the traffic offloading problem in heterogeneous networks. The location of small cells is considered an important factor in two respects: the amount of resources they share for offloaded macrocell users and the performance enhancement they bring after offloading. A location-aware incentive mechanism is therefore designed to incentivize small cells to serve macrocell users. The reward division is based on the performance improvement brought to the macro network rather than on the amount of resources shared. Meanwhile, in order to preserve the priority of small cell users, they are weighted more heavily than macrocell users rather than being treated equally. The offloading problem is formulated as a Stackelberg game in which the macrocell base station is the leader and the small cells are the followers. The Stackelberg equilibrium of the game is proved to exist and to be unique, and it is also proved to be the optimum of the proposed problem. Simulation and numerical results verify the effectiveness of the proposed method.
(This article belongs to the Special Issue Information Theory in Game Theory)
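The leader-follower structure can be illustrated with a deliberately simplified toy game (hypothetical quadratic offloading costs and a logarithmic leader utility, not the paper's location-aware formulation): the followers best-respond to the announced reward, and the leader optimizes over that response by backward induction.

```python
import numpy as np

# Toy Stackelberg offloading game: the macro base station (leader) announces a unit
# reward r; each small cell i (follower) chooses its offloaded load x_i to maximize
# r * x_i - c_i * x_i**2, whose best response is x_i = r / (2 * c_i).
c = np.array([0.5, 1.0, 2.0])            # hypothetical per-cell offloading cost coefficients

def follower_best_response(r):
    return r / (2.0 * c)

def leader_utility(r, gain=1.0):
    x = follower_best_response(r)
    # Value of offloaded traffic (diminishing returns) minus total payments to followers.
    return gain * np.sum(np.log1p(x)) - r * np.sum(x)

rs = np.linspace(0.01, 2.0, 400)          # grid search over the leader's reward
r_star = rs[np.argmax([leader_utility(r) for r in rs])]
print("equilibrium reward ~", round(float(r_star), 3),
      " follower loads:", np.round(follower_best_response(r_star), 3))
```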

Open Access Feature Paper Article: Extended Thermodynamics of Rarefied Polyatomic Gases: 15-Field Theory Incorporating Relaxation Processes of Molecular Rotation and Vibration
Entropy 2018, 20(4), 301; https://doi.org/10.3390/e20040301
Received: 3 April 2018 / Revised: 17 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
PDF Full-text (402 KB) | HTML Full-text | XML Full-text
Abstract
After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The theory is justified, at the mesoscopic level, by a generalized Boltzmann equation in which the distribution function depends on two internal variables that take into account the energy exchange among the different molecular modes of a gas, that is, the translational, rotational, and vibrational modes. By adopting a generalized Bhatnagar, Gross and Krook (BGK)-type collision term, we derive explicitly the closed system of field equations with the use of the Maximum Entropy Principle (MEP). The NSF theory is derived from the RET theory as a limiting case of small relaxation times via the Maxwellian iteration. The relaxation times introduced in the theory are shown to be related to the shear and bulk viscosities and the heat conductivity.
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
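For orientation, the schematic equations below show the generic shape of a BGK-type relaxation term and of the Maxwellian-iteration limit that recovers the NSF constitutive laws, with transport coefficients proportional to relaxation times. These are textbook forms with assumed notation (f_eq, τ_S, τ_Π, τ_q for the reference distribution and the shear-stress, dynamic-pressure and heat-flux relaxation times), not the paper's specific 15-field closure.

```latex
% Schematic forms only; not the paper's 15-field closure.
\begin{align}
  \partial_t f + \xi_i\,\partial_{x_i} f
    &= -\frac{f - f_{\mathrm{eq}}}{\tau}
    && \text{(BGK-type relaxation toward a reference distribution)} \\
  \sigma_{ij} \simeq 2\mu\,\partial_{\langle i} v_{j\rangle},
    \qquad q_i &\simeq -\kappa\,\partial_{x_i} T
    && \text{(first Maxwellian iterate: NSF constitutive laws)} \\
  \mu \propto p\,\tau_{S}, \qquad \nu \propto p\,\tau_{\Pi},
    \qquad \kappa &\propto p\,\tau_{q}
    && \text{(transport coefficients from relaxation times)}
\end{align}
```
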
Open Access Article: Information-Length Scaling in a Generalized One-Dimensional Lloyd’s Model
Entropy 2018, 20(4), 300; https://doi.org/10.3390/e20040300
Received: 27 December 2017 / Revised: 29 March 2018 / Accepted: 8 April 2018 / Published: 20 April 2018
PDF Full-text (372 KB) | HTML Full-text | XML Full-text
Abstract
We perform a detailed numerical study of the localization properties of the eigenfunctions of one-dimensional (1D) tight-binding wires with on-site disorder characterized by long-tailed distributions: for large ϵ, P(ϵ) ∼ 1/ϵ^(1+α) with α ∈ (0, 2], where ϵ are the on-site random energies. Our model serves as a generalization of 1D Lloyd’s model, which corresponds to α = 1. In particular, we demonstrate that the information length β of the eigenfunctions follows the scaling law β = γx/(1 + γx), with x = ξ/L and γ ≡ γ(α). Here, ξ is the eigenfunction localization length (that we extract from the scaling of Landauer’s conductance) and L is the wire length. We also report that for α = 2 the properties of the 1D Anderson model are effectively reproduced.
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
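A minimal numerical sketch of the setup: heavy-tailed on-site disorder on a 1D tight-binding wire and the Shannon entropies of its eigenfunctions. The disorder distribution, the random-sign convention and the precise normalisation of the information length β are assumptions here and follow the paper only loosely.

```python
import numpy as np

rng = np.random.default_rng(1)

def wire_entropies(L=200, alpha=1.0, eps_min=0.1):
    """Shannon entropies of the eigenstates of a 1D tight-binding wire whose on-site
    energies have a power-law tail P(eps) ~ eps^{-(1+alpha)}, with random signs.
    alpha = 1 roughly corresponds to Lloyd's model (up to the exact distribution shape)."""
    eps = eps_min * rng.random(L) ** (-1.0 / alpha) * rng.choice([-1, 1], L)
    H = np.diag(eps) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    _, vecs = np.linalg.eigh(H)
    p = vecs ** 2                                  # |psi_n(i)|^2, one eigenstate per column
    return -np.sum(p * np.log(p + 1e-300), axis=0)

print("mean eigenfunction entropy:", wire_entropies().mean())
```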

Open Access Article: Quantum Nonlocality and Quantum Correlations in the Stern–Gerlach Experiment
Entropy 2018, 20(4), 299; https://doi.org/10.3390/e20040299
Received: 27 February 2018 / Revised: 11 April 2018 / Accepted: 12 April 2018 / Published: 19 April 2018
PDF Full-text (953 KB) | HTML Full-text | XML Full-text
Abstract
The Stern–Gerlach experiment (SGE) is one of the foundational experiments in quantum physics. It has been used in both the teaching and the development of quantum mechanics. However, for various reasons, some of its quantum features and implications are not fully addressed or comprehended in the current literature. Hence, the main aim of this paper is to demonstrate that the SGE possesses a quantum nonlocal character that has not been visualized or presented before. Accordingly, to exhibit the nonlocality in the SGE, we calculate the quantum correlations C(z, θ) by redefining the Banaszek–Wódkiewicz correlation in terms of the Wigner operator, that is, C(z, θ) = ⟨Ψ| Ŵ(z, p_z) σ̂(θ) |Ψ⟩, where Ŵ(z, p_z) is the Wigner operator, σ̂(θ) is the Pauli spin operator in an arbitrary direction θ, and |Ψ⟩ is the quantum state, given by an entangled state of the external degree of freedom and the eigenstates of the spin. We show that this correlation function for the SGE violates the Clauser–Horne–Shimony–Holt (CHSH) Bell inequality. Thus, this feature of the SGE might be of interest both for the teaching of quantum mechanics and for investigating the phenomenon of quantum nonlocality.
(This article belongs to the Special Issue Quantum Nonlocality)
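For comparison with the familiar spin-only case, the snippet below evaluates the standard CHSH combination for singlet-state correlations E(a, b) = -cos(a - b), reaching the Tsirelson bound 2√2. It is not the paper's Wigner-operator correlation C(z, θ); it is only the textbook benchmark against which such violations are judged.

```python
import numpy as np

def E(a, b):
    """Singlet-state spin correlation E(a, b) = -cos(a - b) for measurement angles a, b."""
    return -np.cos(a - b)

# Standard CHSH combination at the angles that maximize the quantum value.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S), "vs classical bound 2 and Tsirelson bound", 2 * np.sqrt(2))
```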

Open Access Feature Paper Article: Calculation of Configurational Entropy in Complex Landscapes
Entropy 2018, 20(4), 298; https://doi.org/10.3390/e20040298
Received: 22 December 2017 / Revised: 4 April 2018 / Accepted: 11 April 2018 / Published: 19 April 2018
Cited by 1 | PDF Full-text (2493 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal to the logarithm of the number of ways a lattice with a given dimensionality and number of classes can be arranged to produce the same total amount of edge between cells of different classes. However, that work also seemed to suggest that the feasibility of applying this method to real landscapes was limited by the intractably large numbers of possible arrangements of raster cells in large landscapes. Here I extend that work by showing that: (1) the proportion of arrangements, rather than the number, with a given amount of edge length provides a means to calculate unbiased relative configurational entropy, obviating the need to compute all possible configurations of a landscape lattice; (2) the edge lengths of randomized landscape mosaics are normally distributed, following the central limit theorem; and (3) given this normal distribution it is possible to fit parametric probability density functions to estimate the expected proportion of randomized configurations that have any given edge length, enabling the calculation of configurational entropy on any landscape regardless of size or number of classes. (4) I evaluate the boundary limits of this normal approximation for small landscapes with a small proportion of a minority class and show that it holds under all realistic landscape conditions. I further (5) demonstrate that this relationship holds for a sample of real landscapes that vary in size, patch richness, and evenness of area in each cover type, and (6) I show that the mean and standard deviation of the normally distributed edge lengths can be predicted nearly perfectly as a function of the size, patch richness and diversity of a landscape. Finally, (7) I show that the configurational entropy of a landscape is strongly related to the dimensionality of the landscape, the number of cover classes, the evenness of landscape composition across classes, and landscape heterogeneity. These advances provide a means for researchers to directly estimate the frequency distribution of all possible macrostates of any observed landscape, to directly calculate the relative configurational entropy of the observed macrostate, and to understand the ecological meaning of different amounts of configurational entropy. These advances enable scientists to take configurational entropy from a concept to an applied tool to measure and compare the disorder of real landscapes with an objective and unbiased measure based on entropy and the second law.
(This article belongs to the Special Issue Entropy in Landscape Ecology)
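A minimal sketch of the sampling-plus-normal-approximation idea described above, applied to a hypothetical two-class 20x20 raster with rook adjacency. The exact definition of edge length, the randomisation scheme and the entropy normalisation used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def edge_length(grid):
    """Number of adjacencies between cells of different classes (rook neighbours)."""
    return int(np.sum(grid[:, 1:] != grid[:, :-1]) + np.sum(grid[1:, :] != grid[:-1, :]))

# A small "observed" landscape: two classes on a 20x20 lattice (hypothetical data).
obs = (rng.random((20, 20)) < 0.3).astype(int)
e_obs = edge_length(obs)

# Randomize cell positions to sample the distribution of edge lengths over arrangements.
cells = obs.ravel()
edges = np.array([edge_length(rng.permutation(cells).reshape(obs.shape)) for _ in range(2000)])

# Normal approximation (central limit theorem) to the proportion of arrangements having
# the observed edge length; relative configurational entropy ~ ln(proportion).
mu, sd = edges.mean(), edges.std(ddof=1)
proportion = np.exp(-0.5 * ((e_obs - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
print("observed edge length:", e_obs, " relative entropy ~", round(float(np.log(proportion)), 3))
```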

Open Access Article: Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297
Received: 10 July 2017 / Revised: 6 April 2018 / Accepted: 10 April 2018 / Published: 18 April 2018
Cited by 1 | PDF Full-text (529 KB) | HTML Full-text | XML Full-text
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
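As a small numerical aside, the pointwise mutual information of a single realisation splits into two non-negative surprisal terms, i(s;t) = h(s) - h(s|t), which is the kind of specificity/ambiguity split referred to above; the exact conventions and the lattice construction follow the paper, not this sketch.

```python
import numpy as np

def pointwise_components(p_s, p_s_given_t):
    """Split pointwise mutual information i(s;t) = h(s) - h(s|t) into its two
    non-negative entropic parts (surprisal of s, and surprisal of s given t)."""
    specificity = -np.log2(p_s)            # h(s)
    ambiguity = -np.log2(p_s_given_t)      # h(s|t)
    return specificity, ambiguity, specificity - ambiguity

# Example realisation: s occurs with p(s) = 0.25 overall but p(s|t) = 0.5 once t is known.
print(pointwise_components(0.25, 0.5))     # (2.0, 1.0, 1.0) bits -> i(s;t) = 1 bit
```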

Open Access Article: Image Clustering with Optimization Algorithms and Color Space
Entropy 2018, 20(4), 296; https://doi.org/10.3390/e20040296
Received: 19 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (30472 KB) | HTML Full-text | XML Full-text
Abstract
In image clustering, it is desired that pixels assigned to the same class be identical or similar; in other words, the homogeneity of a cluster must be high. In gray-scale image segmentation, this goal is achieved by increasing the number of thresholds. However, the determination of multiple thresholds is a challenging issue, and conventional thresholding algorithms cannot be used directly for color image segmentation. In this study, a new color image clustering algorithm with multilevel thresholding is presented, and it is shown how multilevel thresholding techniques can be used for color image clustering. Initially, threshold selection techniques such as the Otsu and Kapur methods were employed for each color channel separately. The objective functions of both approaches were integrated with the forest optimization algorithm (FOA) and the particle swarm optimization (PSO) algorithm. In the next stage, the thresholds determined by the optimization algorithms were used to divide the color space into small cubes or prisms, and each sub-cube or prism created in the color space was treated as a cluster. As the volume of the prisms affects the homogeneity of the resulting clusters, multiple thresholds were employed to reduce the sizes of the sub-cubes. The performance of the proposed method was tested on different images, and the results obtained were more efficient than those of conventional methods.
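A stripped-down sketch of the per-channel thresholding idea, assuming a single Otsu threshold per RGB channel so that the colour cube splits into 2x2x2 sub-cubes, with random pixels as stand-in data; the optimisation-driven, multilevel version in the paper generalises this to more thresholds per channel and to Kapur's criterion.

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Otsu's threshold for one 8-bit channel (maximises the between-class variance)."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 weight for each candidate threshold
    mu = np.cumsum(p * np.arange(bins))     # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(between))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64, 3))                 # hypothetical RGB image
thresholds = [otsu_threshold(img[..., c]) for c in range(3)]

# One threshold per channel -> each pixel falls into one of 8 colour-space sub-cubes.
labels = sum(((img[..., c] > thresholds[c]).astype(int) << c) for c in range(3))
print("thresholds:", thresholds, " clusters used:", np.unique(labels).size)
```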

Open Access Article: A Novel Algorithm to Improve Digital Chaotic Sequence Complexity through CCEMD and PE
Entropy 2018, 20(4), 295; https://doi.org/10.3390/e20040295
Received: 18 March 2018 / Revised: 10 April 2018 / Accepted: 12 April 2018 / Published: 18 April 2018
PDF Full-text (4904 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross-section, and the equilibria and initial-value sensitivity are analyzed by numerical simulation. Further, we designed a new algorithm based on complementary ensemble empirical mode decomposition (CEEMD) and permutation entropy (PE) that can effectively enhance the complexity of digital chaotic sequences. In addition, an image encryption experiment was performed in which the chaotic binary sequences were post-processed by the new algorithm. The experimental results show the good performance of the chaotic binary sequences.
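Permutation entropy itself is easy to sketch. The snippet below implements the standard Bandt-Pompe estimator and applies it to a logistic-map sequence used as a stand-in chaotic signal; the paper's hidden-attractor system and the CEEMD post-processing step are not reproduced here.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy (Bandt-Pompe) of a 1D sequence."""
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        counts[tuple(np.argsort(window))] += 1      # ordinal pattern of this window
    p = np.array([c for c in counts.values() if c > 0], float) / n
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(order)))

# Logistic-map sequence as a stand-in chaotic signal.
x = np.empty(5000); x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print("permutation entropy:", round(permutation_entropy(x, order=5), 3))
```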

Open Access Feature Paper Article: A Lenient Causal Arrow of Time?
Entropy 2018, 20(4), 294; https://doi.org/10.3390/e20040294
Received: 29 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (616 KB) | HTML Full-text | XML Full-text
Abstract
One of the basic assumptions underlying Bell’s theorem is the causal arrow of time, having to do with temporal order rather than spatial separation. Nonetheless, the physical assumptions regarding causality are seldom studied in this context, and often even go unmentioned, in stark contrast with the many different possible locality conditions which have been studied and elaborated upon. In the present work, some retrocausal toy-models which reproduce the predictions of quantum mechanics for Bell-type correlations are reviewed. It is pointed out that a certain toy-model which is ostensibly superdeterministic—based on denying the free-variable status of some of quantum mechanics’ input parameters—actually contains within it a complete retrocausal toy-model. Occam’s razor thus indicates that the superdeterministic point of view is superfluous. A challenge is to generalize the retrocausal toy-models to a full theory—a reformulation of quantum mechanics—in which the standard causal arrow of time would be replaced by a more lenient one: an arrow of time applicable only to macroscopically-available information. In discussing such a reformulation, one finds that many of the perplexing features of quantum mechanics could arise naturally, especially in the context of stochastic theories.
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

Open Access Article: Entropy Production on the Gravity-Driven Flow with Free Surface Down an Inclined Plane Subjected to Constant Temperature
Entropy 2018, 20(4), 293; https://doi.org/10.3390/e20040293
Received: 22 March 2018 / Revised: 13 April 2018 / Accepted: 16 April 2018 / Published: 17 April 2018
PDF Full-text (3077 KB) | HTML Full-text | XML Full-text
Abstract
The long-wave approximation of a falling film down an inclined plane at constant temperature is used to investigate the volumetric averaged entropy production. The velocity and temperature fields are computed numerically from the evolution equation for the deformable free interface. The dynamics of the falling film play an important role in the entropy production. When the layer shows an unstable evolution, the entropy production by fluid friction is much larger than that of a film with a stable flat interface. As heat is transferred actively from the free surface to the ambient air, the temperature gradient inside the flowing film becomes large and the entropy generation by heat transfer increases. The contribution of fluid friction to the volumetric averaged entropy production is larger than that of heat transfer at moderate and high viscous dissipation parameters.
(This article belongs to the Special Issue Entropy Production in Turbulent Flow)
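For reference, the two contributions compared in the abstract have the following generic local form for an incompressible Newtonian film in two dimensions (textbook expression; the paper's nondimensional variables and averaging procedure may differ):

```latex
% Generic local entropy generation rate, split into heat-transfer and fluid-friction parts.
\begin{equation}
  S_{\mathrm{gen}}'''
  = \underbrace{\frac{k}{T^{2}}
      \left[\Big(\frac{\partial T}{\partial x}\Big)^{2}
           +\Big(\frac{\partial T}{\partial y}\Big)^{2}\right]}_{\text{heat transfer}}
  + \underbrace{\frac{\mu}{T}
      \left[2\Big(\frac{\partial u}{\partial x}\Big)^{2}
           +2\Big(\frac{\partial v}{\partial y}\Big)^{2}
           +\Big(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big)^{2}\right]}_{\text{fluid friction}}
\end{equation}
```

The volumetric averaged entropy production then follows by integrating this local rate across the film thickness and over one wavelength of the deformed interface.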
