Entropy, Volume 20, Issue 10 (October 2018) – 89 articles

Cover Story: By studying the stochastic complexity of spin models with interactions of arbitrary order, we find that the simplicity of a model is determined not by the order of the interactions but by their mutual arrangement. As in physics, simple models allow predictions that are easy to falsify. By contrast, models often used in statistical learning appear highly complex because of their extended set of interactions, and are therefore difficult to falsify. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
12 pages, 1566 KiB  
Article
Estimation of Activity Interaction Parameters in Fe-S-j Systems
by Tianhua Ju 1, Xueyong Ding 1,*, Yingyi Zhang 2,*, Weiliang Chen 1, Xiangkui Cheng 3, Bo Wang 1, Jingxin Dai 1 and Xinlin Yan 4,*
1 School of Metallurgy, Northeastern University, Shenyang 110004, China
2 School of Metallurgical Engineering, Anhui University of Technology, Maanshan 243002, China
3 Institute of International Vanadium Titanium, Panzhihua University, Panzhihua 617000, China
4 Institute of Solid State Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, 1040 Vienna, Austria
Entropy 2018, 20(10), 808; https://doi.org/10.3390/e20100808 - 22 Oct 2018
Cited by 27 | Viewed by 3792
Abstract
It is important to know the activity interaction parameters between components in melts in the process of metallurgy. However, it is considerably difficult to measure them experimentally, so one still relies to a large extent on theoretical calculations. In this paper, the first-order activity interaction parameter e_S^j of element j on sulphur in Fe-based melts at 1873 K is investigated with a calculation model established by combining the Miedema model and the Toop-Hillert geometric model, taking into account excess entropy and mixing enthalpy. We consider two strategies, with or without using excess entropy in the calculations. Our results show that: (1) the predicted values are in good agreement with those recommended by the Japan Society for the Promotion of Science (JSPS); and (2) the agreement is even better when excess entropy is considered in the calculations. In addition, the deviations of our theoretical results from the experimental values, |e_S^j(exp) − e_S^j(cal)|, depend on the location of element j in the periodic table. Full article
(This article belongs to the Section Thermodynamics)

29 pages, 1450 KiB  
Article
The Role of Data in Model Building and Prediction: A Survey Through Examples
by Marco Baldovin 1, Fabio Cecconi 2, Massimo Cencini 2, Andrea Puglisi 3 and Angelo Vulpiani 1,4,*
1 Dipartimento di Fisica, “Sapienza” Università di Roma, p.le A. Moro 2, 00185 Roma, Italy
2 Istituto dei Sistemi Complessi, CNR, via dei Taurini 19, 00185 Rome, Italy
3 CNR-ISC and Dipartimento di Fisica, Sapienza Università di Roma, p.le A. Moro 2, 00185 Roma, Italy
4 Centro Linceo Interdisciplinare “B. Segre”, Accademia dei Lincei, via della Lungara 10, 00165 Rome, Italy
Entropy 2018, 20(10), 807; https://doi.org/10.3390/e20100807 - 22 Oct 2018
Cited by 19 | Viewed by 3963
Abstract
The goal of science is to understand phenomena and systems in order to predict their development and gain control over them. In the scientific process of knowledge elaboration, a crucial role is played by models which, in the language of the quantitative sciences, are abstract mathematical or algorithmic representations. This short review discusses a few key examples from physics, taken from dynamical systems theory, biophysics, and statistical mechanics, representing three paradigmatic procedures for building models and predictions from available data. In the case of dynamical systems, we show how predictions can be obtained in a virtually model-free framework using the method of analogues, and we briefly discuss other approaches based on machine learning methods. In cases where the complexity of systems is challenging, as in biophysics, we stress the necessity of including part of the empirical knowledge in the models to attain a minimal amount of realism. Finally, we consider many-body systems where many (temporal or spatial) scales are at play, and show how to derive from data a dimensional reduction in terms of a Langevin dynamics for their slow components. Full article
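The method of analogues mentioned above can be sketched in a few lines: find the past delay vectors closest to the current state and forecast from their successors. This is a toy illustration under our own parameter choices, not the authors' implementation:

```python
import numpy as np

def analogue_forecast(series, m=3, k=5):
    """Forecast the next value of `series` by averaging the successors
    of the k past delay vectors closest to the most recent state."""
    # Delay vectors x_i = (s_i, ..., s_{i+m-1}); each has successor s_{i+m}
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    current = series[-m:]
    d = np.linalg.norm(X - current, axis=1)   # distance to every past state
    nearest = np.argsort(d)[:k]               # indices of the k best analogues
    return np.mean([series[i + m] for i in nearest])

# Toy check on a noiseless periodic signal (period 25): the next value is sin(2*pi*400/25) = 0
t = np.arange(400)
s = np.sin(2 * np.pi * t / 25)
pred = analogue_forecast(s, m=4, k=3)
```

For a periodic signal the analogues are exact repeats of the current state, so the forecast is essentially perfect; for chaotic data the quality degrades with the prediction horizon.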
(This article belongs to the Special Issue Economic Fitness and Complexity)

12 pages, 1307 KiB  
Article
Distributed Joint Source-Channel Coding Using Quasi-Uniform Systematic Polar Codes
by Liqiang Jin and Hongwen Yang *
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Entropy 2018, 20(10), 806; https://doi.org/10.3390/e20100806 - 22 Oct 2018
Cited by 17 | Viewed by 2877
Abstract
This paper proposes a distributed joint source-channel coding (DJSCC) scheme using polar-like codes. In the proposed scheme, each distributed source encodes its source message with a quasi-uniform systematic polar code (QSPC) or a punctured QSPC, and transmits only the parity bits over its independent channel. These systematic codes play the role of both source compression and error protection. For infinite code length, we show that the proposed scheme approaches the information-theoretic limit by the technique of joint source-channel polarization with side information. For finite code lengths, simulation results verify that the proposed scheme outperforms the distributed separate source-channel coding (DSSCC) scheme using polar codes and the DJSCC scheme using classic systematic polar codes. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)

21 pages, 1046 KiB  
Article
Stock Net Entropy: Evidence from the Chinese Growth Enterprise Market
by Qiuna Lv 1, Liyan Han 1, Yipeng Wan 2 and Libo Yin 3,*
1 School of Economics and Management, Beihang University, Beijing 100083, China
2 Math Club Center, Acalanes High School, Lafayette, CA 94549, USA
3 School of Finance, Central University of Finance and Economics, Beijing 100081, China
Entropy 2018, 20(10), 805; https://doi.org/10.3390/e20100805 - 19 Oct 2018
Cited by 12 | Viewed by 3107
Abstract
By introducing net entropy into a stock network, this paper investigates the impact of network entropy on market returns and trading in the Chinese Growth Enterprise Market (GEM). Indices of Wu structure entropy (WSE) and SD structure entropy (SDSE) are considered as indicators of network heterogeneity to represent market diversification. A series of dynamic financial networks consisting of 1066 daily nets is constructed by applying the dynamic conditional correlation multivariate GARCH (DCC-MV-GARCH) model with a threshold adjustment. Then, we evaluate the quantitative relationships between the network entropy indices and market trading variables, and their bilateral information spillover effects, by applying the bivariate EGARCH model. There are two main findings. Firstly, the evidence clearly shows that both market returns and trading volumes are negatively associated with the network entropy indices, which indicates that stock heterogeneity (by definition, inversely related to the value of the network entropy indices) can help to improve market returns and increase market trading volumes. Secondly, the results show significant information transmission between the indicators of network entropy and the stock market trading variables. Full article
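Structure entropy indices of the kind used above are typically built from the normalized degree distribution of the network. The following is a minimal sketch under that assumption; the paper's exact WSE/SDSE estimators may differ in detail:

```python
import numpy as np

def degree_structure_entropy(adj):
    """Shannon entropy of the normalised degree distribution.
    Used here as a stand-in for a degree-based structure entropy
    (the paper's WSE/SDSE estimators may differ in detail)."""
    k = adj.sum(axis=1).astype(float)
    p = k / k.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Homogeneous networks maximise the entropy; heterogeneous ones lower it
n = 5
complete = np.ones((n, n)) - np.eye(n)   # all degrees equal: entropy = ln(5)
star = np.zeros((n, n))
star[0, 1:] = 1
star[1:, 0] = 1                          # one hub, four leaves: entropy = ln(4)
```

This matches the sign convention in the abstract: a more heterogeneous (star-like) network has a lower entropy value than a homogeneous one.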

11 pages, 263 KiB  
Review
Group Entropies: From Phase Space Geometry to Entropy Functionals via Group Theory
by Henrik Jeldtoft Jensen 1,2,* and Piergiulio Tempesta 3,4
1 Centre for Complexity Science and Department of Mathematics, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
2 Institute of Innovative Research, Tokyo Institute of Technology, 4259, Nagatsuta-cho, Yokohama 226-8502, Japan
3 Departamento de Física Teórica, Universidad Complutense de Madrid, 28040 Madrid, Spain
4 Instituto de Ciencias Matemáticas (ICMAT), 28049 Madrid, Spain
Entropy 2018, 20(10), 804; https://doi.org/10.3390/e20100804 - 19 Oct 2018
Cited by 17 | Viewed by 3219
Abstract
The Boltzmann-Gibbs entropy, as proved by Shannon and Khinchin, is based on four axioms, of which the fourth concerns additivity. The group-theoretic entropies make use of formal group theory to replace this axiom with a more general composability axiom. As has been pointed out before, generalised entropies crucially depend on the number of allowed degrees of freedom N. The functional form of group entropies is restricted (though not uniquely determined) by assuming extensivity on the equal-probability ensemble, which leads to classes of functionals corresponding to sub-exponential, exponential or super-exponential dependence of the phase-space volume W on N. We review the ensuing entropies, discuss the composability axiom and explain why group entropies may be particularly relevant from an information-theoretical perspective. Full article
(This article belongs to the Special Issue Nonadditive Entropies and Complex Systems)
14 pages, 2308 KiB  
Article
The Middle-Income Trap and the Coping Strategies From Network-Based Perspectives
by Ming-Yang Zhou, Wen-Man Xiong, Xiao-Yu Li and Hao Liao *
Guangdong Province Key Laboratory of Popular High Performance Computers, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
Entropy 2018, 20(10), 803; https://doi.org/10.3390/e20100803 - 18 Oct 2018
Cited by 5 | Viewed by 3218
Abstract
When a developing country reaches a relatively average income level, it often stops growing further and its income does not improve. This is known as the middle-income trap. How to overcome this trap is a longstanding problem for developing countries, and it has been studied in various research fields. In this work, we use the Fitness-Complexity method (FCM) to analyze the common characteristics of countries that have successfully made it through the middle-income trap, and we trace the origin of the trap in the international trade network. In the analysis, a novel method is proposed to characterize the interdependency between products. The results show that some middle-complexity products depend strongly on each other, which indicates that developing countries should focus on them simultaneously; this interdependence makes the middle-income trap hard to escape. To tackle the middle-income trap, developing countries should learn from developed countries that share a similar development history. We then design an effective method to evaluate the similarity between countries and to recommend developed countries to a given developing country. The effectiveness of our method is validated on the international trade network. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)

19 pages, 713 KiB  
Article
Macroscopic Entropy of Non-Equilibrium Systems and Postulates of Extended Thermodynamics: Application to Transport Phenomena and Chemical Reactions in Nanoparticles
by Sergey I. Serdyukov 1,2
1 Department of Chemistry, Moscow State University, 119992 Moscow, Russia
2 Topchiev Institute of Petrochemical Synthesis, Russian Academy of Sciences, 119991 Moscow, Russia
Entropy 2018, 20(10), 802; https://doi.org/10.3390/e20100802 - 18 Oct 2018
Cited by 11 | Viewed by 3338
Abstract
In this work, we consider extended irreversible thermodynamics, assuming that the entropy density is a function of both the common thermodynamic variables and their higher-order time derivatives. An expression for entropy production, and the linear phenomenological equations describing diffusion and chemical reactions, are found in the context of this approach. Solutions of the sets of linear equations with respect to the fluxes and their higher-order time derivatives allow the diffusion coefficients and reaction rate constants to be established as functions of the size of the nanosystems in which these processes occur. The Maxwell-Cattaneo and Jeffreys constitutive equations, as well as higher-order constitutive equations describing the processes in reaction-diffusion systems, are obtained. Full article
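For reference, the Maxwell-Cattaneo constitutive equation named above is the standard relaxational generalisation of Fourier's law (notation ours: $\mathbf{q}$ heat flux, $\tau$ relaxation time, $\lambda$ thermal conductivity):

```latex
% Maxwell-Cattaneo equation: Fourier's law augmented with a flux-relaxation
% term; it turns the parabolic heat equation into a hyperbolic (telegraph) one.
\tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\lambda \nabla T
```

Setting $\tau = 0$ recovers the classical Fourier law $\mathbf{q} = -\lambda \nabla T$.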
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

21 pages, 3705 KiB  
Article
Encryption Algorithm of Multiple-Image Using Mixed Image Elements and Two Dimensional Chaotic Economic Map
by A. A. Karawia 1,2
1 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
2 Computer Science Unit, Deanship of Educational Services, Qassim University, P.O. Box 6595, Buraidah 51452, Saudi Arabia
Entropy 2018, 20(10), 801; https://doi.org/10.3390/e20100801 - 18 Oct 2018
Cited by 32 | Viewed by 4179
Abstract
To enhance encryption proficiency and support the protected transmission of multiple images, the current work introduces an encryption algorithm for multiple images using a combination of mixed image elements (MIES) and a two-dimensional economic map. Firstly, the original images are grouped into one big image that is split into many pure image elements (PIES); secondly, the logistic map is used to shuffle the PIES; thirdly, they are confused with the sequence produced by the two-dimensional economic map to obtain the MIES; finally, the MIES are gathered into a big encrypted image that is split into many images of the same size as the originals. The proposed algorithm has a huge key space, which makes it secure against brute-force attacks. Moreover, the encryption results obtained by the proposed algorithm outperform those of existing algorithms in the literature. A comparison between the proposed algorithm and similar algorithms is made. The analysis of the experimental results shows that the proposed algorithm is efficient and secure. Full article
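The logistic-map shuffling step can be sketched as follows. The key values (x0, r) are illustrative, not from the paper, and the economic-map confusion stage is omitted:

```python
import numpy as np

def logistic_permutation(n, x0, r, burn=100):
    """Derive a permutation of range(n) from a logistic-map orbit;
    (x0, r) act as the secret key (values used below are illustrative)."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)           # ranking the chaotic orbit yields the shuffle

def shuffle_elements(flat, key):
    return flat[logistic_permutation(len(flat), *key)]

def unshuffle_elements(flat, key):
    perm = logistic_permutation(len(flat), *key)
    out = np.empty_like(flat)
    out[perm] = flat                 # invert the permutation
    return out
```

Because the permutation is reproducible from the key alone, the receiver can invert the shuffle exactly; a slightly different key yields a completely different orbit, which is what gives chaos-based schemes their key sensitivity.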
(This article belongs to the Special Issue Entropy in Image Analysis)

15 pages, 11623 KiB  
Article
Heat Transfer and Flow Structures of Laminar Confined Slot Impingement Jet with Power-Law Non-Newtonian Fluid
by Yan Qiang 1,2, Liejiang Wei 1,2,*, Xiaomei Luo 3, Hongchao Jian 1,2, Wenan Wang 1,2 and Fenfen Li 1,2
1 College of Energy and Power Engineering, Lanzhou University of Technology, Lanzhou 730050, China
2 Key Laboratory of Fluid Machinery and System, Lanzhou 730050, China
3 China North Vehicle Research Institute, Beijing 100071, China
Entropy 2018, 20(10), 800; https://doi.org/10.3390/e20100800 - 18 Oct 2018
Cited by 10 | Viewed by 3227
Abstract
The heat transfer performance and flow structures of laminar impinging slot jets with power-law non-Newtonian fluids, including typical industrial fluids (carboxymethyl cellulose (CMC) solutions and xanthan gum (XG) solutions), have been studied in this work. Investigations are performed for Reynolds numbers Re below 200, power-law index n ranging from 0.5 to 1.5, and consistency index K varying from 0.001 to 0.5, to explore the heat transfer and flow structure of shear-thinning and shear-thickening fluids. The results indicate that, as n and K increase for a given Re, the wall Nusselt number increases, mainly because the inlet velocity U increases. For a given inlet velocity, the wall Nusselt number decreases as n and K increase, which is mainly attributable to the increase of apparent viscosity and the reduction of momentum diffusion. For the same Re, U and Pr, the wall Nusselt number decreases as n increases. Among the industrial power-law shear-thinning fluids studied, the 100 ppm CMC solution shows the best heat transfer performance at a given velocity. Moreover, a new Nusselt number correlation for the industrial fluids is proposed. In general, for the heat transfer of a laminar confined impinging jet, it is best to use a working fluid with low viscosity. Full article

12 pages, 2946 KiB  
Article
Long-Term Independence of Solar Wind Polytropic Index on Plasma Flow Speed
by George Livadiotis
Division of Space Science and Engineering, Southwest Research Institute, San Antonio, TX 78238, USA
Entropy 2018, 20(10), 799; https://doi.org/10.3390/e20100799 - 17 Oct 2018
Cited by 32 | Viewed by 3596
Abstract
The paper derives the polytropic indices over the last two solar cycles (years 1995–2017) for the solar wind proton plasma near Earth (~1 AU). We use ~92-s datasets of proton plasma moments (speed, density, and temperature), measured by the Solar Wind Experiment instrument onboard the Wind spacecraft, to estimate the moving averages of the polytropic index, as well as their weighted means and standard errors as a function of the solar wind speed and the year of measurement. The derived long-term behavior of the polytropic index agrees with the results of previous methods. In particular, we find that the polytropic index remains quasi-constant with respect to the plasma flow speed, in agreement with earlier analyses of solar wind plasma. It is shown that most of the fluctuations of the polytropic index appear in the fast solar wind. The polytropic index remains quasi-constant despite the frequent entropic variations. Therefore, on an annual basis, the polytropic index of the solar wind proton plasma near ~1 AU can be considered independent of the plasma flow speed. The estimated all-year weighted mean and its standard error is γ = 1.86 ± 0.09. Full article
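A polytropic index can be extracted from density and temperature moments via the polytropic relation T ∝ n^(γ−1). A minimal sketch on synthetic data (not the Wind measurements, and not the paper's moving-average pipeline):

```python
import numpy as np

def polytropic_index(density, temperature):
    """Estimate gamma from the polytropic relation T ∝ n^(gamma - 1)
    via a least-squares fit of ln T against ln n."""
    slope, _intercept = np.polyfit(np.log(density), np.log(temperature), 1)
    return slope + 1.0

# Synthetic check: moments generated with gamma = 1.86 recover that value
rng = np.random.default_rng(0)
n = rng.uniform(1.0, 10.0, 500)      # proton density (arbitrary units)
T = 4.0 * n ** (1.86 - 1.0)          # temperature obeying the polytrope
gamma = polytropic_index(n, T)
```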
(This article belongs to the Section Statistical Physics)

18 pages, 269 KiB  
Article
Algorithmic Entropy and Landauer’s Principle Link Microscopic System Behaviour to the Thermodynamic Entropy
by Sean Devine
School of Management, Victoria University of Wellington, P.O. Box 600, Wellington 6140, New Zealand
Entropy 2018, 20(10), 798; https://doi.org/10.3390/e20100798 - 17 Oct 2018
Cited by 88 | Viewed by 3016
Abstract
Algorithmic information theory in conjunction with Landauer's principle can quantify the cost of maintaining a reversible real-world computational system distant from equilibrium. As computational bits are conserved in an isolated reversible system, bit flows can be used to track the way a highly improbable configuration trends toward a highly probable equilibrium configuration. In an isolated reversible system, all microstates within a thermodynamic macrostate have the same algorithmic entropy. From a thermodynamic perspective, however, when these bits primarily specify stored energy states, corresponding to a fluctuation from the most probable set of states, they represent "potential entropy". These bits become "realised entropy" when, under the second law of thermodynamics, they become bits specifying the momentum degrees of freedom. The distance of a fluctuation from equilibrium is identified with the number of computational bits that move from stored energy states to momentum states to define a highly probable or typical equilibrium state. When reversibility applies, it follows from Landauer's principle that it costs k_B T ln 2 joules to move a bit within the system from the stored energy states to the momentum states. Full article
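The Landauer cost quoted above is easy to evaluate numerically:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K (exact SI value)

def landauer_cost(temperature_kelvin, bits=1):
    """Minimum energy in joules to move (or erase) `bits` bits
    at the given temperature, per Landauer's principle."""
    return bits * K_B * temperature_kelvin * math.log(2)

E = landauer_cost(300.0)   # about 2.87e-21 J per bit at room temperature
```

The tiny magnitude is the point: Landauer's bound sets the thermodynamic floor for bit manipulation, far below what practical hardware dissipates.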
(This article belongs to the Section Information Theory, Probability and Statistics)
15 pages, 952 KiB  
Article
Some Consequences of the Thermodynamic Cost of System Identification
by Chris Fields
Independent Researcher, 23 rue des Lavandières, 11160 Caunes Minervois, France
Entropy 2018, 20(10), 797; https://doi.org/10.3390/e20100797 - 17 Oct 2018
Cited by 15 | Viewed by 3410
Abstract
The concept of a “system” is foundational to physics, but the question of how observers identify systems is seldom addressed. Classical thermodynamics restricts observers to finite, finite-resolution observations with which to identify the systems on which “pointer state” measurements are to be made. It is shown that system identification is at best approximate, even in a finite world, and that violations of the Leggett–Garg and Bell/CHSH (Clauser-Horne-Shimony-Holt) inequalities emerge naturally as requirements for successful system identification. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)

3 pages, 218 KiB  
Correction
Correction: Naudts, J. Quantum Statistical Manifolds. Entropy 2018, 20, 472
by Jan Naudts
Departement Fysica, Universiteit Antwerpen, Universiteitsplein 1, 2610 Wilrijk Antwerpen, Belgium
Entropy 2018, 20(10), 796; https://doi.org/10.3390/e20100796 - 17 Oct 2018
Cited by 5 | Viewed by 2219
Abstract
Section 4 of “Naudts J. Quantum Statistical Manifolds. Entropy 2018, 20, 472” contains errors. They have limited consequences for the remainder of the paper. A new version of this Section is found here. Some smaller shortcomings of the paper are taken care of as well. In particular, the proof of Theorem 3 was not complete, and is therefore amended. Also, a few missing references are added. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
13 pages, 3650 KiB  
Article
The Interaction Analysis between the Sympathetic and Parasympathetic Systems in CHF by Using Transfer Entropy Method
by Daiyi Luo 1,2,3,†, Weifeng Pan 1,2,3,†, Yifan Li 1,2,3, Kaicheng Feng 1,2,3 and Guanzheng Liu 1,2,3,*
1 School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510275, China
2 Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, School of Engineering, Sun Yat-sen University, Guangzhou 510275, China
3 Guangdong Provincial Engineering and Technology Centre of Advanced and Portable Medical Device, Guangzhou 510275, China
† These authors contributed equally to this work.
Entropy 2018, 20(10), 795; https://doi.org/10.3390/e20100795 - 16 Oct 2018
Cited by 12 | Viewed by 4369
Abstract
Congestive heart failure (CHF) is a cardiovascular disease associated with autonomic dysfunction, and sympathovagal imbalance has been reported in many studies using heart rate variability (HRV). To learn more about the dynamic interaction in the autonomic nervous system (ANS), we explored the directed interaction between the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS) with the help of transfer entropy (TE). This article included 24-h RR interval signals of 54 healthy subjects (31 males and 23 females, 61.38 ± 11.63 years old) and 44 CHF subjects (8 males and 2 females, with the gender of 19 subjects unknown; 55.51 ± 11.44 years old; 4 in class I, 8 in class II and 32 in class III~IV according to the New York Heart Association Functional Classification), obtained from the PhysioNet database and then segmented into 5-min non-overlapping epochs using cubic spline interpolation. For each segment in the normal group and the CHF group, frequency-domain features including low-frequency (LF) power, high-frequency (HF) power and the LF/HF ratio were extracted as classical estimators of autonomic activity. In the nonlinear domain, the TE between LF and HF was calculated to quantify the information exchanged between the SNS and PNS. Compared with the normal group, an extreme decrease in the LF/HF ratio (p = 0.000) and extreme increases in both TE(LF→HF) (p = 0.000) and TE(HF→LF) (p = 0.000) were observed in the CHF group. Moreover, in both the normal and CHF groups, TE(LF→HF) was much greater than TE(HF→LF) (p = 0.000), revealing that TE was able to capture the difference in the amount of directed information transfer within the ANS. The extracted features were further applied to discriminating CHF using IBM SPSS Statistics discriminant analysis. The combination of the LF/HF ratio, TE(LF→HF) and TE(HF→LF) reached the highest screening accuracy (83.7%). Our results suggest that TE can serve as a complement to the traditional LF/HF index in CHF screening. Full article
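Transfer entropy of the kind used above quantifies how much knowing the past of one signal reduces uncertainty about the next value of another. A minimal binned plug-in estimator with history length 1 (an illustrative sketch; the paper's implementation and parameters may differ):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate of TE(x -> y) in bits, history length 1,
    equal-width binning (illustrative; not the paper's exact estimator)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = list(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y_now, x_now)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yy = Counter((yn, yc) for yn, yc, _ in triples)
    c_yx = Counter((yc, xc) for _, yc, xc in triples)
    c_y = Counter(yc for _, yc, _ in triples)
    te = 0.0
    for (yn, yc, xc), cnt in c_xyz.items():
        # ratio p(y_next | y_now, x_now) / p(y_next | y_now) from raw counts
        te += (cnt / n) * np.log2(cnt * c_y[yc] / (c_yy[(yn, yc)] * c_yx[(yc, xc)]))
    return te
```

By construction TE is asymmetric: a signal that drives another yields a larger value in the driving direction than in the reverse, which is the property exploited to compare TE(LF→HF) with TE(HF→LF).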

5 pages, 171 KiB  
Editorial
Evaluation of Systems’ Irregularity and Complexity: Sample Entropy, Its Derivatives, and Their Applications across Scales and Disciplines
by Anne Humeau-Heurtier
Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), University of Angers, 62 avenue Notre-Dame du Lac, 49000 Angers, France
Entropy 2018, 20(10), 794; https://doi.org/10.3390/e20100794 - 16 Oct 2018
Cited by 8 | Viewed by 2809
25 pages, 838 KiB  
Article
An Information-Theoretic Approach to Self-Organisation: Emergence of Complex Interdependencies in Coupled Dynamical Systems
by Fernando Rosas 1,2,3,*, Pedro A.M. Mediano 4, Martín Ugarte 5 and Henrik J. Jensen 1,2,6
1 Department of Mathematics, Imperial College London, London SW7 2AZ, UK
2 Centre of Complexity Science, Imperial College London, London SW7 2AZ, UK
3 Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
4 Department of Computing, Imperial College London, London SW7 2AZ, UK
5 CoDE Department, Université Libre de Bruxelles, B-1050 Brussels, Belgium
6 Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8502, Japan
Entropy 2018, 20(10), 793; https://doi.org/10.3390/e20100793 - 16 Oct 2018
Cited by 28 | Viewed by 7186
Abstract
Self-organisation lies at the core of fundamental but still unresolved scientific questions, and holds the promise of de-centralised paradigms crucial for future technological developments. While self-organising processes have been traditionally explained by the tendency of dynamical systems to evolve towards specific configurations, or attractors, we see self-organisation as a consequence of the interdependencies that those attractors induce. Building on this intuition, in this work we develop a theoretical framework for understanding and quantifying self-organisation based on coupled dynamical systems and multivariate information theory. We propose a metric of global structural strength that identifies when self-organisation appears, and a multi-layered decomposition that explains the emergent structure in terms of redundant and synergistic interdependencies. We illustrate our framework on elementary cellular automata, showing how it can detect and characterise the emergence of complex structures. Full article
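The elementary cellular automata used as a testbed above can be evolved in a few lines (rule number and periodic boundaries are illustrative choices, not necessarily the paper's setup):

```python
import numpy as np

def eca_step(state, rule=110):
    """One synchronous update of an elementary cellular automaton,
    periodic boundaries, rule given by its Wolfram number."""
    table = [(rule >> i) & 1 for i in range(8)]      # output for neighbourhood code i
    left, right = np.roll(state, 1), np.roll(state, -1)
    code = 4 * left + 2 * state + right              # 3-cell neighbourhood as 0..7
    return np.array([table[c] for c in code])

def run(state, steps, rule=110):
    """Return the space-time history as a (steps+1, width) array."""
    history = [np.asarray(state)]
    for _ in range(steps):
        history.append(eca_step(history[-1], rule))
    return np.array(history)
```

The space-time array produced by `run` is the kind of object on which multivariate information measures can then be evaluated, cell column by cell column.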
(This article belongs to the Section Complexity)

14 pages, 5511 KiB  
Article
Identifying Systemically Important Companies by Using the Credit Network of an Entire Nation
by Sebastian Poledna 1,2, Abraham Hinteregger 2,3 and Stefan Thurner 1,2,3,4,*
1 International Institute for Applied Systems Analysis, Schlossplatz 1, 2361 Laxenburg, Austria
2 Complexity Science Hub Vienna, Josefstädter Straße 39, 1080 Vienna, Austria
3 Section for Science of Complex Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
4 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
Entropy 2018, 20(10), 792; https://doi.org/10.3390/e20100792 - 16 Oct 2018
Cited by 22 | Viewed by 5576
Abstract
The notions of systemic importance and systemic risk of financial institutions are closely related to the topology of financial liability networks. In this work, we reconstruct and analyze the financial liability network of an entire economy using data of 50,159 firms and banks. Our analysis contains 80.2% of the total liabilities of firms towards banks and all interbank liabilities in the Austrian banking system. The combination of firm-bank networks and interbank networks allows us to extend the concept of systemic risk to the real economy. In particular, the systemic importance of individual companies can be assessed, and for the first time, the financial ties between the financial and the real economy become explicitly visible. We find that firms contribute to systemic risk in similar ways as banks do. We identify a set of mid-sized companies that carry substantial systemic risk. Their default would affect up to 40% of the Austrian financial market. We find that all firms together create more systemic risk than the entire financial sector. In 2008, the total systemic risk of the Austrian interbank network amounted to only 29% of the total systemic risk of the entire financial network consisting of firms and banks. The work demonstrates that the notions of systemically important financial institutions (SIFIs) can be directly extended to firms. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
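The paper quantifies systemic risk with a DebtRank-type measure on the reconstructed firm-bank liability network. As a much simpler, hypothetical stand-in (not the measure used in the paper), the sketch below propagates a plain default cascade: a node defaults when losses from defaulted debtors exceed its equity buffer. Node names, equities, and liabilities are invented:

```python
def default_cascade(equity, liabilities, initially_defaulted):
    """
    equity: dict node -> equity buffer
    liabilities: dict (debtor, creditor) -> amount owed
    Returns the set of nodes that end up defaulted, assuming total
    loss-given-default (recovery rate zero).
    """
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for node in list(equity):
            if node in defaulted:
                continue
            # Losses: everything this node is owed by defaulted debtors.
            loss = sum(a for (d, c), a in liabilities.items()
                       if c == node and d in defaulted)
            if loss > equity[node]:
                defaulted.add(node)
                changed = True
    return defaulted

# Toy chain: firm F1 owes bank B1, which owes bank B2.
equity = {"F1": 1.0, "B1": 2.0, "B2": 5.0}
liabilities = {("F1", "B1"): 3.0, ("B1", "B2"): 4.0}
cascade = default_cascade(equity, liabilities, {"F1"})
```

Here F1's default wipes out B1 (loss 3 > equity 2), while B2 absorbs B1's default (loss 4 < equity 5), so the cascade stops at {F1, B1}.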

12 pages, 2427 KiB  
Editorial
How to Teach Heat Transfer More Systematically by Involving Entropy
by Heinz Herwig
Institute of Thermodynamics, Hamburg University of Technology, D-21073 Hamburg, Germany
Entropy 2018, 20(10), 791; https://doi.org/10.3390/e20100791 - 15 Oct 2018
Cited by 6 | Viewed by 3364
Abstract
In order to teach heat transfer systematically and with a clear physical background, it is recommended that entropy should not be ignored as a fundamental quantity. Heat transfer processes are characterized by introducing the so-called “entropic potential” of the transferred energy, and an assessment number is based on this new quantity. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
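On one reading of this proposal (hedged; the precise definitions are in the paper), the entropic potential of a transferred energy E is the entropy E/T_ambient generated once that energy is completely devalued to the ambient, and a single transfer process is assessed by the fraction of that potential it consumes. A toy computation for steady heat conduction across a finite temperature difference:

```python
def entropy_generation(Q, T_hot, T_cold):
    """Entropy generated by steady heat transfer Q from T_hot to T_cold (in K)."""
    return Q * (1.0 / T_cold - 1.0 / T_hot)

def entropic_potential(Q, T_ambient):
    """Entropy generated when energy Q is completely devalued to the ambient."""
    return Q / T_ambient

Q = 1000.0                          # J of transferred heat (arbitrary)
T_hot, T_cold, T_amb = 600.0, 400.0, 300.0
S_gen = entropy_generation(Q, T_hot, T_cold)
# Assessment number: share of the entropic potential used by this one step.
N = S_gen / entropic_potential(Q, T_amb)
```

For these (invented) temperatures the single conduction step uses a quarter of the energy's entropic potential, leaving the rest for downstream processes.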

23 pages, 969 KiB  
Article
Maximum Entropy Probability Density Principle in Probabilistic Investigations of Dynamic Systems
by Jiří Náprstek * and Cyril Fischer
Institute of Theoretical and Applied Mechanics, Prosecká 809/76, 190 00 Prague 9, Czech Republic
Entropy 2018, 20(10), 790; https://doi.org/10.3390/e20100790 - 15 Oct 2018
Cited by 7 | Viewed by 3133
Abstract
In this study, we consider a method for investigating the stochastic response of a nonlinear dynamical system affected by a random seismic process. We present the solution for the probability density of a single/multiple-degree-of-freedom (SDOF/MDOF) system with several statically stable equilibrium states and with possible jumps of the snap-through type. The system is a Hamiltonian system with weak damping excited by a system of non-stationary Gaussian white noise. The solution based on the Gibbs principle of the maximum entropy of probability could potentially be implemented in various branches of engineering. The search for the extreme of the Gibbs entropy functional is formulated as a constrained optimization problem. The secondary constraints follow from the Fokker–Planck equation (FPE) for the system considered or from the system of ordinary differential equations for the stochastic moments of the response derived from the relevant FPE. In terms of the application type, this strategy is most suitable for SDOF/MDOF systems containing polynomial-type nonlinearities. Thus, the solution links up with the customary formulation of the finite element discretization for strongly nonlinear continuous systems. Full article
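The Gibbs/Jaynes maximum-entropy principle the authors build on can be illustrated in a discrete toy setting, independent of the paper's FPE machinery: among all distributions on {1,…,6} with a prescribed mean, the entropy maximizer is exponential in the constraint, p_i ∝ exp(λ·i), with λ fixed by the mean. A bisection sketch (the target mean and brackets are arbitrary):

```python
from math import exp, log

def maxent_die(target_mean, lo=-5.0, hi=5.0, tol=1e-12):
    """Max-entropy pmf on faces 1..6 with a given mean: p_i ∝ exp(lam * i)."""
    faces = range(1, 7)

    def mean(lam):
        w = [exp(lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    # The constrained mean is strictly increasing in lam, so bisection works.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
entropy = -sum(pi * log(pi) for pi in p)
```

With a target mean above 3.5 the solution tilts toward higher faces (λ > 0), and its entropy is necessarily below log 6, the unconstrained maximum.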

13 pages, 4689 KiB  
Article
Alternation of Defects and Phase Turbulence Induces Extreme Events in an Extended Microcavity Laser
by Sylvain Barbay 1,*, Saliya Coulibaly 2 and Marcel G. Clerc 3
1 Centre de Nanosciences et de Nanotechnologies, CNRS, Université Paris-Sud, Université Paris-Saclay, Avenue de la Vauve, 91120 Palaiseau, France
2 Université de Lille, CNRS, UMR 8523-PhLAM—Physique des Lasers Atomes et Molécules, F-59000 Lille, France
3 Departamento de Física and Millennium Institute for Research in Optics, FCFM, Universidad de Chile, Casilla 487-3, 8370456 Santiago, Chile
Entropy 2018, 20(10), 789; https://doi.org/10.3390/e20100789 - 15 Oct 2018
Cited by 6 | Viewed by 3075
Abstract
Out-of-equilibrium systems exhibit complex spatiotemporal behaviors when they present a secondary bifurcation to an oscillatory instability. Here, we investigate the complex dynamics shown by a pulsing regime in an extended, one-dimensional semiconductor microcavity laser whose cavity is composed of integrated gain and saturable absorber media. This system is known, both experimentally and theoretically, to give rise to extreme events characterized by rare, high-amplitude optical pulses following the onset of spatiotemporal chaos. Based on a theoretical model, we reveal a dynamical behavior characterized by the chaotic alternation of phase and amplitude turbulence. The highest-amplitude pulses, i.e., the extreme events, are observed in the phase turbulence zones. This chaotic alternation between different turbulent regimes is in contrast to what is usually observed in a generic amplitude equation model such as the Ginzburg–Landau model. Hence, these regimes provide some insight into the poorly known properties of the complex spatiotemporal dynamics exhibited by secondary instabilities of an Andronov–Hopf bifurcation. Full article
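Extreme events in such pulse trains are commonly identified with a threshold criterion borrowed from rogue-wave studies: a pulse counts as extreme when its height exceeds twice the significant height, i.e., the mean of the highest third of pulses. A generic detection sketch on synthetic data (not the paper's laser model; the pulse statistics are invented):

```python
import random

def extreme_events(pulse_heights):
    """Return pulses exceeding twice the significant height H_s
    (mean of the highest third), a common rogue-event criterion."""
    ordered = sorted(pulse_heights, reverse=True)
    top_third = ordered[: max(1, len(ordered) // 3)]
    h_s = sum(top_third) / len(top_third)
    return [h for h in pulse_heights if h > 2.0 * h_s]

random.seed(0)
# Synthetic pulse train: many moderate pulses plus a few rare large ones.
pulses = [random.expovariate(1.0) for _ in range(1000)] + [30.0, 35.0]
events = extreme_events(pulses)
```

The two injected outliers sit far above twice the significant height of the exponential background, so they are always flagged, together with whichever background pulses happen to cross the threshold.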

13 pages, 335 KiB  
Article
A Fast Feature Selection Algorithm by Accelerating Computation of Fuzzy Rough Set-Based Information Entropy
by Xiao Zhang 1,*, Xia Liu 1 and Yanyan Yang 2
1 Department of Applied Mathematics, School of Sciences, Xi’an University of Technology, Xi’an 710048, China
2 Department of Automation, Tsinghua University, Beijing 100084, China
Entropy 2018, 20(10), 788; https://doi.org/10.3390/e20100788 - 13 Oct 2018
Cited by 8 | Viewed by 3035
Abstract
The information entropy developed by Shannon is an effective measure of uncertainty in data, and rough set theory is a useful tool for dealing with vagueness and uncertainty in data-driven computer applications. The information entropy has been extensively applied in rough set theory, and different information entropy models have been proposed for rough sets. In this paper, based on an existing feature selection method that uses a fuzzy rough set-based information entropy, a corresponding fast algorithm is provided to achieve an efficient implementation, in which the fuzzy rough set-based information entropy serving as the evaluation measure for selecting features is computed by an improved mechanism with lower complexity. The essence of the acceleration algorithm is to use iteratively reduced instances to compute the lambda-conditional entropy. Numerical experiments are further conducted to show the performance of the proposed fast algorithm, and the results demonstrate that it selects the same feature subset as its original counterpart, but in significantly less time. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
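The paper's evaluation measure is a fuzzy rough set-based conditional entropy. As a simplified classical analogue (crisp Shannon conditional entropy estimated by counting, not the fuzzy-rough version), greedy forward feature selection looks like this; the toy dataset is invented:

```python
from collections import Counter
from math import log2

def cond_entropy(X, y, feats):
    """H(y | X restricted to feats), estimated by counting co-occurrences."""
    n = len(y)
    groups = Counter(tuple(row[f] for f in feats) for row in X)
    joint = Counter((tuple(row[f] for f in feats), label)
                    for row, label in zip(X, y))
    h = 0.0
    for (key, _), c in joint.items():
        p_joint = c / n
        p_cond = c / groups[key]       # P(label | feature values)
        h -= p_joint * log2(p_cond)
    return h

def greedy_select(X, y, n_feats):
    """At each step, add the feature that most lowers H(y | selected)."""
    selected, remaining = [], list(range(len(X[0])))
    for _ in range(n_feats):
        best = min(remaining, key=lambda f: cond_entropy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 1 determines the label; features 0 and 2 are noise.
X = [[0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]
y = [0, 0, 1, 1, 0, 1]
chosen = greedy_select(X, y, 1)
```

The label-determining feature drives its conditional entropy to zero, so the greedy step picks it first; the paper's acceleration concerns how such entropies are recomputed on iteratively reduced instance sets.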

16 pages, 314 KiB  
Article
Variations à la Fourier-Weyl-Wigner on Quantizations of the Plane and the Half-Plane
by Hervé Bergeron 1,* and Jean-Pierre Gazeau 2
1 ISMO, UMR 8214 CNRS, Université Paris-Sud, 91405 Orsay, France
2 APC, UMR 7164 CNRS, Université Paris Diderot, Sorbonne Paris Cité, 75205 Paris, France
Entropy 2018, 20(10), 787; https://doi.org/10.3390/e20100787 - 13 Oct 2018
Cited by 9 | Viewed by 3224
Abstract
Any quantization linearly maps functions on a phase space to symmetric operators in a Hilbert space. Covariant integral quantization combines an operator-valued measure with the symmetry group of the phase space. Covariant means that the quantization map intertwines classical (geometric operations) and quantum (unitary transformations) symmetries. Integral means that we use all the resources of integral calculus in order to implement the method when we apply it to singular functions, or distributions, for which integral calculus is an essential ingredient. We first review this quantization scheme before revisiting the cases where symmetry covariance is described by the Weyl-Heisenberg group and the affine group, respectively, and we emphasize the fundamental role played by the Fourier transform in both cases. As an original outcome of our generalisations of the Wigner-Weyl transform, we show that many properties of the Weyl integral quantization, commonly viewed as optimal, are actually shared by a large family of integral quantizations. Full article
15 pages, 1836 KiB  
Article
A QUBO Formulation of the Stereo Matching Problem for D-Wave Quantum Annealers
by William Cruz-Santos 1, Salvador E. Venegas-Andraca 2,* and Marco Lanzagorta 3
1 CU-UAEM Valle de Chalco, Hermenegildo Galeana 3, Valle de Chalco 56615, Estado de México, Mexico
2 Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias. Ave., Eugenio Garza Sada 2501, Monterrey 64849, NL, Mexico
3 US Naval Research Laboratory, 4555 Overlook Ave., SW Washington, DC 20375, USA
Entropy 2018, 20(10), 786; https://doi.org/10.3390/e20100786 - 12 Oct 2018
Cited by 12 | Viewed by 6757
Abstract
In this paper, we propose a methodology to solve the stereo matching problem through quantum annealing optimization. Our proposal takes advantage of the existing Min-Cut/Max-Flow network formulation of computer vision problems. Based on this network formulation, we construct a quadratic pseudo-Boolean function and then optimize it using D-Wave quantum annealing technology. Experimental validation on two kinds of stereo image pairs, random-dot stereograms and gray-scale images, shows that our methodology is effective. Full article
(This article belongs to the Section Quantum Information)
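A QUBO instance minimizes xᵀQx over binary vectors x; the annealer samples low-energy states, which for tiny instances can be checked by exhaustive search. The sketch below is a generic verification baseline with an invented 3-variable Q, not the authors' stereo formulation:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy x^T Q x of a binary assignment x for QUBO matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustively find the minimum-energy binary vector (tiny n only)."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = qubo_energy(Q, x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy QUBO: diagonal biases favour turning variables on; the off-diagonal
# coupling penalizes activating x0 and x1 together.
Q = [[-1.0, 2.0, 0.0],
     [0.0, -0.5, 0.0],
     [0.0, 0.0, -2.0]]
solution, energy = brute_force_qubo(Q)
```

The coupling makes (1, 0, 1) the unique ground state at energy -3; on hardware, the same Q would be submitted to the annealer instead of enumerated.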

17 pages, 3350 KiB  
Article
Colombian Export Capabilities: Building the Firms-Products Network
by Matteo Bruno 1, Fabio Saracco 1,*, Tiziano Squartini 1 and Marco Dueñas 2
1 IMT School for Advanced Studies, P.zza S. Francesco 19, 55100 Lucca, Italy
2 Department of Economics, International Trade and Social Policy, Universidad de Bogotá Jorge Tadeo Lozano, Cra. 4, 22-61, 110311 Bogotá, Cundinamarca, Colombia
Entropy 2018, 20(10), 785; https://doi.org/10.3390/e20100785 - 12 Oct 2018
Cited by 10 | Viewed by 3786
Abstract
In this paper, we analyse the bipartite Colombian firms-products network over a period of five years, from 2010 to 2014. Our analysis depicts a strongly modular system, with several groups of firms specializing in the export of specific categories of products. These clusters have been detected by running the bipartite variant of traditional modularity maximization, revealing a bi-modular structure. Interestingly, this finding is refined by applying a recently proposed algorithm for projecting bipartite networks onto the layer of interest and then running the Louvain algorithm on the resulting monopartite representations. Important structural differences emerge upon comparing the Colombian firms-products network with the World Trade Web; in particular, the bipartite representation of the latter is not characterized by a similar block structure, as modularity maximization fails to reveal (bipartite) node clusters. This points out that economic systems behave differently at different scales: while countries tend to diversify their production—potentially exporting a large number of different products—firms specialize in exporting (substantially very limited) baskets of basically homogeneous products. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
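The projection step can be sketched in plain Python: from a binary firms×products matrix, the monopartite firm network weights each pair of firms by the number of products they co-export. The paper's actual algorithm additionally validates these weights against a statistical null model before running Louvain; the toy matrix here is invented:

```python
def project_rows(M):
    """Project a binary bipartite matrix onto its row layer:
    weight(r, s) = number of columns where both rows have a 1."""
    n = len(M)
    weights = {}
    for r in range(n):
        for s in range(r + 1, n):
            w = sum(a & b for a, b in zip(M[r], M[s]))
            if w > 0:
                weights[(r, s)] = w
    return weights

# Rows = firms, columns = product categories (toy, invented).
M = [[1, 1, 0, 0],   # firm 0
     [1, 1, 0, 0],   # firm 1: same basket as firm 0
     [0, 0, 1, 1],   # firm 2
     [0, 0, 1, 1]]   # firm 3: same basket as firm 2
W = project_rows(M)
```

The projection of this block-structured toy matrix splits cleanly into two firm communities, {0, 1} and {2, 3}, which is the kind of modular structure the paper reports for the Colombian data.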

28 pages, 353 KiB  
Article
Entropy Inequalities for Lattices
by Peter Harremoës
Copenhagen Business College, Nørre Voldgade 34, 1358 Copenhagen K, Denmark
Current address: Rønne Alle 1, st., 2860 Søborg, Denmark.
Entropy 2018, 20(10), 784; https://doi.org/10.3390/e20100784 - 12 Oct 2018
Cited by 2 | Viewed by 3886
Abstract
We study entropy inequalities for variables that are related by functional dependencies. Although the powerset on four variables is the smallest Boolean lattice with non-Shannon inequalities, there exist lattices with many more variables where the Shannon inequalities are sufficient. We search for conditions that exclude the existence of non-Shannon inequalities. The existence of non-Shannon inequalities is related to the question of whether a lattice is isomorphic to a lattice of subgroups of a group. In order to formulate and prove the results, one has to bridge lattice theory, group theory, the theory of functional dependencies, and the theory of conditional independence. It is demonstrated that the Shannon inequalities are sufficient for planar modular lattices. The proof applies a gluing technique that uses the fact that if the Shannon inequalities are sufficient for the pieces, then they are also sufficient for the whole lattice. It is conjectured that the Shannon inequalities are sufficient if and only if the lattice does not contain a special lattice as a sub-semilattice. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
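The Shannon inequalities are exactly the non-negativity of conditional mutual information, I(A;B|C) ≥ 0, for all variable subsets. For any concrete joint distribution this can be verified numerically; the check below is generic (it is not the paper's lattice construction) and uses the XOR distribution, where conditioning reveals dependence:

```python
from math import log2

def H(pmf, vars_):
    """Entropy of the marginal on the given coordinate indices.
    pmf maps outcome tuples to probabilities."""
    marg = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[v] for v in vars_)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def cond_mutual_info(pmf, a, b, c):
    """I(A;B|C) = H(AC) + H(BC) - H(ABC) - H(C), non-negative by Shannon."""
    return H(pmf, a + c) + H(pmf, b + c) - H(pmf, a + b + c) - H(pmf, c)

# Joint pmf of (X, Y, Z) with Z = X XOR Y, X and Y uniform and independent.
pmf = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
i_xy = cond_mutual_info(pmf, [0], [1], [])     # unconditional I(X;Y)
i_xy_given_z = cond_mutual_info(pmf, [0], [1], [2])
```

Here X and Y are independent (I(X;Y) = 0) yet fully dependent given Z (I(X;Y|Z) = 1 bit); both values are non-negative, as every Shannon inequality demands.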

15 pages, 915 KiB  
Article
A New and Stable Estimation Method of Country Economic Fitness and Product Complexity
by Vito D. P. Servedio 1,*, Paolo Buttà 2, Dario Mazzilli 3, Andrea Tacchella 4 and Luciano Pietronero 3
1 Complexity Science Hub Vienna, Josefstätter-Strasse 39, A-1080 Vienna, Austria
2 Department of Mathematics, Sapienza University of Rome, P.le Aldo Moro 5, 00185 Roma, Italy
3 Physics Department, Sapienza University of Rome, P.le Aldo Moro 5, 00185 Roma, Italy
4 Institute for Complex Systems, CNR, Via dei Taurini 19, 00185 Rome, Italy
Entropy 2018, 20(10), 783; https://doi.org/10.3390/e20100783 - 12 Oct 2018
Cited by 19 | Viewed by 3970
Abstract
We present a new metric estimating the fitness of countries and the complexity of products by exploiting a non-linear, non-homogeneous map applied to publicly available information on the goods exported by a country. The non-homogeneous terms guarantee both convergence and stability. After a suitable rescaling of the relevant quantities, the non-homogeneous terms are eventually set to zero, so that this new metric is parameter-free. The new map almost reproduces the results of the original homogeneous metrics already defined in the literature and allows for an approximate analytic solution in the case of actual binarized matrices based on the Revealed Comparative Advantage (RCA) indicator. This solution is connected with a new quantity describing the neighborhood of nodes in bipartite graphs, which here represent the relations between countries and exported products. Moreover, we define a new indicator of country net-efficiency, quantifying how efficiently a country invests in capabilities able to generate innovative, complex, high-quality products. Eventually, we demonstrate analytically the local convergence of the algorithm involved. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
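For reference, the original homogeneous fitness-complexity map that the new metric is benchmarked against iterates F_c = Σ_p M_cp Q_p and Q_p = 1 / Σ_c M_cp (1/F_c), normalizing after every step. The sketch below runs it on an invented toy export matrix; the paper's non-homogeneous variant adds regularizing terms that this sketch omits:

```python
def fitness_complexity(M, iterations=100):
    """Homogeneous fitness-complexity iteration on a binary matrix M
    (rows = countries, columns = products), mean-normalized each step."""
    nc, npr = len(M), len(M[0])
    F = [1.0] * nc
    Q = [1.0] * npr
    for _ in range(iterations):
        # Fitness: sum of the complexities of the products a country exports.
        F_new = [sum(M[c][p] * Q[p] for p in range(npr)) for c in range(nc)]
        # Complexity: dominated by the least-fit exporter of the product.
        Q_new = [1.0 / sum(M[c][p] / F[c] for c in range(nc)) for p in range(npr)]
        # Normalize by the mean so the iteration stays scale-free.
        F = [f * nc / sum(F_new) for f in F_new]
        Q = [q * npr / sum(Q_new) for q in Q_new]
    return F, Q

# Toy export matrix: country 0 exports everything, country 2 only product 2.
M = [[1, 1, 1],
     [1, 1, 0],
     [0, 0, 1]]
F, Q = fitness_complexity(M)
```

As expected, the fully diversified country ends up fittest, and the ordering of fitnesses follows the nestedness of the export baskets; binarization of real trade data is done with RCA ≥ 1 before applying the map.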

19 pages, 1613 KiB  
Article
From Nash Equilibria to Chain Recurrent Sets: An Algorithmic Solution Concept for Game Theory
by Christos Papadimitriou 1,*,† and Georgios Piliouras 2,*,†
1 Computer Science Department, Columbia University, New York, NY 10027, USA
2 Engineering Systems and Design (ESD), Singapore University of Technology and Design, Singapore 487372, Singapore
This paper is an extended version of our paper published in the 2016 ACM Conference on Innovations in Theoretical Computer Science (ITCS), Cambridge, MA, USA, 14–16 January 2016.
Entropy 2018, 20(10), 782; https://doi.org/10.3390/e20100782 - 12 Oct 2018
Cited by 12 | Viewed by 5400
Abstract
In 1950, Nash proposed a natural equilibrium solution concept for games, hence called the Nash equilibrium, and proved that all finite games have at least one. The proof is through a simple yet ingenious application of Brouwer's (or, in another version, Kakutani's) fixed point theorem, the most sophisticated result of his era's topology—in fact, recent algorithmic work has established that Nash equilibria are computationally equivalent to fixed points. In this paper, we propose a new class of universal non-equilibrium solution concepts arising from an important theorem in the topology of dynamical systems that was unavailable to Nash. This approach starts with both a game and a learning dynamics, defined over mixed strategies. The Nash equilibria are fixpoints of the dynamics, but the system behavior is captured by an object far more general than the Nash equilibrium, known in dynamical systems theory as the chain recurrent set. Informally, once we focus on this solution concept—this notion of "the outcome of the game"—every game behaves like a potential game, with the dynamics converging to these states. In other words, unlike Nash equilibria, this solution concept is algorithmic in the sense that it has a constructive proof of existence. We characterize this solution for simple benchmark games under replicator dynamics, arguably the best-known evolutionary dynamics in game theory. For (weighted) potential games, the new concept coincides with the fixpoints/equilibria of the dynamics. However, in (variants of) zero-sum games with fully mixed (i.e., interior) Nash equilibria, it covers the whole state space, as the dynamics satisfy specific information-theoretic constants of motion. We discuss numerous novel computational, as well as structural and combinatorial, questions raised by this chain recurrence conception of games. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
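Replicator dynamics, the benchmark learning dynamics above, can be simulated directly: each strategy's share grows in proportion to how much its payoff exceeds the population average, ẋ_i = x_i((Ax)_i − xᵀAx). In zero-sum rock-paper-scissors the interior equilibrium is not approached; trajectories cycle around it. A small Euler sketch (step size and horizon are arbitrary choices):

```python
def replicator_step(x, A, dt):
    """One Euler step of replicator dynamics x_i' = x_i((Ax)_i - x.Ax)."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * Ax[i] for i in range(n))          # average payoff x.Ax
    x_new = [x[i] + dt * x[i] * (Ax[i] - avg) for i in range(n)]
    s = sum(x_new)                                     # renormalize Euler drift
    return [v / s for v in x_new]

# Zero-sum rock-paper-scissors payoff matrix.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
x = [0.5, 0.3, 0.2]                                    # initial mixed strategy
for _ in range(5000):
    x = replicator_step(x, A, dt=0.01)
```

The trajectory stays in the interior of the simplex and keeps cycling rather than converging, which is the behavior the chain recurrent set, unlike the Nash equilibrium alone, is designed to capture.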

19 pages, 2139 KiB  
Article
Quantum-Inspired Evolutionary Approach for the Quadratic Assignment Problem
by Wojciech Chmiel * and Joanna Kwiecień
Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland
Entropy 2018, 20(10), 781; https://doi.org/10.3390/e20100781 - 12 Oct 2018
Cited by 15 | Viewed by 4457
Abstract
The paper focuses on applying a quantum-inspired evolutionary algorithm to determine the minimal assignment cost in the quadratic assignment problem. The idea behind the paper is to present how the algorithm has to be adapted to this problem, including the crossover and mutation operators, and how quantum principles are introduced into particular procedures. The results show that the performance of our approach in terms of converging to the best solutions is satisfactory. Moreover, we present the influence of selected parameters of the approach on the quality of the obtained solutions. Full article
(This article belongs to the Special Issue Quantum Walks and Related Issues)
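The objective being minimized is the standard QAP cost: given a flow matrix F and a distance matrix D, an assignment π costs Σ_{i,j} F[i][j]·D[π(i)][π(j)]. A brute-force baseline for tiny instances (the invented toy data below stands in for a real benchmark; this is not the authors' quantum-inspired algorithm):

```python
from itertools import permutations

def qap_cost(F, D, perm):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def qap_brute_force(F, D):
    """Exhaustive minimum over all assignments (feasible only for tiny n)."""
    n = len(F)
    return min((qap_cost(F, D, p), p) for p in permutations(range(n)))

# Toy instance: heavy flow between facilities 0 and 1, so the optimum
# places them at the two mutually closest locations.
F = [[0, 10, 1],
     [10, 0, 1],
     [1, 1, 0]]
D = [[0, 1, 5],
     [1, 0, 5],
     [5, 5, 0]]
best_cost, best_perm = qap_brute_force(F, D)
```

Any heuristic, evolutionary or quantum-inspired, would be evaluated against this same cost function; exhaustive search merely certifies the optimum on instances small enough to enumerate.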

32 pages, 3206 KiB  
Article
Bouncing Oil Droplets, de Broglie’s Quantum Thermostat, and Convergence to Equilibrium
by Mohamed Hatifi 1,*, Ralph Willox 2, Samuel Colin 3 and Thomas Durt 1
1 Aix Marseille Université, CNRS, Centrale Marseille, Institut Fresnel UMR 7249, 13013 Marseille, France
2 Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan
3 Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil
Entropy 2018, 20(10), 780; https://doi.org/10.3390/e20100780 - 11 Oct 2018
Cited by 6 | Viewed by 5764
Abstract
Recently, the properties of bouncing oil droplets, also known as “walkers,” have attracted much attention because they are thought to offer a gateway to a better understanding of quantum behavior. They indeed constitute a macroscopic realization of wave-particle duality, in the sense that their trajectories are guided by a self-generated surrounding wave. The aim of this paper is to try to describe walker phenomenology in terms of de Broglie–Bohm dynamics and of a stochastic version thereof. In particular, we first study how a stochastic modification of the de Broglie pilot-wave theory, à la Nelson, affects the process of relaxation to quantum equilibrium, and we prove an H-theorem for the relaxation to quantum equilibrium under Nelson-type dynamics. We then compare the onset of equilibrium in the stochastic and the de Broglie–Bohm approaches and we propose some simple experiments by which one can test the applicability of our theory to the context of bouncing oil droplets. Finally, we compare our theory to actual observations of walker behavior in a 2D harmonic potential well. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

19 pages, 2822 KiB  
Article
From Physics to Bioengineering: Microbial Cultivation Process Design and Feeding Rate Control Based on Relative Entropy Using Nuisance Time
by Renaldas Urniezius *, Vytautas Galvanauskas, Arnas Survyla, Rimvydas Simutis and Donatas Levisauskas
Department of Automation, Kaunas University of Technology, Kaunas LT-51367, Lithuania
Entropy 2018, 20(10), 779; https://doi.org/10.3390/e20100779 - 11 Oct 2018
Cited by 12 | Viewed by 5057
Abstract
For historical reasons, namely industrial knowledge of reproducibility and restrictions imposed by regulations, open-loop feeding control approaches dominate in industrial fed-batch cultivation processes. In this study, a generic gray-box biomass modeling procedure uses relative entropy as the key quantity by which the prior distribution approaches the posterior distribution along the multivariate path of Lagrange multipliers, for which the notion of a nuisance time is introduced. The ultimate purpose of this study was to develop a numerical semi-global convex optimization procedure dedicated to the calculation of feeding-rate time profiles during fed-batch cultivation processes. The proposed numerical semi-global convex optimization of relative entropy is restricted neither to the gray-box model nor to the bioengineering application. From the bioengineering application perspective, the proposed bioprocess design technique benefits both regular feed-forward control and advanced adaptive control systems, in which a model for biomass growth prediction is compulsory. After identification of the gray-box model parameters, the options and alternatives in controllable industrial biotechnological processes are described. The main aim of this work is to achieve high reproducibility, controllability, and the desired process performance. Glucose concentration measurements, which were used for the development of the model, become unnecessary for the development of the desired microbial cultivation process. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
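Relative entropy (the Kullback-Leibler divergence) between prior and posterior distributions is the quantity being driven along the path of Lagrange multipliers. For discrete distributions it is a one-liner; the sketch below is a generic utility with invented numbers, not the paper's gray-box procedure:

```python
from math import log

def relative_entropy(p, q):
    """D(p || q) = sum p_i log(p_i / q_i); requires q_i > 0 wherever p_i > 0."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical prior and posterior over four discrete states.
prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.4, 0.3, 0.2, 0.1]
d = relative_entropy(posterior, prior)
```

The divergence is zero exactly when the two distributions coincide and grows as the posterior departs from the prior, which is what makes it a natural objective along the optimization path.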
