
Table of Contents

Entropy, Volume 22, Issue 3 (March 2020) – 115 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Many efforts have been made to develop models and parameters to predict the glass-forming ability [...]
Open Access Review
New Invariant Expressions in Chemical Kinetics
Entropy 2020, 22(3), 373; https://doi.org/10.3390/e22030373 - 24 Mar 2020
Viewed by 295
Abstract
This paper presents a review of our original results obtained during the last decade. These results have been found theoretically for classical mass-action-law models of chemical kinetics and justified experimentally. In contrast with the traditional invariants, they relate to a special battery of kinetic experiments, not a single experiment. Two types of invariants are distinguished and described in detail: thermodynamic invariants, i.e., special combinations of kinetic dependences that yield the equilibrium constants, or simple functions of the equilibrium constants; and “mixed” kinetico-thermodynamic invariants, which are functions of both the equilibrium constants and non-thermodynamic ratios of kinetic coefficients. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)

Open Access Review
On the Evidence of Thermodynamic Self-Organization during Fatigue: A Review
Entropy 2020, 22(3), 372; https://doi.org/10.3390/e22030372 - 24 Mar 2020
Viewed by 322
Abstract
In this review paper, the evidence for and applications of thermodynamic self-organization are reviewed for metals, typically single crystals, subjected to cyclic loading. The theory of self-organization in thermodynamic processes far from equilibrium is a cutting-edge theme for the development of a new generation of materials. It can be interpreted as the formation of globally coherent patterns, configurations and orderliness through local interactions, via a “cascade evolution of dissipative structures”. Non-equilibrium thermodynamics, entropy, and dissipative structures connected to the self-organization phenomenon (patterning, orderliness) are briefly discussed. Selected examples of evidence are reviewed in detail to show how thermodynamic self-organization can emerge from a non-equilibrium process: fatigue. Evidence including dislocation density evolution, stored energy, temperature, and acoustic signals can be considered a signature of self-organization. Most of the attention is given to the analogy between persistent slip bands (PSBs) and self-organization in single-crystal metals. Some aspects of the stability of dislocations during fatigue of single crystals are discussed using the formulation of excess entropy generation. Full article
(This article belongs to the Special Issue Review Papers for Entropy)

Open Access Editorial
Entropy in Foundations of Quantum Physics
Entropy 2020, 22(3), 371; https://doi.org/10.3390/e22030371 - 24 Mar 2020
Viewed by 360
Abstract
Entropy can be used in studies on foundations of quantum physics in many different ways, each of them using different properties of this mathematical object [...] Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)
Open Access Article
Theory, Analysis, and Applications of the Entropic Lattice Boltzmann Model for Compressible Flows
Entropy 2020, 22(3), 370; https://doi.org/10.3390/e22030370 - 24 Mar 2020
Viewed by 288
Abstract
The entropic lattice Boltzmann method for the simulation of compressible flows is studied in detail and new opportunities for extending operating range are explored. We address limitations on the maximum Mach number and temperature range allowed for a given lattice. Solutions to both these problems are presented by modifying the original lattices without increasing the number of discrete velocities and without altering the numerical algorithm. In order to increase the Mach number, we employ shifted lattices while the magnitude of lattice speeds is increased in order to extend the temperature range. Accuracy and efficiency of the shifted lattices are demonstrated with simulations of the supersonic flow field around a diamond-shaped and NACA0012 airfoil, the subsonic, transonic, and supersonic flow field around the Busemann biplane, and the interaction of vortices with a planar shock wave. For the lattices with extended temperature range, the model is validated with the simulation of the Richtmyer–Meshkov instability. We also discuss some key ideas of how to reduce the number of discrete speeds in three-dimensional simulations by pruning of the higher-order lattices, and introduce a new construction of the corresponding guided equilibrium by entropy minimization. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)

Open Access Article
PGNet: Pipeline Guidance for Human Key-Point Detection
Entropy 2020, 22(3), 369; https://doi.org/10.3390/e22030369 - 24 Mar 2020
Viewed by 213
Abstract
Human key-point detection is a challenging research field in computer vision. Convolutional neural models limit the number of parameters and exploit local structure, and have made great progress in salient target detection and key-point detection. However, the features extracted by shallow layers lack semantic information, while the features extracted by deep layers contain rich semantic information but lack spatial information, which results in an imbalance of information and of feature extraction. As network structures grow more complex and the amount of computation increases, the balance between communication time and computation time becomes important. Building on improved hardware, network running time can be greatly reduced by optimizing the network structure and the data operations; however, as networks become deeper, the communication cost between layers also grows, so communication overhead has become a recent focus of attention alongside computing capacity. We propose a novel network structure, PGNet, which contains three parts: a pipeline guidance strategy (PGS), a Cross-Distance-IoU loss (CIoU), and a Cascaded Fusion Feature Model (CFFM). Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
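The Cross-Distance-IoU loss named in the abstract builds on the Distance-IoU (DIoU) idea: penalize both poor overlap and the distance between box centres. A generic DIoU sketch follows (an illustration only, not the paper's CIoU variant; the function name and box convention are assumptions):

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss for axis-aligned boxes given as (x1, y1, x2, y2).

    loss = 1 - IoU + d^2 / c^2, where d is the distance between box
    centres and c the diagonal of the smallest enclosing box.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union
    # Squared distance between box centres.
    d2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    # Squared diagonal of the smallest enclosing box.
    c2 = ((max(ax2, bx2) - min(ax1, bx1)) ** 2
          + (max(ay2, by2) - min(ay1, by1)) ** 2)
    return 1 - iou + d2 / c2

print(diou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0 for identical boxes
```

Unlike plain IoU, the distance term still provides a gradient when the predicted and target boxes barely overlap.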

Open Access Article
Entropy as a Measure of Attractiveness and Socioeconomic Complexity in Rio de Janeiro Metropolitan Area
Entropy 2020, 22(3), 368; https://doi.org/10.3390/e22030368 - 23 Mar 2020
Viewed by 437
Abstract
Defining and measuring spatial inequalities across the urban environment remains a complex and elusive task which has been facilitated by the increasing availability of large geolocated databases. In this study, we rely on a mobile phone dataset and an entropy-based metric to measure the attractiveness of a location in the Rio de Janeiro Metropolitan Area (Brazil) as the diversity of visitors’ location of residence. The results show that the attractiveness of a given location measured by entropy is an important descriptor of the socioeconomic status of the location, and can thus be used as a proxy for complex socioeconomic indicators. Full article
(This article belongs to the Special Issue Information Theory for Human and Social Processes)
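The attractiveness measure described above — the diversity of visitors' locations of residence — is essentially the Shannon entropy of the home-location distribution of a venue's visitors. A minimal sketch (illustrative only, not the authors' code; function and place names are hypothetical):

```python
from collections import Counter
from math import log2

def attractiveness_entropy(visitor_homes):
    """Shannon entropy (bits) of visitors' home locations at one venue."""
    counts = Counter(visitor_homes)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A venue drawing visitors from many areas scores higher than one
# drawing only from its own neighbourhood.
diverse = attractiveness_entropy(["A", "B", "C", "D"])  # 2.0 bits
local = attractiveness_entropy(["A", "A", "A", "A"])    # 0.0 bits
```

High-entropy venues attract a socially mixed crowd, which is the property the study correlates with socioeconomic indicators.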

Open Access Article
Compound Fault Diagnosis of Rolling Bearing Based on Singular Negentropy Difference Spectrum and Integrated Fast Spectral Correlation
Entropy 2020, 22(3), 367; https://doi.org/10.3390/e22030367 - 23 Mar 2020
Viewed by 278
Abstract
Compound fault diagnosis is challenging due to the complexity, diversity and non-stationary characteristics of compound mechanical faults. In this paper, a novel compound fault separation method based on the singular negentropy difference spectrum (SNDS) and integrated fast spectral correlation (IFSC) is proposed. Firstly, the original signal was de-noised by SNDS, which improves the noise reduction of the singular difference spectrum by introducing negentropy. Secondly, the de-noised signal was analyzed by fast spectral correlation. Finally, IFSC took the fourth-order energy as the index to determine the resonance band and separate the features of the different single faults. The proposed method was applied to simulated compound signals and experimental vibration signals; the results show that it performs excellently in separating compound faults of rolling bearings. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)

Open Access Article
Optimal Digital Implementation of Fractional-Order Models in a Microcontroller
Entropy 2020, 22(3), 366; https://doi.org/10.3390/e22030366 - 23 Mar 2020
Viewed by 339
Abstract
The growing number of operations in implementations of the non-local fractional differentiation operator is cumbersome for real applications with strict performance and memory storage requirements. This demands the use of one of the available approximation methods. In this paper, the classic integer-order (IO) and fractional-order (FO) models of a brushless DC (BLDC) micromotor mounted on a steel rotating arm are analyzed, and then the models are discretized and efficiently implemented in a microcontroller (MCU). Two different methods for the FO model are examined, including the approximation of the fractional-order operator s^ν (ν ∈ ℝ) using the Oustaloup recursive filter and the numerical evaluation of the fractional differintegral operator based on the Grünwald–Letnikov definition and the Short Memory Principle. The models are verified against the results of several experiments conducted on an ARM Cortex-M7-based STM32F746ZG unit. Additionally, some software optimization techniques for the Cortex-M microcontroller family are discussed. The described steps are universal and can easily be adapted to any other microcontroller. The values of the integral absolute error (IAE) and integral square error (ISE) performance indices, calculated from simulations performed in MATLAB, are used to evaluate accuracy. Full article
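The Grünwald–Letnikov evaluation with the Short Memory Principle mentioned in the abstract can be sketched in a few lines (an illustrative stand-in, not the authors' MCU implementation; the function and parameter names are assumptions):

```python
def gl_fractional_derivative(samples, nu, h, memory=None):
    """Grünwald–Letnikov approximation of the order-nu derivative of a
    uniformly sampled signal with step h. `memory` truncates the sum to
    the most recent terms (the Short Memory Principle)."""
    n = len(samples)
    L = n if memory is None else min(memory, n)
    # Coefficients w_j = (-1)^j * C(nu, j), built by the standard recursion
    # w_0 = 1, w_j = w_{j-1} * (1 - (nu + 1) / j).
    w = [1.0]
    for j in range(1, L):
        w.append(w[-1] * (1 - (nu + 1) / j))
    out = []
    for k in range(n):
        terms = min(k + 1, L)
        out.append(sum(w[j] * samples[k - j] for j in range(terms)) / h ** nu)
    return out

# Order 1 reduces to the backward difference: d/dt of f(t) = t is ~1.
d = gl_fractional_derivative([0.1 * i for i in range(10)], 1.0, 0.1)
# d[5] ≈ 1.0
```

On an MCU, a fixed `memory` bounds both the per-step work and the history buffer, which is exactly the trade-off the Short Memory Principle formalizes.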

Open Access Article
Some Remarks about Entropy of Digital Filtered Signals
Entropy 2020, 22(3), 365; https://doi.org/10.3390/e22030365 - 23 Mar 2020
Viewed by 233
Abstract
The finite numerical resolution of digital number representation has an impact on the properties of filters. Much effort has been made to develop efficient digital filters by investigating the effects on the frequency response. However, less attention has been paid to the influence of finite precision on the entropy of digitally filtered signals. To contribute in this direction, this manuscript presents some remarks about the entropy of filtered signals. Three types of filters are investigated: Butterworth, Chebyshev, and elliptic. Using a boundary technique, the parameters of the filters are evaluated according to word lengths of 16 or 32 bits. It is shown that filtered signals have their entropy increased even though the filters are linear. A significant positive correlation (p < 0.05) was observed between filter order and the Shannon entropy of the signal filtered with the elliptic filter. Compared to the signal-to-noise ratio, entropy seems more efficient at detecting the increase of noise in a filtered signal. Such knowledge can be used as an additional criterion for designing digital filters. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
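The basic measurement in that study — comparing the Shannon entropy of a signal before and after linear filtering — can be reproduced with a histogram estimator. The sketch below uses a toy moving-average FIR filter as a stand-in for the paper's Butterworth/Chebyshev/elliptic IIR filters and makes no claim about the direction of the entropy change; all names are hypothetical:

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(samples, bins=16):
    """Histogram-based Shannon entropy (bits) of a real-valued signal."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def moving_average(x, taps=5):
    """Toy linear FIR low-pass; a stand-in for the paper's IIR filters."""
    return [sum(x[max(0, i - taps + 1): i + 1]) / min(taps, i + 1)
            for i in range(len(x))]

random.seed(0)
signal = [random.gauss(0, 1) for _ in range(1000)]
filtered = moving_average(signal)
# The entropy of the filtered signal can differ from the input's even
# though the filter itself is linear.
```

With a fixed-point word length, quantizing `filtered` before the entropy estimate would expose the finite-precision effect the paper studies.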

Open Access Article
Research on the Node Importance of a Weighted Network Based on the K-Order Propagation Number Algorithm
Entropy 2020, 22(3), 364; https://doi.org/10.3390/e22030364 - 22 Mar 2020
Viewed by 325
Abstract
To describe both the global and local characteristics of a network more comprehensively, we propose the weighted K-order propagation number (WKPN) algorithm, which extracts disease propagation based on the network topology to evaluate node importance. Each node is set as the source of infection, and the total number of infected nodes after a propagation time K is defined as the K-order propagation number. Simulations on a symmetric network with bridge nodes indicated that the WKPN algorithm was more effective at evaluating these features. A deliberate attack strategy, which attacks the network nodes in order of importance from high to low, was employed to evaluate the WKPN algorithm on real networks. Compared with the other methods tested, the results demonstrate that fewer of the nodes ranked most important by the K-order propagation number algorithm are needed to fully break down the network structure. Full article
(This article belongs to the Special Issue Entropic Forces in Complex Systems)
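For the unweighted case, the K-order propagation number described above reduces to a depth-limited breadth-first count: the number of nodes reachable from the source within K steps. A minimal sketch (illustrative only; the paper's weighted variant additionally folds edge weights into the propagation):

```python
from collections import deque

def k_order_propagation_number(adj, source, k):
    """Number of nodes infected within k propagation steps from `source`,
    including the source itself (unweighted breadth-first count)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # propagation time exhausted along this branch
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return len(seen)

# Line graph 0-1-2-3: from node 0, one step reaches {0, 1}.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_order_propagation_number(line, 0, 1))  # 2
print(k_order_propagation_number(line, 0, 3))  # 4
```

Ranking nodes by this count and attacking them from high to low reproduces the deliberate attack strategy used in the evaluation.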

Open Access Article
Numerical Analysis on Natural Convection Heat Transfer in a Single Circular Fin-Tube Heat Exchanger (Part 1): Numerical Method
Entropy 2020, 22(3), 363; https://doi.org/10.3390/e22030363 - 21 Mar 2020
Viewed by 325
Abstract
In this research, unsteady three-dimensional incompressible Navier–Stokes equations are solved to simulate experiments with the Boussinesq approximation and validate the proposed numerical model for the design of a circular fin-tube heat exchanger. Unsteady time marching is proposed for a time sweeping analysis of various Rayleigh numbers. The accuracy of the natural convection data of a single horizontal circular tube with the proposed numerical method can be guaranteed when the Rayleigh number based on the tube diameter exceeds 400, which is regarded as the limit of numerical errors due to instability. Moreover, the effective limit for a circular fin-tube heat exchanger is reached when the Rayleigh number based on the fin gap size (Ra_s) is equal to or exceeds 100. This is because at low Rayleigh numbers, the air gap between the fins is isolated and rarely affected by natural convection of the outer air, so the fluid there acts as a heat resistance. Thus, the fin acts favorably when Ra_s exceeds 100. Full article
(This article belongs to the Section Thermodynamics)

Open Access Review
Particle Swarm Optimisation: A Historical Review Up to the Current Developments
Entropy 2020, 22(3), 362; https://doi.org/10.3390/e22030362 - 21 Mar 2020
Viewed by 327
Abstract
The Particle Swarm Optimisation (PSO) algorithm was inspired by the social and biological behaviour of bird flocks searching for food sources. In this nature-based algorithm, individuals are referred to as particles and fly through the search space seeking the global best position that minimises (or maximises) a given problem. Today, PSO is one of the most well-known and widely used swarm intelligence algorithms and metaheuristic techniques, because of its simplicity and its applicability to a wide range of problems. However, in-depth studies of the algorithm have led to the detection and identification of a number of problems with it, especially convergence and performance issues. Consequently, a myriad of variants, enhancements and extensions to the original version of the algorithm, developed and introduced in the mid-1990s, have been proposed, especially in the last two decades. In this article, a systematic literature review of those variants and improvements is made, which also covers the hybridisation and parallelisation of the algorithm and its extensions to other classes of optimisation problems, taking into consideration the most important ones. These approaches and improvements are appropriately summarised, organised and presented, in order to facilitate the identification of the most appropriate PSO variant for a particular application. Full article
(This article belongs to the Special Issue Intelligent Tools and Applications in Engineering and Mathematics)
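The baseline algorithm that all the surveyed variants extend fits in a short loop: each particle's velocity is pulled toward its personal best and the global best, weighted by inertia `w` and acceleration coefficients `c1`, `c2`. A minimal global-best PSO sketch (the parameter values are common textbook defaults, not ones prescribed by this review):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO minimising f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive (personal best) + social (global best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise the 2-D sphere function; the swarm converges near the origin.
best, value = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

The convergence and stagnation issues the review catalogues stem from exactly these update equations, which is why so many variants adapt `w`, the topology, or the velocity bounds.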

Open Access Article
Information Graphs Incorporating Predictive Values of Disease Forecasts
Entropy 2020, 22(3), 361; https://doi.org/10.3390/e22030361 - 20 Mar 2020
Viewed by 445
Abstract
Diagrammatic formats are useful for summarizing the processes of evaluation and comparison of forecasts in plant pathology and other disciplines where decisions about interventions for the purpose of disease management are often based on a proxy risk variable. We describe a new diagrammatic format for disease forecasts with two categories of actual status and two categories of forecast. The format displays relative entropies, functions of the predictive values that characterize expected information provided by disease forecasts. The new format arises from a consideration of earlier formats with underlying information properties that were previously unexploited. The new diagrammatic format requires no additional data for calculation beyond those used for the calculation of a receiver operating characteristic (ROC) curve. While an ROC curve characterizes a forecast in terms of sensitivity and specificity, the new format described here characterizes a forecast in terms of relative entropies based on predictive values. Thus it is complementary to ROC methodology in its application to the evaluation and comparison of forecasts. Full article
(This article belongs to the Special Issue Applications of Information Theory to Epidemiology)

Open Access Article
Bundled Causal History Interaction
Entropy 2020, 22(3), 360; https://doi.org/10.3390/e22030360 - 20 Mar 2020
Viewed by 312
Abstract
Complex systems arise as a result of nonlinear interactions between components. In particular, the evolutionary dynamics of a multivariate system encodes the ways in which different variables interact with each other individually or in groups. One fundamental question that remains unanswered is: How do two non-overlapping multivariate subsets of variables interact to causally determine the outcome of a specific variable? Here, we provide an information-based approach to address this problem. We delineate the temporal interactions between the bundles in a probabilistic graphical model. The strength of the interactions, captured by partial information decomposition, then exposes complex behavior of dependencies and memory within the system. The proposed approach successfully illustrated the complex dependence between cations and anions as determinants of pH in an observed stream chemistry system. In the studied catchment, the dynamics of pH results from both cations and anions, mainly through synergistic effects of the two but also through their individual influences. This example demonstrates the potentially broad applicability of the approach, establishing the foundation to study the interaction between groups of variables in a range of complex systems. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open Access Article
Underdetermined DOA Estimation for Wideband Signals via Focused Atomic Norm Minimization
Entropy 2020, 22(3), 359; https://doi.org/10.3390/e22030359 - 20 Mar 2020
Viewed by 276
Abstract
In underwater acoustic signal processing, direction of arrival (DOA) estimation can provide important information for target tracking and localization. To address underdetermined wideband signal processing in underwater passive detection systems, this paper proposes a novel underdetermined wideband DOA estimation method for the nested array (NA) using focused atomic norm minimization (ANM), where the number of signal sources is detected via information-theoretic criteria. In the proposed DOA estimation method, after vectorizing the covariance matrix of each frequency bin, each resulting vector is focused into the predefined frequency bin by a focusing matrix. The averaged collected vector is then treated as a virtual array model, whose steering vector exhibits a Vandermonde structure in terms of the obtained virtual array geometry. Further, a new covariance matrix is recovered via ANM by semi-definite programming (SDP), which exploits its Toeplitz structure. Finally, the Root-MUSIC algorithm is applied to estimate the DOAs. Simulation results show that the proposed method outperforms other underdetermined DOA estimation methods based on information-theoretic criteria in terms of estimation accuracy. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)

Open Access Article
Numerical Analysis on Natural Convection Heat Transfer in a Single Circular Fin-Tube Heat Exchanger (Part 2): Correlations for Limiting Cases
Entropy 2020, 22(3), 358; https://doi.org/10.3390/e22030358 - 20 Mar 2020
Viewed by 239
Abstract
This research focused on the correlations associated with the physics of natural convection in circular fin-tube models. Two limiting conditions are defined. The lower limit (D_o/D → 1, s/D = finite value) corresponds to a horizontal circular tube, while the upper limit (D_o/D → ∞, s/D << 1) corresponds to vertical flat plates. In this paper, we propose a corrected correlation based on empirical results. The circular fin-tube heat exchanger was divided into types A and B, the categorizing criterion being D_o/D = 1.2, where D and D_o refer to the diameter of the circular tube and the diameter of the circular fin, respectively. Moreover, with the computational fluid dynamics technique used to investigate the limiting conditions, the parametric range for type B was extended substantially in this research, namely 1.2 < D_o/D ≤ 10. The complex correlation was also simplified to the form Nu_L = C·Ra_s^n, where C and n are functions of the diameter ratio D_o/D. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article
The Design of Global Correlation Quantifiers and Continuous Notions of Statistical Sufficiency
Entropy 2020, 22(3), 357; https://doi.org/10.3390/e22030357 - 19 Mar 2020
Viewed by 341
Abstract
Using first principles from inference, we design a set of functionals for the purposes of ranking joint probability distributions with respect to their correlations. Starting with a general functional, we impose its desired behavior through the Principle of Constant Correlations (PCC), which constrains the correlation functional to behave in a consistent way under statistically independent inferential transformations. The PCC guides us in choosing the appropriate design criteria for constructing the desired functionals. Since the derivations depend on a choice of partitioning the variable space into n disjoint subspaces, the general functional we design is the n-partite information (NPI), of which the total correlation and mutual information are special cases. Thus, these functionals are found to be uniquely capable of determining whether a certain class of inferential transformations, ρ → ρ′, preserve, destroy or create correlations. This provides conceptual clarity by ruling out other possible global correlation quantifiers. Finally, the derivation and results allow us to quantify non-binary notions of statistical sufficiency. Our results express what percentage of the correlations are preserved under a given inferential transformation or variable mapping. Full article
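The total correlation, one of the NPI special cases named in the abstract, has a simple empirical estimator: the sum of the marginal entropies minus the joint entropy. A generic sketch (not taken from the paper; names are hypothetical):

```python
from collections import Counter
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as value -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(samples):
    """Total correlation C(X1..Xn) = sum_i H(Xi) - H(X1..Xn),
    estimated from a list of equally likely joint outcomes (tuples)."""
    n = len(samples)
    joint = {k: v / n for k, v in Counter(samples).items()}
    marginal_sum = 0.0
    for i in range(len(samples[0])):
        marg = {k: v / n for k, v in Counter(s[i] for s in samples).items()}
        marginal_sum += entropy(marg)
    return marginal_sum - entropy(joint)

# Two perfectly copied bits share 1 bit of total correlation;
# two independent uniform bits share none.
copied = [(0, 0), (1, 1)] * 4
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 2
print(total_correlation(copied))       # 1.0
print(total_correlation(independent))  # 0.0
```

For n = 2 variables this quantity coincides with the mutual information, which is why both appear as special cases of the NPI.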
Open Access Article
Impact of General Anesthesia Guided by State Entropy (SE) and Response Entropy (RE) on Perioperative Stability in Elective Laparoscopic Cholecystectomy Patients—A Prospective Observational Randomized Monocentric Study
Entropy 2020, 22(3), 356; https://doi.org/10.3390/e22030356 - 19 Mar 2020
Viewed by 445
Abstract
Laparoscopic cholecystectomy is one of the most frequently performed interventions in general surgery departments. Some of the most important aims in achieving perioperative stability in these patients are diminishing the impact of general anesthesia on hemodynamic stability and optimizing anesthetic drug doses based on the individual clinical profile of each patient. The objective of this study is to evaluate the impact, as monitored through entropy (both state entropy (SE) and response entropy (RE)), that the depth of anesthesia has on hemodynamic stability, as well as on the doses of volatile anesthetic. A prospective, observational, randomized, monocentric study was carried out between January and December 2019 in the Clinic of Anesthesia and Intensive Care of the “Pius Brînzeu” Emergency County Hospital in Timișoara, Romania. The patients included in the study were divided into two study groups: patients in Group A (target group) received multimodal monitoring, which included monitoring of standard parameters and of entropy (SE and RE), while the patients in Group B (control group) received only standard monitoring. The anesthetic dose in Group A was optimized to achieve a target entropy of 40–60. A total of 68 patients met the inclusion criteria and were allocated to one of the two study groups: Group A (N = 43) or Group B (N = 25). There were no statistically significant differences between the two groups in either demographic or clinical characteristics (p > 0.05). Statistically significant differences were identified for the number of hypotensive episodes (p = 0.011, 95% CI: [0.1851, 0.7042]) and for the number of episodes of bradycardia (p < 0.0001, 95% CI: [0.3296, 0.7923]). Moreover, there was a significant difference in sevoflurane consumption between the two study groups (p = 0.0498, 95% CI: [−0.3942, 0.9047]).
The implementation of the multimodal monitoring protocol, including the standard parameters and the measurement of entropy for determining the depth of anesthesia (SE and RE) led to a considerable improvement in perioperative hemodynamic stability. Furthermore, optimizing the doses of anesthetic drugs based on the individual clinical profile of each patient led to a considerable decrease in drug consumption, as well as to a lower incidence of hemodynamic side-effects. Full article
Open AccessArticle
Improved Practical Vulnerability Analysis of Mouse Data According to Offensive Security based on Machine Learning in Image-Based User Authentication
Entropy 2020, 22(3), 355; https://doi.org/10.3390/e22030355 - 18 Mar 2020
Viewed by 325
Abstract
The objective of this study was to verify the feasibility of mouse data exposure by deriving features that improve the accuracy of a mouse-data attack technique based on machine learning models. To improve accuracy, we analyzed the patterns appearing between the mouse coordinates input by the user and defined them as features for the machine learning models. As a result, we found that the distance between consecutive coordinates is concentrated in a specific range. We verified that mouse data can be stolen more accurately when this distance is used as a feature. An accuracy of over 99% was achieved, which means that the proposed method almost completely distinguishes the mouse data input by the user from the mouse data generated by the defender. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
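The distance feature described above can be sketched as follows; the trace data and the interpretation are illustrative assumptions, not the authors' dataset or attack model.

```python
import math

def consecutive_distances(points):
    """Euclidean distance between each pair of consecutive mouse coordinates."""
    return [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]

# Hypothetical trace: genuine user input tends to produce step distances
# concentrated in a narrow range, which is what makes distance a telling
# feature for separating real input from defender-generated decoys.
user_trace = [(0, 0), (3, 4), (6, 8), (9, 12)]
distances = consecutive_distances(user_trace)  # [5.0, 5.0, 5.0]
```

In a full attack pipeline, these per-step distances (or a histogram of them) would be fed to a classifier alongside the raw coordinates.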
Open AccessArticle
Radiative MHD Nanofluid Flow over a Moving Thin Needle with Entropy Generation in a Porous Medium with Dust Particles and Hall Current
Entropy 2020, 22(3), 354; https://doi.org/10.3390/e22030354 - 18 Mar 2020
Viewed by 286
Abstract
This paper investigated the behavior of two-dimensional magnetohydrodynamic (MHD) nanofluid flow of water-based suspended carbon nanotubes (CNTs), with entropy generation and nonlinear thermal radiation, in a Darcy–Forchheimer porous medium over a moving horizontal thin needle. The study also incorporated the effects of Hall current, magnetohydrodynamics, and viscous dissipation on dust particles. The flow model was described by high-order partial differential equations, and an appropriate set of transformations was used to reduce their order. The reduced system was then solved using the MATLAB solver bvp4c. The results obtained were compared with the existing literature, and excellent agreement was found. The results were presented in graphs and tables with a coherent discussion. The Hall current parameter was found to intensify the velocity profiles for both types of CNTs, and the Bejan number increased for higher values of the Darcy–Forchheimer number. Full article
(This article belongs to the Special Issue Thermal Radiation and Entropy Analysis)
Open AccessArticle
Gaussian Curvature Entropy for Curved Surface Shape Generation
Entropy 2020, 22(3), 353; https://doi.org/10.3390/e22030353 - 18 Mar 2020
Viewed by 351
Abstract
The overall shape features that emerge from combinations of shape elements, such as "complexity" and "order", are important in designing the shapes of industrial products. However, controlling these features is difficult and depends on the experience and intuition of designers. Among these features, "complexity" is said to influence the "beauty" and "preference" of shapes. This research proposed a Gaussian curvature entropy as a "complexity" index of a curved surface shape. The proposed index is calculated from the Gaussian curvature obtained by sampling and quantizing a curved surface shape, and it was validated by sensory evaluation experiments using two types of sample shapes. The results indicate that the index corresponds well to perceived "complexity" (the coefficient of determination is greater than 0.8). Additionally, this research constructed a shape generation method based on the index as a car-design support tool, with which designers can browse many shapes generated by controlling "complexity". The applicability of the proposed method was confirmed by an experiment using the generated shapes. Full article
(This article belongs to the Special Issue Intelligent Tools and Applications in Engineering and Mathematics)
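The core of such an index, Shannon entropy over quantized curvature samples, can be sketched as below; the bin count and the input samples are illustrative assumptions, not the paper's exact sampling and quantization scheme.

```python
import math
from collections import Counter

def curvature_entropy(curvatures, n_bins=8):
    """Shannon entropy (bits) of quantized Gaussian curvature samples taken
    from a surface; higher values indicate a more 'complex' shape."""
    lo, hi = min(curvatures), max(curvatures)
    width = (hi - lo) / n_bins or 1.0  # degenerate case: all values equal
    bins = Counter(min(int((k - lo) / width), n_bins - 1) for k in curvatures)
    n = len(curvatures)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A sphere-like patch (constant curvature) collapses into one bin and scores 0;
# a patch whose curvature varies spreads over the bins and scores higher.
h_flat = curvature_entropy([0.5] * 16)
h_varied = curvature_entropy([0.1 * i for i in range(16)])
```

Generating candidate shapes at a designer-chosen "complexity" level then amounts to searching for surfaces whose index lands near a target value.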
Open AccessArticle
Stability Analysis of the Explicit Difference Scheme for Richards Equation
Entropy 2020, 22(3), 352; https://doi.org/10.3390/e22030352 - 18 Mar 2020
Viewed by 274
Abstract
A stable explicit difference scheme, based on the forward Euler method, is proposed for the Richards equation. To avoid the degeneracy of the Richards equation, we add a perturbation to the functional coefficient of the parabolic term. In addition, we introduce an extra term in the difference scheme that relaxes the time-step restriction and thereby improves the stability condition. With these augmented terms, we prove stability using the induction method. Numerical experiments show the validity, accuracy, and efficiency of the scheme. Full article
(This article belongs to the Special Issue Applications of Nonlinear Diffusion Equations)
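A minimal sketch of the perturbation idea, for a generic degenerate diffusion equation u_t = (D(u)u_x)_x: the small constant eps stands in for the paper's perturbation of the parabolic coefficient, and the paper's extra stabilizing term is not reproduced here.

```python
def explicit_step(u, dt, dx, diffusivity, eps=1e-6):
    """One forward-Euler step for u_t = (D(u) u_x)_x on a uniform grid, with a
    small perturbation eps added to D to avoid degeneracy (D -> 0).
    Boundary values are held fixed (Dirichlet)."""
    new = list(u)
    for i in range(1, len(u) - 1):
        # Harmonic-free face values: average D at the two cell interfaces.
        d_right = 0.5 * (diffusivity(u[i]) + diffusivity(u[i + 1])) + eps
        d_left = 0.5 * (diffusivity(u[i - 1]) + diffusivity(u[i])) + eps
        new[i] = u[i] + dt / dx**2 * (
            d_right * (u[i + 1] - u[i]) - d_left * (u[i] - u[i - 1]))
    return new

# A constant profile is a steady state and should be preserved exactly.
assert explicit_step([1.0] * 6, dt=1e-4, dx=0.1, diffusivity=lambda v: v) == [1.0] * 6
```

Without extra stabilization, such a scheme obeys the usual explicit restriction of roughly dt ≲ dx²/(2·max D); the paper's augmented term is aimed at loosening exactly this constraint.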
Open AccessArticle
Weighted Mutual Information for Aggregated Kernel Clustering
Entropy 2020, 22(3), 351; https://doi.org/10.3390/e22030351 - 18 Mar 2020
Viewed by 281
Abstract
Background: A common task in machine learning is clustering data into different groups based on similarities. Clustering methods can be divided into two groups: linear and nonlinear. A commonly used linear clustering method is K-means. Its extension, kernel K-means, is a nonlinear technique that uses a kernel function to project the data into a higher-dimensional space, where the projected data are then clustered. Different kernels do not perform similarly when applied to different datasets. Methods: A kernel function might be well suited to one application but project data poorly in another, so choosing the right kernel for an arbitrary dataset is a challenging task. To address this challenge, one potential approach is to aggregate the clustering results so as to obtain an impartial result regardless of the selected kernel function; the main difficulty is then how to aggregate the clustering results. A potential solution is to combine them using a weight function. In this work, we introduce Weighted Mutual Information (WMI), which calculates weights for the different clustering methods based on their performance, evaluated on a training set with known labels, and combines the results accordingly. Results: We applied the proposed Weighted Mutual Information to four datasets that cannot be linearly separated, and also tested the method under different noise conditions. Conclusions: Our results show that the proposed Weighted Mutual Information method is impartial, does not rely on a single kernel, and performs better than each individual kernel, especially under high noise. Full article
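The weighting step can be sketched as follows: score each kernel's clustering by its mutual information with the known training labels, then normalize the scores into weights. This is an illustrative reading of WMI; the paper's exact weight function and aggregation rule may differ, and the clusterings below are hypothetical.

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Mutual information (in bits) between two label assignments."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

# Hypothetical clusterings of a labelled training set by two kernels:
truth = [0, 0, 1, 1]
clusterings = {"rbf": [0, 0, 1, 1], "poly": [0, 1, 0, 1]}
mi = {k: mutual_information(v, truth) for k, v in clusterings.items()}
total = sum(mi.values())
weights = {k: v / total for k, v in mi.items()}
# "rbf" reproduces the labels exactly (MI = 1 bit) and gets all the weight;
# "poly" is independent of the labels (MI = 0) and gets none.
```

The aggregated clustering would then combine each kernel's result in proportion to these weights, so a kernel that fits the data poorly contributes little.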
Open AccessArticle
Invariant-Based Inverse Engineering for Fast and Robust Load Transport in a Double Pendulum Bridge Crane
Entropy 2020, 22(3), 350; https://doi.org/10.3390/e22030350 - 18 Mar 2020
Viewed by 338
Abstract
We develop a shortcut-to-adiabaticity strategy to design the trolley motion in a double-pendulum bridge crane. The trajectories found guarantee payload transport without residual excitation, regardless of the initial conditions, within the small-oscillations regime. The results are compared with the exact dynamics to establish the working domain of the approach. The method is free from instabilities due to boundary effects or resonances with the two natural frequencies. Full article
(This article belongs to the Special Issue Shortcuts to Adiabaticity)
Open AccessArticle
A Note on Wavelet-Based Estimator of the Hurst Parameter
Entropy 2020, 22(3), 349; https://doi.org/10.3390/e22030349 - 18 Mar 2020
Viewed by 286
Abstract
Signals in numerous fields usually exhibit scaling behaviors (long-range dependence and self-similarity), which are characterized by the Hurst parameter H. Fractional Brownian motion (FBM) plays an important role in modeling signals with self-similarity and long-range dependence. Wavelet analysis is a common method for signal processing and has been used to estimate the Hurst parameter. This paper conducts a detailed numerical simulation study, in the case of FBM, of the selection of parameters and of the empirical bias in the wavelet-based estimator, neither of which has been studied comprehensively in previous work, especially the empirical bias. The results show that the empirical bias is due to the initialization errors caused by discrete sampling and is not related to the simulation method. When an appropriate orthogonal, compactly supported wavelet is chosen, the empirical bias is almost unrelated to the inaccurate bias correction caused by correlations of the wavelet coefficients. These latter two causes are studied via a comparison of estimators and a comparison of simulation methods. The results can serve as a reference for future studies and applications concerning the scaling behavior of signals, and some preliminary results of this study have already provided a reference for our previous work. Full article
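The estimator's log-variance regression can be sketched with a Haar transform; the paper considers general orthogonal, compactly supported wavelets, and the Haar filter and random-walk test signal here are assumptions made for illustration only.

```python
import math
import random
from itertools import accumulate

def haar_details(signal):
    """Detail coefficients of a multi-level Haar wavelet transform,
    finest scale first."""
    approx, details = list(signal), []
    while len(approx) >= 2:
        half = len(approx) // 2
        details.append([(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2)
                        for i in range(half)])
        approx = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(half)]
    return details

def hurst_estimate(signal, levels=4):
    """For FBM, the detail-coefficient variance at octave j scales like
    2**(j*(2H+1)), so H is read off the slope of log2-variance versus j."""
    pts = []
    for j, d in enumerate(haar_details(signal)[:levels], start=1):
        pts.append((j, math.log2(sum(c * c for c in d) / len(d))))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return (slope - 1) / 2

# Rough check on a plain random walk (true H = 0.5); the estimate typically
# lands somewhat low, reflecting the discrete-sampling bias the paper analyzes.
random.seed(0)
walk = list(accumulate(random.choice([-1, 1]) for _ in range(4096)))
```
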
Open AccessEditorial
Carnot Cycle and Heat Engine: Fundamentals and Applications
Entropy 2020, 22(3), 348; https://doi.org/10.3390/e22030348 - 18 Mar 2020
Viewed by 326
Abstract
After two years of exchanges, this Special Issue dedicated to the Carnot cycle and thermomechanical engines has been completed with ten papers, including this editorial [...] Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
Open AccessArticle
A Novel Microwave Treatment for Sleep Disorders and Classification of Sleep Stages Using Multi-Scale Entropy
Entropy 2020, 22(3), 347; https://doi.org/10.3390/e22030347 - 17 Mar 2020
Viewed by 386
Abstract
The aim of this study was to develop an integrated system for non-contact sleep stage detection and sleep disorder treatment for health monitoring. To this end, a method of brain activity detection based on microwave scattering, instead of scalp electroencephalography, was developed to evaluate the sleep stage. First, microwaves at a specific frequency were used to penetrate the functional sites of the brain in patients with sleep disorders, changing the firing frequency of the activated brain areas, and the effects on sleep improvement were analyzed and evaluated statistically. Then, a wavelet packet algorithm was used to decompose the microwave transmission signal, and the refined composite multiscale sample entropy, the refined composite multiscale fluctuation-based dispersion entropy, and the multivariate multiscale weighted permutation entropy were obtained as features from the wavelet packet coefficients. Finally, the mutual information-principal component analysis feature selection method was used to optimize the feature set, and a random forest was used to classify and evaluate the sleep stage. The results show that after four microwave modulation treatments, sleep efficiency improved continuously and was maintained above 80% overall, while the insomnia rate was gradually reduced. The overall classification accuracy over the four sleep stages was 86.4%. These results indicate that microwaves at a certain frequency can treat sleep disorders and detect abnormal brain activity. The microwave scattering method is therefore of great significance for the development of new systems for brain disease treatment, diagnosis, and clinical application. Full article
(This article belongs to the Section Entropy and Biology)
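The multiscale entropy features named above share two primitives, coarse-graining and sample entropy, sketched here. The refined composite variants in the paper average over shifted coarse-grainings, and the tolerance r is usually a fraction of the signal's standard deviation; both are simplified here as assumptions.

```python
import math

def coarse_grain(signal, scale):
    """Non-overlapping window averages: the coarse-graining step of
    multiscale entropy."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.1):
    """Plain sample entropy, SampEn = -ln(A/B), where B counts template-pair
    matches of length m and A those of length m+1 (Chebyshev distance <= r)."""
    def matches(mm):
        n = len(x) - mm + 1
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r)
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# A strictly periodic signal is highly predictable, so its entropy is near 0.
periodic = [1.0, 2.0] * 20
```

A multiscale curve is then obtained by evaluating `sample_entropy(coarse_grain(signal, tau))` over a range of scales tau.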
Open AccessArticle
The Truncated Cauchy Power Family of Distributions with Inference and Applications
Entropy 2020, 22(3), 346; https://doi.org/10.3390/e22030346 - 17 Mar 2020
Viewed by 390
Abstract
The statistical literature lacks a general family of distributions based on the truncated Cauchy distribution. In this paper, such a family is proposed, called the truncated Cauchy power-G family. It stands out for the originality of the functions involved, its overall simplicity, and its desirable properties for modelling purposes. In particular, (i) only one parameter is added to the baseline distribution, avoiding the over-parametrization phenomenon; (ii) the related probability functions (cumulative distribution, probability density, hazard rate, and quantile functions) have tractable expressions; and (iii) thanks to the combined action of the arctangent and power functions, the flexible properties of the baseline distribution (symmetry, skewness, kurtosis, etc.) can be greatly enhanced. These aspects are discussed in detail, with the support of comprehensive numerical and graphical results. Furthermore, important mathematical features of the new family are derived, such as the moments, skewness and kurtosis, two kinds of entropy, and order statistics. On the applied side, new models can be created for fitting data sets with simple or complex structure. This last point is illustrated using the Weibull distribution as baseline, the maximum likelihood method of estimation, and two practical data sets with different skewness properties. The obtained results show that the truncated Cauchy power-G family is very competitive in comparison to other well-established general families. Full article
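One natural reading of this construction composes the standard Cauchy CDF truncated to (0,1), u -> (4/pi)·arctan(u), with a power of the baseline CDF; this parametrization and the Weibull parameters below are assumptions for illustration, and the paper's exact definitions may differ.

```python
import math

def truncated_cauchy_power_cdf(x, lam, baseline_cdf):
    """Sketch of a truncated-Cauchy-power CDF: the (0,1)-truncated standard
    Cauchy CDF composed with G(x)**lam, adding one parameter lam to the
    baseline CDF G."""
    return (4.0 / math.pi) * math.atan(baseline_cdf(x) ** lam)

def weibull_cdf(x, shape=2.0, scale=1.0):
    """Weibull baseline, the case the abstract uses for its applications."""
    return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0
```

A quick sanity check confirms it behaves like a CDF: it is 0 at the lower end, tends to 1, and is non-decreasing in x for any valid baseline.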
Open AccessArticle
On the Rate-Distortion Function of Sampled Cyclostationary Gaussian Processes
Entropy 2020, 22(3), 345; https://doi.org/10.3390/e22030345 - 17 Mar 2020
Viewed by 302
Abstract
Man-made communications signals are typically modelled as continuous-time (CT) wide-sense cyclostationary (WSCS) processes. As modern processing is digital, it is applied to discrete-time (DT) processes obtained by sampling the CT processes. When sampling is applied to a CT WSCS process, the statistics of the resulting DT process depend on the relationship between the sampling interval and the period of the statistics of the CT process: when these two parameters have a common integer factor, the DT process is WSCS, a situation referred to as synchronous sampling; when this is not the case, referred to as asynchronous sampling, the resulting DT process is wide-sense almost cyclostationary (WSACS). The sampled CT processes are commonly encoded using a source code to facilitate storage or transmission over wireless networks, e.g., using compress-and-forward relaying. In this work, we study the fundamental tradeoff between rate and distortion for source codes applied to sampled CT WSCS processes, characterized via the rate-distortion function (RDF). While the RDF characterization for synchronous sampling follows directly from classic information-theoretic tools utilizing ergodicity and the law of large numbers, under asynchronous sampling the resulting process is not information stable. In such cases, the commonly used information-theoretic tools are inapplicable to RDF analysis, which poses a major challenge. Using the information-spectrum framework, we show that the RDF for asynchronous sampling in the low-distortion regime can be expressed as the limit superior of a sequence of RDFs, each corresponding to the RDF of a synchronously sampled WSCS process (although their limit is not guaranteed to exist). The resulting characterization yields novel insights into the relationship between sampling synchronization and the RDF. For example, we demonstrate that, unlike for stationary processes, small differences in the sampling rate and the sampling time offset can notably affect the RDF of sampled CT WSCS processes. Full article
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives)
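As background for the synchronous case: once a sampled Gaussian process decomposes into independent components, its RDF reduces to the classic reverse water-filling computation sketched below. The component variances are hypothetical, and the asynchronous case is precisely where this direct computation fails and the limit-superior characterization is needed.

```python
import math

def gaussian_rdf(variances, distortion):
    """Rate (in bits) for independent Gaussian components via reverse
    water-filling: distort each component down to a common water level
    theta, found here by bisection so the distortions sum to the budget."""
    lo, hi = 0.0, max(variances)
    for _ in range(200):
        theta = (lo + hi) / 2
        if sum(min(theta, s) for s in variances) > distortion:
            hi = theta
        else:
            lo = theta
    theta = (lo + hi) / 2
    # Components below the water level are set to zero rate.
    return sum(0.5 * math.log2(s / theta) for s in variances if s > theta)

# One component of variance 1 at distortion 0.25 needs 0.5*log2(1/0.25) = 1 bit.
```
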
Open AccessArticle
No-Reference Image Quality Assessment Based on Dual-Domain Feature Fusion
Entropy 2020, 22(3), 344; https://doi.org/10.3390/e22030344 - 17 Mar 2020
Viewed by 293
Abstract
Image quality assessment (IQA) aims to devise computational models that evaluate image quality in a perceptually consistent manner. In this paper, a novel no-reference image quality assessment model based on dual-domain feature fusion, dubbed DFF-IQA, is proposed. Firstly, in the spatial domain, several features are extracted relating to the weighted local binary pattern, naturalness, and spatial entropy, where the naturalness features are represented by fitting parameters of the generalized Gaussian distribution. Secondly, in the frequency domain, features of spectral entropy, oriented energy distribution, and fitting parameters of the asymmetric generalized Gaussian distribution are extracted. Thirdly, the features extracted in the two domains are fused to form the quality-aware feature vector. Finally, quality regression by random forest is conducted to build the relationship between image features and quality scores, yielding a measure of image quality. The resulting algorithm is tested on the LIVE database and compared with competing IQA models. Experimental results on the LIVE database indicate that the proposed DFF-IQA method is more consistent with the human visual system than the competing IQA methods. Full article
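The two entropy features can be sketched in one dimension; the 8-point test signal and the naive DFT are illustrative assumptions, and a real IQA pipeline would compute these over blocks of a 2-D image.

```python
import math
from collections import Counter

def spatial_entropy(pixels):
    """Shannon entropy (bits) of the intensity histogram: the
    spatial-domain feature."""
    n = len(pixels)
    counts = Counter(int(p) for p in pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def spectral_entropy(signal):
    """Entropy (bits) of the normalized power spectrum: the frequency-domain
    feature, computed with a naive DFT for clarity."""
    n = len(signal)
    power = []
    for k in range(n):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        power.append(re * re + im * im)
    total = sum(power)
    # Drop numerically-zero bins before taking logs.
    return -sum(p / total * math.log2(p / total)
                for p in power if p > 1e-12 * total)

# A pure cosine concentrates power in two symmetric DFT bins: entropy = 1 bit.
tone = [math.cos(2 * math.pi * i / 8) for i in range(8)]
```

Distorted images tend to shift both entropies relative to natural images, which is what makes them useful quality-aware features.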