- Bell Diagonal States: Entanglement, Non-Locality, Steering and Discord on the IBM Quantum Computer
- Quantifying Miscibility in Quantum Fluids
- Data-Driven Analysis of Nonlinear Heterogeneous Reactions through Sparse Modeling and Bayesian Statistical Approaches
- Social Accuracy-Risk Trade-Off in Financial Predictions
- Geometric Variational Inference
Journal Description
Entropy
Entropy is an international and interdisciplinary peer-reviewed open access journal of entropy and information studies, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) and the Spanish Society of Biomedical Engineering (SEIB) are affiliated with Entropy, and their members receive a discount on the article processing charge.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), MathSciNet, Inspec, PubMed, PMC, and many other databases.
- Journal Rank: JCR - Q2 (Physics, Multidisciplinary) / CiteScore - Q1 (Mathematical Physics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.2 days after submission; acceptance to publication takes 3.4 days (median values for papers published in this journal in the first half of 2021).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our authors say about Entropy.
- Companion journals for Entropy include: Foundations and Thermo.
Impact Factor: 2.524 (2020); 5-Year Impact Factor: 2.587 (2020)
Latest Articles
Effects of Cardiac Resynchronization Therapy on Cardio-Respiratory Coupling
Entropy 2021, 23(9), 1126; https://doi.org/10.3390/e23091126 - 30 Aug 2021
Abstract
In this study, the effect of cardiac resynchronization therapy (CRT) on the relationship between the cardiovascular and respiratory systems in heart failure subjects was examined for the first time. We hypothesized that alterations in cardio-respiratory interactions, after CRT implantation, quantified by signal complexity, could be a marker of a favorable CRT response. Sample entropy and scaling exponents were calculated from synchronously recorded cardiac and respiratory signals 20 min in duration, collected in 47 heart failure patients at rest, before and 9 months after CRT implantation. Further, cross-sample entropy between these signals was calculated. After CRT, all patients had a lower heart rate and CRT responders had a reduced breathing frequency. Results revealed that higher cardiac rhythm complexity in CRT non-responders was associated with weak correlations of the cardiac rhythm over long scales at the baseline measurement and over short scales at the follow-up recording. Unlike in CRT responders, in non-responders a significant difference in respiratory rhythm complexity between measurements could be a consequence of divergent changes in the correlation properties of the respiratory signal over short and long scales. Asynchrony between cardiac and respiratory rhythm increased significantly in CRT non-responders during follow-up. Quantification of complexity and synchrony between cardiac and respiratory signals shows significant associations between CRT success and the stability of cardio-respiratory coupling.
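The sample entropy measure used in this study is simple to sketch. The following is a minimal illustrative Python version, not the authors' implementation; the template length `m` and tolerance fraction `r` are the usual free parameters:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series; r is a tolerance given as a
    fraction of the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def count_matches(length):
        # Overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < tol)
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Regular signals (e.g. a sine wave) score markedly lower than white noise, which is what makes the measure usable as a complexity marker.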
Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
Open AccessArticle
Understanding the Nature of the Long-Range Memory Phenomenon in Socioeconomic Systems
Entropy 2021, 23(9), 1125; https://doi.org/10.3390/e23091125 - 29 Aug 2021
Abstract
In the face of the upcoming 30th anniversary of econophysics, we review our contributions and other related works on the modeling of the long-range memory phenomenon in physical, economic, and other social complex systems. Our group has shown that the long-range memory phenomenon can be reproduced using various Markov processes, such as point processes, stochastic differential equations, and agent-based models—reproduced well enough to match other statistical properties of the financial markets, such as return and trading activity distributions and first-passage time distributions. Research has led us to question whether the observed long-range memory is a result of an actual long-range memory process or just a consequence of the non-linearity of Markov processes. As our most recent result, we discuss the long-range memory of the order flow data in the financial markets and other social systems from the perspective of fractional Lévy stable motion. We test widely used long-range memory estimators on discrete fractional Lévy stable motion represented by the auto-regressive fractionally integrated moving average (ARFIMA) sample series. Our newly obtained results seem to indicate that new estimators of self-similarity and long-range memory have to be developed for analyzing systems with non-Gaussian distributions.
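A long-range memory estimator of the kind tested above is easy to sketch. The aggregated-variance method below is an illustrative stand-in (not one of the specific estimators evaluated in the paper): it estimates a Hurst-like self-similarity exponent H from the scaling of block-mean variances, Var(X^(m)) ~ m^(2H-2). As the abstract cautions, such estimators can be unreliable for non-Gaussian series.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent H from the scaling of block-mean
    variances: Var of m-block means ~ m^(2H - 2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        # Non-overlapping block means at aggregation level m.
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(np.var(means)))
    # Slope of the log-log plot gives 2H - 2.
    slope, _ = np.polyfit(log_m, log_v, 1)
    return 1.0 + slope / 2.0
```

For i.i.d. noise the slope is close to -1, i.e. H ≈ 0.5 (no long-range memory); persistent series give H > 0.5.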
Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
Open AccessArticle
State Estimation of an Underwater Markov Chain Maneuvering Target Using Intelligent Computing
Entropy 2021, 23(9), 1124; https://doi.org/10.3390/e23091124 - 29 Aug 2021
Abstract
In this study, an application of deep learning-based neural computing is proposed for efficient real-time state estimation of a Markov chain underwater maneuvering object. The designed intelligent strategy exploits the strength of the nonlinear autoregressive with exogenous input (NARX) network model, which has the capability of estimating the dynamics of systems that follow a discrete-time Markov chain. Nonlinear Bayesian filtering techniques are often applied to underwater maneuvering state estimation by following a state-space methodology. The robustness and precision of the NARX neural network are efficiently investigated for accurate state prediction of a passive Markov chain highly maneuvering underwater target. A continuous coordinated turning trajectory of an underwater maneuvering object is modeled for analyzing the performance of the neural computing paradigm. State estimation modeling is developed in the context of bearings-only tracking technology, in which the efficiency of the NARX neural network is investigated for ideal and complex ocean environments. The real-time position and velocity of the maneuvering object are computed for five different cases by varying the standard deviation of white Gaussian measurement noise. Extensive Monte Carlo simulation results validate the competence of NARX neural computing over conventional generalized pseudo-Bayesian filtering algorithms such as the interacting multiple model extended Kalman filter and the interacting multiple model unscented Kalman filter.
Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics II)
Open AccessArticle
Overview of Machine Learning Process Modelling
Entropy 2021, 23(9), 1123; https://doi.org/10.3390/e23091123 - 28 Aug 2021
Abstract
Much research has been conducted in the area of machine learning algorithms; however, the question of a general description of an artificial learner’s (empirical) performance has mainly remained unanswered. A general, restrictions-free theory on its performance has not been developed yet. In this study, we investigate which function most appropriately describes learning curves produced by several machine learning algorithms, and how well these curves can predict the future performance of an algorithm. Decision trees, neural networks, Naïve Bayes, and Support Vector Machines were applied to 130 datasets from publicly available repositories. Three different functions (power, logarithmic, and exponential) were fit to the measured outputs. Using rigorous statistical methods and two measures for the goodness-of-fit, the power law model proved to be the most appropriate model for describing the learning curve produced by the algorithms in terms of goodness-of-fit and prediction capabilities. The presented study, first of its kind in scale and rigour, provides results (and methods) that can be used to assess the performance of novel or existing artificial learners and forecast their ‘capacity to learn’ based on the amount of available or desired data.
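The curve-fitting step described above can be sketched with `scipy.optimize.curve_fit`. The data points below are hypothetical error rates, included only to show the shape of the procedure, not results from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law learning-curve model: error as a function of training-set size n.
def power_law(n, a, b, c):
    return a * n ** (-b) + c

# Hypothetical measured error rates at increasing training-set sizes.
sizes = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
errors = np.array([0.42, 0.31, 0.24, 0.20, 0.17, 0.155])

# Fit the model, then forecast performance at a larger (unseen) data size.
params, _ = curve_fit(power_law, sizes, errors, p0=(1.0, 0.5, 0.1))
extrapolated = power_law(6400.0, *params)
```

The fitted asymptote `c` is the forecast 'capacity to learn': the error floor the learner approaches as data grows.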
Full article
(This article belongs to the Special Issue Information-Theoretic Data Mining)
Open AccessArticle
Computation of Kullback–Leibler Divergence in Bayesian Networks
Entropy 2021, 23(9), 1122; https://doi.org/10.3390/e23091122 - 28 Aug 2021
Abstract
Kullback–Leibler divergence is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, such as approximate computation, or as a measure of error when learning a probability distribution. For high-dimensional probability distributions, such as those associated with Bayesian networks, a direct computation can be infeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, which may have different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever possible. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, based on pgmpy, a library for working with probabilistic graphical models.
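For a small network, the divergence can be computed directly from the two joint distributions. The brute-force sketch below illustrates the definition, and also why it breaks down in high dimensions: the joint table grows exponentially with the number of variables, which is exactly what the paper's marginal-based deletion algorithm avoids. This is a minimal illustration, not the paper's algorithm:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback–Leibler divergence D(p || q) for discrete distributions
    given as arrays of probabilities; 0 * log(0) is taken as 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

For a Bayesian network over n binary variables, `p` and `q` would each be the flattened joint of 2**n entries, so this direct route is only viable for toy models.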
Full article
(This article belongs to the Special Issue Bayesian Inference in Probabilistic Graphical Models)
Open AccessArticle
ECG Signal Classification Using Deep Learning Techniques Based on the PTB-XL Dataset
Entropy 2021, 23(9), 1121; https://doi.org/10.3390/e23091121 - 28 Aug 2021
Abstract
The analysis and processing of ECG signals are a key approach in the diagnosis of cardiovascular diseases. The main field of work in this area is classification, which is increasingly supported by machine learning-based algorithms. In this work, a deep neural network was developed for the automatic classification of primary ECG signals. The research was carried out on the data contained in the PTB-XL database. Three neural network architectures were proposed: the first based on a convolutional network, the second on SincNet, and the third on a convolutional network with additional entropy-based features. The dataset was divided into training, validation, and test sets in proportions of 70%, 15%, and 15%, respectively. The studies were conducted for 2, 5, and 20 classes of disease entities. The convolutional network with entropy features obtained the best classification result. The convolutional network without entropy-based features obtained a slightly less successful result, but had the highest computational efficiency, due to its significantly lower number of neurons.
Full article
(This article belongs to the Special Issue Advances in Computer Recognition, Image Processing and Communications, Selected Papers from CORES 2021 and IP&C 2021)
Open AccessArticle
Logic Programming with Post-Quantum Cryptographic Primitives for Smart Contract on Quantum-Secured Blockchain
Entropy 2021, 23(9), 1120; https://doi.org/10.3390/e23091120 - 28 Aug 2021
Abstract
This paper investigates the usage of logic and logic programming in the design of smart contracts. Our starting point is the logic-based programming language for smart contracts used in a recently proposed framework of quantum-secured blockchain, called Logicontract (LC). We then extend the logic used in LC by answer set programming (ASP), a modern approach to declarative logic programming. Using ASP enables us to write various interesting smart contracts, such as conditional payment, commitment, multi-party lottery and legal service. A striking feature of our ASP implementation proposal is that it involves post-quantum cryptographic primitives, such as the lattice-based public key encryption and signature. The adoption of the post-quantum cryptographic signature overcomes a specific limitation of LC in which the unconditionally secure signature, despite its strength, offers limited protection for users of the same node.
Full article
(This article belongs to the Collection Quantum Information)
Open AccessArticle
Distance-Based Knowledge Measure for Intuitionistic Fuzzy Sets with Its Application in Decision Making
Entropy 2021, 23(9), 1119; https://doi.org/10.3390/e23091119 - 28 Aug 2021
Abstract
Much attention has been paid to constructing an applicable knowledge measure or uncertainty measure for Atanassov’s intuitionistic fuzzy set (AIFS). However, many of these measures were developed from intuitionistic fuzzy entropy, which cannot really reflect the knowledge amount associated with an AIFS well. Some knowledge measures were constructed based on the distinction between an AIFS and its complementary set, which may lead to information loss in decision making. In this paper, the knowledge amount of an AIFS is quantified by calculating the distance from the AIFS to the AIFS with maximum uncertainty. Axiomatic properties for the definition of a knowledge measure are extended to a more general level. The new knowledge measure is then developed based on an intuitionistic fuzzy distance measure. The properties of the proposed distance-based knowledge measure are investigated through mathematical analysis and numerical examples. The proposed knowledge measure is finally applied to solve the multi-attribute group decision-making (MAGDM) problem with intuitionistic fuzzy information. The new MAGDM method is used to evaluate the threat level of malicious code. Experimental results in malicious code threat evaluation demonstrate the effectiveness and validity of the proposed method.
Full article
(This article belongs to the Special Issue Recent Progress of Deng Entropy)
Open AccessArticle
A Study on Consumers’ Visual Image Evaluation of Wrist Wearables
Entropy 2021, 23(9), 1118; https://doi.org/10.3390/e23091118 - 27 Aug 2021
Abstract
This study aimed to investigate consumers’ visual image evaluation of wrist wearables based on Kansei engineering. A total of 8 representative samples were screened from 99 samples using the multidimensional scaling (MDS) method. Five groups of adjectives were identified to allow participants to express their visual impressions of wrist wearable devices through a questionnaire survey and factor analysis. The evaluation of eight samples using the five groups of adjectives was analyzed utilizing the triangle fuzzy theory. The results showed a relatively different evaluation of the eight samples in the groups of “fashionable and individual” and “rational and decent”, but little distinction in the groups of “practical and durable”, “modern and smart” and “convenient and multiple”. Furthermore, wrist wearables with a shape close to a traditional watch dial (round), with a bezel and mechanical buttons (moderate complexity) and asymmetric forms received a higher evaluation. The acceptance of square- and elliptical-shaped wrist wearables was relatively low. Among the square- and rectangular-shaped wrist wearables, the greater the curvature of the chamfer, the higher the acceptance. Apparent contrast between the color of the screen and the casing had good acceptance. The influence of display size on consumer evaluations was relatively small. Similar results were obtained in the evaluation of preferences and willingness to purchase. The results of this study objectively and effectively reflect consumers’ evaluation and potential demand for the visual images of wrist wearables and provide a reference for designers and industry professionals.
Full article
Open AccessArticle
Dimensionality Reduction of SPD Data Based on Riemannian Manifold Tangent Spaces and Isometry
Entropy 2021, 23(9), 1117; https://doi.org/10.3390/e23091117 - 27 Aug 2021
Abstract
Symmetric positive definite (SPD) data have become a hot topic in machine learning. Instead of a linear Euclidean space, SPD data generally lie on a nonlinear Riemannian manifold. To overcome the problems caused by high data dimensionality, dimensionality reduction (DR) is a key subject for SPD data, where bilinear transformation plays a vital role. Because linear operations are not supported in nonlinear spaces such as Riemannian manifolds, directly performing Euclidean DR methods on SPD matrices is inadequate and difficult in complex models and optimization. An SPD data DR method based on Riemannian manifold tangent spaces and global isometry (RMTSISOM-SPDDR) is proposed in this research. The main contributions are as follows: (1) Any Riemannian manifold tangent space is a Hilbert space isomorphic to a Euclidean space. Particularly for SPD manifolds, tangent spaces consist of symmetric matrices, which can greatly preserve the form and attributes of the original SPD data. For this reason, RMTSISOM-SPDDR transfers the bilinear transformation from manifolds to tangent spaces. (2) By the log transformation, original SPD data are mapped to the tangent space at the identity matrix under the affine invariant Riemannian metric (AIRM). In this way, the geodesic distance between an original datum and the identity matrix is equal to the Euclidean distance between the corresponding tangent vector and the origin. (3) The bilinear transformation is further determined by an isometric criterion, which keeps the geodesic distance on the high-dimensional SPD manifold as close as possible to the Euclidean distance in the tangent space of the low-dimensional SPD manifold. This transformation is then used for the DR of the original SPD data. Experiments on five commonly used datasets show that RMTSISOM-SPDDR is superior to five advanced SPD data DR algorithms.
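The log map in contribution (2) is concrete enough to sketch: under AIRM, the geodesic distance from an SPD matrix S to the identity equals the Frobenius norm of log(S). The snippet below illustrates that single step only, not the full RMTSISOM-SPDDR method:

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition,
    mapping it to the tangent space at the identity (a symmetric matrix)."""
    w, v = np.linalg.eigh(S)          # real eigenvalues, orthonormal eigenvectors
    return v @ np.diag(np.log(w)) @ v.T

def airm_distance_to_identity(S):
    """Under AIRM, the geodesic distance from S to I equals the Frobenius
    norm of log(S), i.e. the Euclidean norm of the tangent vector."""
    return np.linalg.norm(spd_log(S), "fro")
```

Because log(S) is itself symmetric, the tangent-space representation keeps the form of the original SPD data, which is the property the method exploits.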
Full article
(This article belongs to the Section Signal and Data Analysis)
Open AccessArticle
Towards an Efficient and Exact Algorithm for Dynamic Dedicated Path Protection
Entropy 2021, 23(9), 1116; https://doi.org/10.3390/e23091116 - 27 Aug 2021
Abstract
We present a novel algorithm for dynamic routing with dedicated path protection which, as the presented simulation results suggest, can be efficient and exact. We present the algorithm in the setting of optical networks, but it should be applicable to other networks, where services have to be protected, and where the network resources are finite and discrete, e.g., wireless radio or networks capable of advance resource reservation. To the best of our knowledge, we are the first to propose an algorithm for this long-standing fundamental problem that can be both efficient and exact. The algorithm can be efficient because it can solve large problems, and it can be exact because its results are optimal, as demonstrated and corroborated by simulations. We offer a worst-case analysis to argue that the search space is polynomially upper bounded. Network operations, management, and control require efficient and exact algorithms, especially now, when greater emphasis is placed on network performance, reliability, softwarization, agility, and return on investment. The proposed algorithm uses our generic Dijkstra algorithm on a search graph generated “on-the-fly” based on the input graph. We corroborated the optimality of the results of the proposed algorithm with brute-force enumeration for networks of up to 15 nodes. We present extensive simulation results of dedicated-path protection with signal modulation constraints for elastic optical networks of 25, 50, and 100 nodes, and with 160, 320, and 640 spectrum units. We also compare the bandwidth blocking probability with the commonly used edge-exclusion algorithm. We had 48,600 simulation runs with about 41 million searches.
Full article
(This article belongs to the Special Issue Advances in Computer Recognition, Image Processing and Communications, Selected Papers from CORES 2021 and IP&C 2021)
Open AccessArticle
A Drive towards Thermodynamic Efficiency for Dissipative Structures in Chemical Reaction Networks
Entropy 2021, 23(9), 1115; https://doi.org/10.3390/e23091115 - 27 Aug 2021
Abstract
Dissipative accounts of structure formation show that the self-organisation of complex structures is thermodynamically favoured, whenever these structures dissipate free energy that could not be accessed otherwise. These structures therefore open transition channels for the state of the universe to move from a frustrated, metastable state to another metastable state of higher entropy. However, these accounts apply as well to relatively simple, dissipative systems, such as convection cells, hurricanes, candle flames, lightning strikes, or mechanical cracks, as they do to complex biological systems. Conversely, interesting computational properties that characterize complex biological systems, such as efficient, predictive representations of environmental dynamics, can be linked to the thermodynamic efficiency of underlying physical processes. However, the potential mechanisms that underwrite the selection of dissipative structures with thermodynamically efficient subprocesses are not completely understood. We address these mechanisms by explaining how bifurcation-based, work-harvesting processes—required to sustain complex dissipative structures—might be driven towards thermodynamic efficiency. We first demonstrate a simple mechanism that leads to self-selection of efficient dissipative structures in a stochastic chemical reaction network, when the dissipated driving chemical potential difference is decreased. We then discuss how such a drive can emerge naturally in a hierarchy of self-similar dissipative structures, each feeding on the dissipative structures of a previous level, when moving away from the initial, driving disequilibrium.
Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
Open AccessArticle
A Deep Neural Network Method for Arterial Blood Flow Profile Reconstruction
Entropy 2021, 23(9), 1114; https://doi.org/10.3390/e23091114 - 27 Aug 2021
Abstract
Arterial stenosis reduces the blood flow to various organs or tissues, causing cardiovascular diseases. Although there are mature diagnostic techniques in clinical practice, they are not suitable for early cardiovascular disease prediction and monitoring due to their high cost and complex operation. In this paper, we studied the electromagnetic effect of arterial blood flow and proposed a method based on a deep neural network for arterial blood flow profile reconstruction. The potential difference and weight matrix are used as inputs to the method, and its output is an estimate of the internal blood flow velocity distribution for arterial blood flow profile reconstruction. First, the weight matrix is input into a convolutional auto-encoder (CAE) network to extract its features. Then, the weight matrix features and potential difference are combined to obtain the features of the blood velocity distribution. Finally, the velocity features are reconstructed into a blood flow velocity distribution by a convolutional neural network (CNN). All data sets are obtained from a model of the carotid artery with different rates of stenosis in a uniform magnetic field by COMSOL. The results show that the average root mean square error of the reconstruction results obtained by the proposed method is 0.0333, and the average correlation coefficient is 0.9721, which is better than the corresponding indicators of the Tikhonov, back propagation (BP) and CNN methods. The simulation results show that the proposed method can achieve high accuracy in blood flow profile reconstruction and is of great significance for the early diagnosis of arterial stenosis and other vessel diseases.
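The two evaluation metrics reported above are standard. A minimal sketch, assuming the true and reconstructed velocity fields are compared pointwise after flattening:

```python
import numpy as np

def reconstruction_metrics(true_field, reconstructed):
    """Root mean square error and Pearson correlation coefficient
    between a true velocity field and its reconstruction."""
    t, r = np.ravel(true_field), np.ravel(reconstructed)
    rmse = np.sqrt(np.mean((t - r) ** 2))
    corr = np.corrcoef(t, r)[0, 1]
    return rmse, corr
```

RMSE penalizes pointwise magnitude errors, while the correlation coefficient captures whether the spatial shape of the profile is recovered; reporting both, as the paper does, separates the two failure modes.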
Full article
(This article belongs to the Section Entropy and Biology)
Open AccessArticle
Coal and Rock Hardness Identification Based on EEMD and Multi-Scale Permutation Entropy
Entropy 2021, 23(9), 1113; https://doi.org/10.3390/e23091113 - 27 Aug 2021
Abstract
This study offers an efficient hardness identification approach to address the problem of poor real-time performance and accuracy in coal and rock hardness detection. To begin, Ensemble Empirical Mode Decomposition (EEMD) was performed on the current signal of the cutting motor to obtain a number of Intrinsic Mode Functions (IMFs). Further, the target signal was selected among the IMFs to reconstruct the current signal according to the energy density and correlation coefficient criteria. After that, the Multi-scale Permutation Entropy (MPE) of the reconstructed signal was trained by the Adaboost improved Back Propagation (BP) neural network, in order to establish the hardness recognition model. Finally, the cutting arm’s swing speed and the cutting head’s rotation speed were adjusted based on the coal and rock hardness. The simulation results indicated that using the energy density and correlation criterion to reconstruct the signal can successfully filter out noise interference. Compared to the BP model, the relative root-mean-square error of the Adaboost-BP model decreased by 0.0633, and the prediction results were more accurate. Additionally, the speed control strategy based on coal and rock hardness can ensure the efficient cutting of the roadheader.
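Permutation entropy, the core of the MPE features above, is straightforward to sketch; the multi-scale variant applies the same measure to coarse-grained copies of the signal. The following is a minimal illustrative version, not the authors' implementation:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, scaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # Ordinal pattern: the ranking of values inside the window.
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

def coarse_grain(x, scale):
    """Non-overlapping block averages used by the multi-scale variant."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)
```

A monotone signal contains a single ordinal pattern and scores 0, while white noise uses all patterns nearly uniformly and scores close to 1; the MPE feature vector is `[permutation_entropy(coarse_grain(x, s)) for s in scales]`.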
Full article
(This article belongs to the Section Signal and Data Analysis)
Open AccessArticle
Investigation of Cyber-Security and Cyber-Crimes in Oil and Gas Sectors Using the Innovative Structures of Complex Intuitionistic Fuzzy Relations
Entropy 2021, 23(9), 1112; https://doi.org/10.3390/e23091112 - 27 Aug 2021
Abstract
Recently, there has been enormous development due to advancements in technology. Industries and enterprises are moving towards digital systems, and the oil and gas industries are no exception. There are several threats and risks in digital systems, which are controlled through cyber-security. For the first time in the theory of fuzzy sets, this research analyzes the relationships between cyber-security and cyber-crimes in the oil and gas sectors. The novel concepts of complex intuitionistic fuzzy relations (CIFRs) are introduced. Moreover, the types of CIFRs are defined and their properties are discussed. In addition, an application is presented that uses the Hasse diagram to make a decision regarding the most suitable cyber-security techniques to implement in an industry. Furthermore, the advantages of the proposed methods are demonstrated by a comparative study.
Full article
(This article belongs to the Special Issue Fault Diagnosis Method Based on Information Theoretic: From Theory to Applications)
Open AccessArticle
Improved YOLO Based Detection Algorithm for Floating Debris in Waterway
Entropy 2021, 23(9), 1111; https://doi.org/10.3390/e23091111 - 27 Aug 2021
Abstract
Various kinds of floating debris in a waterway can serve as a visual index of water quality. Traditional image processing methods struggle to meet the requirements of real-time monitoring of floating debris because of the complexity of the environment, such as sunlight reflections, obstruction by water plants, and the large difference in scale between near and far targets. To address these issues, an improved YOLOv5s algorithm (FMA-YOLOv5s) is proposed that adds a feature map attention (FMA) layer at the end of the backbone. Mosaic data augmentation is applied to enhance the detection of small targets during training. A data expansion method is introduced that fuses labeled target objects extracted from the original training dataset with background images of a clean river surface from the actual scene, expanding the training dataset from 1920 to 4800 images. The accuracy and speed of six variants of the algorithm are compared, and the experiments show that the method meets the requirements of real-time object detection.
Full article
(This article belongs to the Special Issue Towards Image/Video Perception with Entropy-Aware Features and Its Applications)
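The abstract does not specify the FMA layer's architecture; as a rough illustration of what "feature map attention" at the end of a backbone typically means, here is a squeeze-and-excitation-style channel attention sketch in NumPy. The shapes and random weights are made up for the example.

```python
import numpy as np

# Hedged sketch of channel attention over a backbone feature map.
# The paper's FMA layer is not described in the abstract; this follows
# the common squeeze-and-excitation pattern instead.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) weights."""
    squeezed = feat.mean(axis=(1, 2))                        # global average pool -> (C,)
    excited = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # per-channel gates in (0, 1)
    return feat * excited[:, None, None]                     # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4): same shape, channels rescaled by learned gates
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting training emphasize the informative ones.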
Open AccessArticle
Robust Stabilization and Synchronization of a Novel Chaotic System with Input Saturation Constraints
Entropy 2021, 23(9), 1110; https://doi.org/10.3390/e23091110 - 27 Aug 2021
Abstract
In this paper, the robust stabilization and synchronization of a novel chaotic system are presented. First, the novel chaotic system is introduced, in which a sigmoidal function generates the chaotic behavior. A bifurcation analysis is provided: by varying three parameters of the system, the respective bifurcation plots are generated and examined to verify when the system lies in the stability region or in a chaotic regime. Then, a robust controller is designed to drive the system variables from the chaotic regime to stability, so that these variables reach the equilibrium point in finite time. The control law is obtained by selecting an appropriate robust control Lyapunov function. For synchronization purposes, the novel chaotic system is used as both drive and response system, with the error variable incorporated into a robust control Lyapunov function so as to drive this error to zero in finite time. In the control law design for both stabilization and synchronization, an extra state is introduced to keep the saturated-input sector condition mathematically tractable. Numerical experiments and simulation results are presented, along with the corresponding discussion and conclusions.
Full article
(This article belongs to the Special Issue Complex Dynamic System Modelling, Identification and Control)
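As a toy illustration of Lyapunov-based stabilization under input saturation (the paper's system is three-dimensional and its controller far more elaborate), consider a scalar system with a sigmoidal nonlinearity and a saturated feedback input:

```python
import math

# Hedged sketch: stabilizing x' = tanh(x) + u with a saturated input.
# For V = x^2/2 and unsaturated u = -k*x with k > 1,
# V' = x*tanh(x) - k*x^2 < 0 for all x != 0, so x is driven to 0.

def sat(u, limit=2.0):
    """Input saturation constraint: clip the control signal to [-limit, limit]."""
    return max(-limit, min(limit, u))

def simulate(x0=1.5, k=3.0, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        u = sat(-k * x)               # saturated linear feedback
        x += dt * (math.tanh(x) + u)  # forward-Euler step of x' = tanh(x) + u
    return x

print(f"final state: {simulate():.4f}")  # close to the equilibrium x = 0
```

Even while the input is saturated (|x| > limit/k), tanh(x) is bounded by 1 < limit, so the state still moves toward the origin; once the input unsaturates, convergence is exponential.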
Open AccessArticle
Application of the Free Tangent Law in Quantification of Household Satisfaction from Durable Consumer Goods
Entropy 2021, 23(9), 1109; https://doi.org/10.3390/e23091109 - 26 Aug 2021
Abstract
This paper postulates a new approach to measuring households' satisfaction with durable consumer goods, based on a modified inflation-expectation measurement method used in survey research. The authors examine the application of a three-step qualitative evaluation, followed by the quantification of responses using a modified Carlson and Parkin method adapted to the context of the free tangent law.
Full article
(This article belongs to the Section Complexity)
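The classic Carlson and Parkin step that the paper modifies can be sketched numerically: qualitative "up / same / down" response shares are mapped onto a latent normal variable with an indifference interval (−δ, δ). The share values and δ below are purely illustrative, and the paper's "free tangent law" modification is not shown.

```python
from statistics import NormalDist

# Hedged sketch of the classic Carlson-Parkin quantification.
# Respondents report only "up" / "same" / "down"; a normal distribution
# with indifference interval (-delta, delta) is fitted to those shares.

def carlson_parkin(share_up, share_down, delta=0.5):
    """Return the implied mean and std of the latent normal variable."""
    z_up = NormalDist().inv_cdf(1.0 - share_up)  # from P(X > delta) = share_up
    z_down = NormalDist().inv_cdf(share_down)    # from P(X < -delta) = share_down
    sigma = 2.0 * delta / (z_up - z_down)        # solve the two quantile equations
    mu = delta - sigma * z_up
    return mu, sigma

mu, sigma = carlson_parkin(share_up=0.5, share_down=0.2)
print(round(mu, 3), round(sigma, 3))
```

A sanity check of the method: when the "up" and "down" shares are equal, symmetry forces the implied mean to zero.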
Open AccessArticle
The Ring-LWE Problem in Lattice-Based Cryptography: The Case of Twisted Embeddings
Entropy 2021, 23(9), 1108; https://doi.org/10.3390/e23091108 - 26 Aug 2021
Abstract
Several works have characterized weak instances of the Ring-LWE problem by exploring vulnerabilities arising from the use of algebraic structures. Although these weak instances are not addressed by worst-case hardness theorems, enabling other ring instantiations enlarges the scope of possible applications and favors the diversification of security assumptions. In this work, we extend the Ring-LWE problem in lattice-based cryptography to include algebraic lattices, realized through twisted embeddings. We define the class of problems Twisted Ring-LWE, which replaces the canonical embedding by an extended form. By doing so, we allow the Ring-LWE problem to be used over maximal real subfields of cyclotomic number fields. We prove that Twisted Ring-LWE is secure by providing a security reduction from Ring-LWE to Twisted Ring-LWE in both search and decision forms. It is also shown that the twist factor does not affect the asymptotic approximation factors in the worst-case to average-case reductions. Thus, Twisted Ring-LWE maintains the consolidated hardness guarantee of Ring-LWE and increases the existing scope of algebraic lattices that can be considered for cryptographic applications. Additionally, we extend the results of Ducas and Durmus (Public-Key Cryptography, 2012) on spherical Gaussian distributions to the proposed class of lattices under certain restrictions. As a result, sampling from a spherical Gaussian distribution can be done directly in the respective number field while maintaining its format and standard deviation when viewed via twisted embeddings.
Full article
(This article belongs to the Special Issue Applications of Codes and Lattices in Cryptography and Wireless Communications)
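As background for what it means to "twist" the embedding, here is a minimal numeric sketch of the canonical embedding of the 8th cyclotomic ring Z[x]/(x⁴ + 1), evaluation at the primitive 8th roots of unity, together with a coordinatewise rescaling of it. The twist factors below are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch: canonical embedding of Z[x]/(x^4 + 1) and a "twisted"
# (coordinatewise rescaled) variant. Twist factors are illustrative only.

roots = np.exp(2j * np.pi * np.array([1, 3, 5, 7]) / 8)  # primitive 8th roots of unity

def canonical(coeffs):
    """Evaluate sum_i coeffs[i] * z^i at each primitive root."""
    return np.array([sum(c * z**i for i, c in enumerate(coeffs)) for z in roots])

def twisted(coeffs, twist):
    """Twisted embedding: rescale each canonical coordinate by sqrt(twist_i)."""
    return np.sqrt(twist) * canonical(coeffs)

a = [1, 2, 0, 1]                      # a ring element by its coefficient vector
t = np.array([1.0, 2.0, 2.0, 1.0])    # hypothetical positive twist factors
print(np.round(np.abs(twisted(a, t)), 3))
```

The canonical embedding is a ring homomorphism (multiplication becomes coordinatewise), and the twist only changes the lattice geometry, not the algebra: with all twist factors equal to 1 it reduces to the canonical embedding.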
Open AccessArticle
Quantum and Classical Ergotropy from Relative Entropies
Entropy 2021, 23(9), 1107; https://doi.org/10.3390/e23091107 - 25 Aug 2021
Abstract
The quantum ergotropy quantifies the maximal amount of work that can be extracted from a quantum state without changing its entropy. Given that the ergotropy can be expressed as the difference of quantum and classical relative entropies of the quantum state with respect to the thermal state, we define the classical ergotropy, which quantifies how much work can be extracted from distributions that are inhomogeneous on the energy surfaces. A unified approach to treating both quantum and classical scenarios is provided by geometric quantum mechanics, for which we define the geometric relative entropy. The analysis is concluded with an application of the conceptual insight to conditional thermal states and to the correspondingly tightened maximum work theorem.
Full article
(This article belongs to the Special Issue Thermodynamics of Quantum Information)
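The ergotropy admits a compact numeric check via its standard characterization: W = tr(ρH) − tr(πH), where the passive state π pairs ρ's eigenvalues in decreasing order with H's energies in increasing order. A minimal sketch, with an illustrative qubit state (not from the paper):

```python
import numpy as np

# Hedged sketch of the quantum ergotropy: maximal unitarily extractable work,
# W = tr(rho H) - tr(pi H), where the passive state pi carries rho's
# eigenvalues (decreasing) against H's eigenenergies (increasing).

def ergotropy(rho, H):
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, decreasing
    e = np.sort(np.linalg.eigvalsh(H))          # energies, increasing
    passive_energy = float(p @ e)               # lowest energy reachable unitarily
    return float(np.trace(rho @ H).real) - passive_energy

# Qubit example: fully excited state |1><1| with H = diag(0, 1).
rho = np.array([[0.0, 0.0], [0.0, 1.0]])
H = np.diag([0.0, 1.0])
print(ergotropy(rho, H))  # 1.0: all the stored energy is extractable
```

Conversely, for the ground state (already passive) the ergotropy vanishes, consistent with no work being unitarily extractable without changing the state's entropy.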
Topics
Topic in
Life, Entropy
Stochastic Models and Experiments in Ecology and Biology
Editor-in-Chief: Sandro Azaele. Deadline: 20 September 2021
Topic in
Energies, JNE, Entropy, Sci
Nuclear Energy Systems
Editors-in-Chief: Dan Gabriel Cacuci, Michael M.R. Williams, Andrew Buchan, Ruixian Fang. Deadline: 30 June 2022
Topic in
Applied Sciences, Biomedicines, Entropy, JPM
eHealth and mHealth: Challenges and Prospects
Editor-in-Chief: Anton Civit. Deadline: 31 December 2022
Topic in
Applied Sciences, Entropy, JSAN, Sustainability
Artificial Intelligence and Sustainable Energy Systems
Editor-in-Chief: Luis Hernández-Callejo. Deadline: 31 January 2023
Special Issues
Special Issue in
Entropy
Symbolic Entropy Analysis and Its Applications II
Guest Editor: Raúl Alcaraz. Deadline: 30 August 2021
Special Issue in
Entropy
Thermodynamics and Entropy for Self-Assembly and Self-Organization
Guest Editor: Jordi Faraudo. Deadline: 15 September 2021
Special Issue in
Entropy
Entropy: The Scientific Tool of the 21st Century
Guest Editor: José A. Tenreiro Machado. Deadline: 30 September 2021
Special Issue in
Entropy
The Second Law and Asymmetry of Time
Guest Editors: Alexander Klimenko, Debra Bernhardt. Deadline: 15 October 2021
Topical Collections
Topical Collection in
Entropy
Locality and Non-Locality, and Their Relation to Time
Collection Editor: Gregg Jaeger
Topical Collection in
Entropy
Feature Papers in Information Theory
Collection Editors: Raúl Alcaraz, Luca Faes, Leandro Pardo, Boris Ryabko