Entropy, Volume 23, Issue 11 (November 2021) – 185 articles

Cover Story: Different arguments have led to supposing that the deep origin of phase transitions has to be identified with suitable topological changes of potential-related submanifolds of the configuration space of a physical system. An important step forward for this approach was achieved with two theorems stating that, for a wide class of physical systems, phase transitions should necessarily stem from topological changes of energy level submanifolds of the phase space. However, the sufficiency conditions are still a wide-open question. In this study, a first important step forward was performed in this direction; in fact, a differential equation was worked out which describes how entropy varies as a function of total energy.
Article
Optical Channel Selection Avoiding DIPP in DSB-RFoF Fronthaul Interface
Entropy 2021, 23(11), 1554; https://doi.org/10.3390/e23111554 - 22 Nov 2021
Cited by 1 | Viewed by 625
Abstract
The paper presents a method of selecting an optical channel for transporting the double-sideband radio-frequency-over-fiber (DSB-RFoF) radio signal over the optical fronthaul path, avoiding the dispersion-induced power penalty (DIPP) phenomenon. The presented method complements the possibilities of a short-range optical network working in the flexible dense wavelength division multiplexing (DWDM) format, where chromatic dispersion compensation is not applied. As part of the study, calculations were made that indicate the limitations of the proposed method and allow for the development of an algorithm for effective optical channel selection in the presence of the DIPP phenomenon experienced in the optical link working in the intensity modulation–direct detection (IM-DD) technique. Calculations were made for three types of single-mode optical fibers and for selected microwave radio carriers that are used in current systems or will be used in next-generation wireless communication systems. In order to verify the calculations and theoretical considerations, a computer simulation was performed for two types of optical fibers and for two selected radio carriers. In the modulated radio signal, the cyclic-prefix orthogonal frequency division multiplexing (CP-OFDM) format and the 5G numerology were used.
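As a rough illustration of the DIPP mechanism the paper works around: in a DSB IM-DD link, the detected RF power follows the standard fade law P_RF ∝ cos²(π·D·L·λ²·f²/c), producing periodic power nulls along the fiber. The sketch below locates the first fade null for standard SMF; the fiber and carrier values (D = 17 ps/nm/km, 1550 nm, 28 GHz) are illustrative assumptions, not values taken from the paper.

```python
import math

C = 3e8  # speed of light, m/s

def dipp_fade_db(D_ps_nm_km, L_km, lam_nm, f_rf_hz):
    """RF power fade (dB) of a DSB IM-DD link due to chromatic dispersion.

    Standard small-signal result: P_RF ∝ cos^2(pi * D * L * lam^2 * f^2 / c).
    """
    D = D_ps_nm_km * 1e-6        # ps/(nm*km) -> s/m^2
    L = L_km * 1e3               # km -> m
    lam = lam_nm * 1e-9          # nm -> m
    arg = math.pi * D * L * lam ** 2 * f_rf_hz ** 2 / C
    fade = math.cos(arg) ** 2
    return -10 * math.log10(max(fade, 1e-30))

def first_null_km(D_ps_nm_km, lam_nm, f_rf_hz):
    """Distance of the first complete fade null: arg = pi/2 -> L = c / (2 D lam^2 f^2)."""
    D = D_ps_nm_km * 1e-6
    lam = lam_nm * 1e-9
    return C / (2 * D * lam ** 2 * f_rf_hz ** 2) / 1e3

# Example: standard SMF (D = 17 ps/nm/km), 1550 nm, 28 GHz mmWave carrier
print(round(first_null_km(17, 1550, 28e9), 2), "km")
```

For these assumed values the first null falls within a few kilometers, which is why uncompensated short-range fronthaul must select carriers/channels that keep the link away from a fade null.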

Article
A Comparative Study of Functional Connectivity Measures for Brain Network Analysis in the Context of AD Detection with EEG
Entropy 2021, 23(11), 1553; https://doi.org/10.3390/e23111553 - 22 Nov 2021
Cited by 4 | Viewed by 800
Abstract
This work addresses brain network analysis considering different clinical severity stages of cognitive dysfunction, based on resting-state electroencephalography (EEG). We use a cohort acquired in real-life clinical conditions, which contains EEG data of subjective cognitive impairment (SCI) patients, mild cognitive impairment (MCI) patients, and Alzheimer’s disease (AD) patients. We propose to exploit an epoch-based entropy measure to quantify the connectivity links in the networks. This entropy measure relies on a refined statistical modeling of EEG signals with Hidden Markov Models, which allow a better estimation of the spatiotemporal characteristics of EEG signals. We also propose to conduct a comparative study by considering three other measures largely used in the literature: phase lag index, coherence, and mutual information. We calculated such measures at different frequency bands and computed different local graph parameters considering different proportional threshold values for a binary network analysis. After applying a feature selection procedure to determine the most relevant features for classification performance with a linear Support Vector Machine algorithm, our study demonstrates the effectiveness of the statistical entropy measure for analyzing the brain network in patients with different stages of cognitive dysfunction.
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
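Of the compared connectivity measures, the phase lag index (PLI) has a particularly compact definition: PLI = |⟨sign(sin Δφ)⟩| over the instantaneous phase difference of two channels. A minimal sketch on hypothetical phase series (not the paper's pipeline; in practice the phases come from a Hilbert transform of band-filtered EEG):

```python
import math

def phase_lag_index(phase_a, phase_b):
    """Phase lag index (Stam et al.): PLI = |mean(sign(sin(phase_a - phase_b)))|.

    0 = no consistent lag between the channels, 1 = perfectly consistent
    nonzero lag; insensitive to zero-lag (volume-conduction) coupling.
    """
    signs = []
    for pa, pb in zip(phase_a, phase_b):
        s = math.sin(pa - pb)
        signs.append(0.0 if s == 0 else math.copysign(1.0, s))
    return abs(sum(signs) / len(signs))

# Toy example: channel b consistently lags channel a by 0.4 rad -> PLI = 1
a = [0.01 * t for t in range(1000)]
b = [p - 0.4 for p in a]
print(phase_lag_index(a, b))
```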

Article
The Downlink Performance for Cell-Free Massive MIMO with Instantaneous CSI in Slowly Time-Varying Channels
Entropy 2021, 23(11), 1552; https://doi.org/10.3390/e23111552 - 22 Nov 2021
Cited by 1 | Viewed by 511
Abstract
In centralized massive multiple-input multiple-output (MIMO) systems, the channel hardening phenomenon can occur, in which the channel behaves as almost fully deterministic as the number of antennas increases. Nevertheless, in a cell-free massive MIMO system, the channel is less deterministic. In this paper, we propose using instantaneous channel state information (CSI) instead of statistical CSI to obtain the power control coefficient in cell-free massive MIMO. Access points (APs) and user equipment (UE) have sufficient time to obtain instantaneous CSI in a slowly time-varying channel environment. We derive the achievable downlink rate under instantaneous CSI for frequency division duplex (FDD) cell-free massive MIMO systems and apply the results to the power control coefficients. For FDD systems, quantized channel coefficients are proposed to reduce feedback overhead. The simulation results show that the spectral efficiency performance when using instantaneous CSI is approximately three times higher than that achieved using statistical CSI.
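The channel-hardening contrast the abstract draws can be illustrated with a toy Monte Carlo: for i.i.d. Rayleigh fading across M colocated antennas, the relative spread of the antenna-averaged channel gain shrinks as 1/√M, so the effective channel "hardens". This is a generic illustration under standard assumptions, not the paper's cell-free system model:

```python
import random
import statistics

def relative_std_of_gain(M, trials=2000, seed=1):
    """Relative std of the antenna-averaged gain ||h||^2 / M for i.i.d.
    Rayleigh fading (|h_i|^2 ~ Exp(1)).

    Theory: std/mean = 1/sqrt(M), i.e. the channel hardens as M grows.
    """
    rng = random.Random(seed)
    gains = [
        sum(rng.expovariate(1.0) for _ in range(M)) / M
        for _ in range(trials)
    ]
    return statistics.stdev(gains) / statistics.mean(gains)

for M in (1, 16, 64):
    print(M, round(relative_std_of_gain(M), 3))
```

In a cell-free deployment the per-AP antenna counts are small and path losses differ, so this averaging effect is much weaker — which is the motivation the abstract gives for using instantaneous rather than statistical CSI.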

Article
Performances of Transcritical Power Cycles with CO2-Based Mixtures for the Waste Heat Recovery of ICE
Entropy 2021, 23(11), 1551; https://doi.org/10.3390/e23111551 - 21 Nov 2021
Cited by 1 | Viewed by 524
Abstract
In the waste heat recovery of the internal combustion engine (ICE), the transcritical CO2 power cycle still faces high operating pressure and difficulty in condensation. To overcome these challenges, CO2 is mixed with organic fluids to form zeotropic mixtures. Thus, in this work, five organic fluids, namely R290, R600a, R600, R601a, and R601, are mixed with CO2. Mixture performance in the waste heat recovery of ICE is evaluated, based on two transcritical power cycles, namely the recuperative cycle and the split cycle. The results show that the split cycle always performs better than the recuperative cycle. Under design conditions, CO2/R290 (0.3/0.7) has the best performance in the split cycle. The corresponding net work and cycle efficiency are 21.05 kW and 20.44%, respectively. Furthermore, the effects of key parameters such as turbine inlet temperature, turbine inlet pressure, and split ratio on the cycle performance are studied. With the increase of turbine inlet temperature, the net work of both the recuperative cycle and the split cycle firstly increases and then decreases; there exist peak values of net work in both cycles. Meanwhile, the net work of the split cycle firstly increases and then decreases with the increase of the split ratio. Thereafter, with the target of maximizing net work, these key parameters are optimized at different mass fractions of CO2. The optimization results show that CO2/R600 obtains the highest net work of 27.43 kW at a CO2 mass fraction of 0.9 in the split cycle.
(This article belongs to the Special Issue Supercritical Fluids for Thermal Energy Applications)

Article
An Improved K-Means Algorithm Based on Evidence Distance
Entropy 2021, 23(11), 1550; https://doi.org/10.3390/e23111550 - 21 Nov 2021
Cited by 1 | Viewed by 728
Abstract
The main influencing factors of the clustering effect of the k-means algorithm are the selection of the initial clustering center and the distance measurement between the sample points. The traditional k-means algorithm uses Euclidean distance to measure the distance between sample points; thus it suffers from low differentiation of attributes between sample points and is prone to local optimal solutions. To address this, this paper proposes an improved k-means algorithm based on evidence distance. Firstly, the attribute values of sample points are modelled as the basic probability assignment (BPA) of sample points. Then, the traditional Euclidean distance is replaced by the evidence distance for measuring the distance between sample points, and finally k-means clustering is carried out using UCI data. Experimental comparisons are made with the traditional k-means algorithm, the k-means algorithm based on the aggregation distance parameter, and the Gaussian mixture model. The experimental results show that the improved k-means algorithm based on evidence distance proposed in this paper has a better clustering effect and the convergence of the algorithm is also better.
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing)
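In the Dempster–Shafer literature, the standard evidence distance between basic probability assignments is the Jousselme distance, d = √(½·(m₁−m₂)ᵀD(m₁−m₂)) with the Jaccard similarity matrix D[A][B] = |A∩B|/|A∪B|. A minimal sketch under the assumption that this is the distance meant; the paper's exact construction of BPAs from attribute values is not reproduced here:

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance between two BPAs over the same frame of discernment.

    BPAs are dicts mapping frozenset focal elements to masses summing to 1.
    d = sqrt(0.5 * (m1-m2)^T D (m1-m2)), D[A][B] = |A∩B| / |A∪B| (Jaccard).
    """
    focal = sorted(set(m1) | set(m2), key=sorted)
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal]
    total = 0.0
    for i, A in enumerate(focal):
        for j, B in enumerate(focal):
            jac = len(A & B) / len(A | B)  # Jaccard similarity of focal sets
            total += diff[i] * jac * diff[j]
    return math.sqrt(0.5 * total)

# Toy 2-element frame {a, b}: full certainty on disjoint singletons -> distance 1
m1 = {frozenset({"a"}): 1.0}
m2 = {frozenset({"b"}): 1.0}
print(jousselme_distance(m1, m1))  # identical BPAs -> 0.0
print(jousselme_distance(m1, m2))  # conflicting certainty -> 1.0
```

Unlike a plain Euclidean distance over masses, the Jaccard term rewards overlap between focal sets, which is what lets the clustering differentiate attributes that Euclidean distance treats identically.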

Article
Age of Information of Parallel Server Systems with Energy Harvesting
Entropy 2021, 23(11), 1549; https://doi.org/10.3390/e23111549 - 21 Nov 2021
Viewed by 392
Abstract
Motivated by current communication networks in which users can choose different transmission channels to operate and also by the recent growth of renewable energy sources, we study the average Age of Information of a status update system that is formed by two parallel homogeneous servers and such that there is an energy source that feeds the system following a random process. An update, after getting service, is delivered to the monitor if there is energy in a battery. However, if the battery is empty, the status update is lost. We allow preemption of updates in service and we assume Poisson generation times of status updates and exponential service times. We show that the average Age of Information can be characterized by solving a system with eight linear equations. Then, we show that, when the arrival rate to both servers is large, the average Age of Information is one divided by the sum of the service rates of the servers. We also perform a numerical analysis to compare the performance of our model with that of a single server with energy harvesting and to study in detail the aforementioned convergence result.
(This article belongs to the Special Issue Age of Information: Concept, Metric and Tool for Network Control)

Article
Geometric Analysis of a System with Chemical Interactions
Entropy 2021, 23(11), 1548; https://doi.org/10.3390/e23111548 - 21 Nov 2021
Viewed by 414
Abstract
In this paper, we present some initial results aimed at defining a framework for the analysis of thermodynamic systems with additional restrictions imposed on the intensive parameters. Specifically, for the case of chemical reactions, we considered the states of constant affinity that form isoaffine submanifolds of the thermodynamic phase space. We discuss the problem of extending the previously obtained stability conditions to the considered class of systems.
(This article belongs to the Special Issue Geometric Structure of Thermodynamics: Theory and Applications)
Article
Hierarchical Classification of Event-Related Potentials for the Recognition of Gender Differences in the Attention Task
Entropy 2021, 23(11), 1547; https://doi.org/10.3390/e23111547 - 20 Nov 2021
Viewed by 711
Abstract
Research on the functioning of human cognition has been a crucial problem studied for years. Electroencephalography (EEG) classification methods may serve as a precious tool for understanding the temporal dynamics of human brain activity, and the purpose of such an approach is to increase the statistical power of the differences between conditions that are too weak to be detected using standard EEG methods. Following that line of research, in this paper, we focus on recognizing gender differences in the functioning of the human brain in the attention task. For that purpose, we gathered, analyzed, and finally classified event-related potentials (ERPs). We propose a hierarchical approach, in which the electrophysiological signal preprocessing is combined with the classification method, enriched with a segmentation step, which creates a full line of electrophysiological signal classification during an attention task. This approach allowed us to detect differences between men and women in the P3 waveform, an ERP component related to attention, which were not observed using standard ERP analysis. The results provide evidence for the high effectiveness of the proposed method, which outperformed a traditional statistical analysis approach. This is a step towards understanding neuronal differences between men’s and women’s brains during cognition, aiming to reduce the misdiagnosis and adverse side effects in underrepresented women groups in health and biomedical research.

Article
Winsorization for Robust Bayesian Neural Networks
Entropy 2021, 23(11), 1546; https://doi.org/10.3390/e23111546 - 20 Nov 2021
Cited by 2 | Viewed by 615
Abstract
With the advent of big data and the popularity of black-box deep learning methods, it is imperative to address the robustness of neural networks to noise and outliers. We propose the use of Winsorization to recover model performances when the data may have outliers and other aberrant observations. We provide a comparative analysis of several probabilistic artificial intelligence and machine learning techniques for supervised learning case studies. Broadly, Winsorization is a versatile technique for accounting for outliers in data. However, different probabilistic machine learning techniques have different levels of efficiency when used on outlier-prone data, with or without Winsorization. We notice that Gaussian processes are extremely vulnerable to outliers, while deep learning techniques in general are more robust.
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)
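Winsorization itself is straightforward: extreme observations are clamped to chosen percentile values rather than discarded, limiting their influence on downstream models. A minimal percentile-clipping sketch (the 5%/95% cutoffs are illustrative assumptions, not the paper's settings):

```python
def winsorize(values, lower_pct=5, upper_pct=95):
    """Clamp values outside the given percentiles to the percentile values.

    Simple symmetric Winsorization: extreme observations are pulled in
    rather than removed, so sample size is preserved while outlier
    influence is bounded.
    """
    s = sorted(values)
    n = len(s)
    lo = s[max(0, int(n * lower_pct / 100))]
    hi = s[min(n - 1, int(n * upper_pct / 100))]
    return [min(max(v, lo), hi) for v in values]

data = list(range(100)) + [10_000]   # one gross outlier
clean = winsorize(data)
print(max(data), max(clean))         # the outlier is clamped to the 95th percentile
```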

Article
Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning
Entropy 2021, 23(11), 1545; https://doi.org/10.3390/e23111545 - 20 Nov 2021
Cited by 3 | Viewed by 730
Abstract
Deep Gaussian Processes (DGPs) were proposed as an expressive Bayesian model capable of a mathematically grounded estimation of uncertainty. The expressivity of DGPs results not only from the compositional character but also from the distribution propagation within the hierarchy. Recently, it was pointed out that the hierarchical structure of the DGP is well suited to modeling multi-fidelity regression, in which one is provided with sparse high-precision observations and plenty of low-fidelity observations. We propose the conditional DGP model in which the latent GPs are directly supported by the fixed lower-fidelity data. The moment matching method is then applied to approximate the marginal prior of the conditional DGP with a GP. The obtained effective kernels are implicit functions of the lower-fidelity data, manifesting the expressivity contributed by distribution propagation within the hierarchy. The hyperparameters are learned via optimizing the approximate marginal likelihood. Experiments with synthetic and high-dimensional data show comparable performance against other multi-fidelity regression methods, variational inference, and multi-output GPs. We conclude that, with the low-fidelity data and the hierarchical DGP structure, the effective kernel encodes the inductive bias for the true function, allowing compositional freedom.
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)

Article
Intrinsic Entropy of Squeezed Quantum Fields and Nonequilibrium Quantum Dynamics of Cosmological Perturbations
Entropy 2021, 23(11), 1544; https://doi.org/10.3390/e23111544 - 20 Nov 2021
Cited by 7 | Viewed by 617
Abstract
Density contrasts in the universe are governed by scalar cosmological perturbations which, when expressed in terms of gauge-invariant variables, contain a classical component from scalar metric perturbations and a quantum component from inflaton field fluctuations. It has long been known that the effect of cosmological expansion on a quantum field amounts to squeezing. Thus, the entropy of cosmological perturbations can be studied by treating them in the framework of squeezed quantum systems. Entropy of a free quantum field is a seemingly simple yet subtle issue. In this paper, different from previous treatments, we tackle this issue with a fully developed nonequilibrium quantum field theory formalism for such systems. We compute the covariance matrix elements of the parametric quantum field and solve for the evolution of the density matrix elements and the Wigner functions, and, from them, derive the von Neumann entropy. We then show explicitly why the entropy for the squeezed yet closed system is zero, but is proportional to the particle number produced upon coarse-graining out the correlation between the particle pairs. We also construct the bridge between our quantum field-theoretic results and those using the probability distribution of classical stochastic fields by earlier authors, preserving some important quantum properties, such as entanglement and coherence, of the quantum field.
(This article belongs to the Special Issue Entropy Measures and Applications in Astrophysics)
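The stated link between entropy and produced particle number can be made concrete with the textbook Gaussian-state formula S = (n+1)·ln(n+1) − n·ln n for a coarse-grained (thermal-like) mode with mean particle number n: it vanishes for the pure closed-system case n = 0 and grows monotonically (≈ ln n + 1 for large n). This is the standard expression for illustration, not the paper's field-theoretic derivation:

```python
import math

def gaussian_entropy(n):
    """Von Neumann entropy of a coarse-grained Gaussian (thermal-like) mode
    with mean particle number n:  S = (n+1) ln(n+1) - n ln(n)."""
    if n == 0:
        return 0.0  # pure state: zero entropy, as for the closed squeezed system
    return (n + 1) * math.log(n + 1) - n * math.log(n)

for n in (0, 1, 10, 1000):
    print(n, round(gaussian_entropy(n), 4))
```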

Article
Assumption-Free Derivation of the Bell-Type Criteria of Contextuality/Nonlocality
Entropy 2021, 23(11), 1543; https://doi.org/10.3390/e23111543 - 19 Nov 2021
Viewed by 538
Abstract
Bell-type criteria of contextuality/nonlocality can be derived without any falsifiable assumptions, such as context-independent mapping (or local causality), free choice, or no-fine-tuning. This is achieved by deriving Bell-type criteria for inconsistently connected systems (i.e., those with disturbance/signaling), based on the generalized definition of contextuality in the contextuality-by-default approach, and then specializing these criteria to consistently connected systems.
Review
Role-Aware Information Spread in Online Social Networks
Entropy 2021, 23(11), 1542; https://doi.org/10.3390/e23111542 - 19 Nov 2021
Cited by 2 | Viewed by 902
Abstract
Understanding the complex process of information spread in online social networks (OSNs) enables the efficient maximization/minimization of the spread of useful/harmful information. Users assume various roles based on their behaviors while engaging with information in these OSNs. Recent reviews on information spread in OSNs have focused on algorithms and challenges for modeling the local node-to-node cascading paths of viral information. However, they neglected to analyze non-viral information with low reach size that can also spread globally beyond OSN edges (links) via non-neighbors through, for example, pushed information via content recommendation algorithms. Previous reviews have also not fully considered user roles in the spread of information. To address these gaps, we: (i) provide a comprehensive survey of the latest studies on role-aware information spread in OSNs, also addressing the different temporal spreading patterns of viral and non-viral information; (ii) survey modeling approaches that consider structural, non-structural, and hybrid features, and provide a taxonomy of these approaches; (iii) review software platforms for the analysis and visualization of role-aware information spread in OSNs; and (iv) describe how information spread models enable useful applications in OSNs such as detecting influential users. We conclude by highlighting future research directions for studying information spread in OSNs, accounting for dynamic user roles.
(This article belongs to the Special Issue Role-Aware Analysis of Complex Networks)

Article
An Approach to Growth Delimitation of Straight Line Segment Classifiers Based on a Minimum Bounding Box
Entropy 2021, 23(11), 1541; https://doi.org/10.3390/e23111541 - 19 Nov 2021
Viewed by 525
Abstract
Several supervised machine learning algorithms focused on binary classification for solving daily problems can be found in the literature. The straight-line segment classifier stands out for its low complexity and competitiveness, compared to well-known conventional classifiers. This binary classifier is based on distances between points and two labeled sets of straight-line segments. Its training phase consists of finding the placement of labeled straight-line segment extremities (and consequently, their lengths) which gives the minimum mean square error. However, during the training phase, the straight-line segment lengths can grow significantly, having a negative impact on the classification rate. Therefore, this paper proposes an approach for adjusting the placements of labeled straight-line segment extremities to build reliable classifiers in a constrained search space (tuned by a scale factor parameter) in order to restrict their lengths. Ten artificial datasets and eight datasets from the UCI Machine Learning Repository were used to show that our approach yields promising results, compared to other classifiers. We conclude that this classifier can be used in industry for decision-making problems, due to its straightforward interpretation and classification rates.
(This article belongs to the Special Issue Machine Learning Ecosystems: Opportunities and Threats)
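The classifier's basic geometric primitive is the Euclidean distance from a sample point to a labeled straight-line segment. A minimal sketch with a simplified nearest-segment decision rule — the actual classifier aggregates the distances to the two labeled sets differently, and all names here are illustrative:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab (2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)  # degenerate segment = a point
    # Project p onto the line through a,b and clamp the parameter to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def classify(p, segments_pos, segments_neg):
    """Simplified rule: assign the label of the nearest labeled segment."""
    d_pos = min(point_segment_distance(p, a, b) for a, b in segments_pos)
    d_neg = min(point_segment_distance(p, a, b) for a, b in segments_neg)
    return +1 if d_pos <= d_neg else -1

print(point_segment_distance((0, 1), (-1, 0), (1, 0)))  # perpendicular drop -> 1.0
```

Because the decision boundary is induced by a handful of segments, both the boundary and the role of each segment are directly interpretable, which is the property the abstract highlights.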

Article
Entropy-Based Shear Stress Distribution in Open Channel for All Types of Flow Using Experimental Data
Entropy 2021, 23(11), 1540; https://doi.org/10.3390/e23111540 - 19 Nov 2021
Viewed by 458
Abstract
Korean river design standards set general design standards for rivers and river-related projects in Korea, which systematize the technologies and methods involved in such projects. They include measurement methods for parts necessary for river design, but no information on shear stress. Shear stress is one of the most important hydraulic factors in river design and operation, especially for artificial channel design. It is calculated from the frictional force caused by viscosity and fluctuating fluid velocity. Current methods are based on past calculations, but factors such as boundary shear stress or the energy gradient are difficult to actually measure or estimate, and the point velocity throughout the entire cross-section is needed to calculate the velocity gradient. For this reason, the current Korean river design standards use tractive force and critical tractive force instead of shear stress. However, the tractive force is difficult to calculate exactly due to the limitations of its formula, and in practice it relies on an empirically identified base value. This paper therefore focuses on modeling the shear stress distribution in open-channel turbulent flow using entropy theory and suggests a shear stress distribution formula that can easily be used in practice after calculating the river-specific factor T. The tractive force and critical tractive force in the Korean river design standards should be replaced by the shear stress obtained by the proposed method.
The shear stress distribution model is tested using a wide range of forty-two experimental runs collected from the literature. An error analysis is then performed to further evaluate the accuracy of the proposed model. The results reveal a correlation coefficient of approximately 0.95–0.99, indicating that the proposed method can estimate the shear stress distribution accurately. After calculating the river-specific factors, the distribution of shear stress shows a correlation coefficient of about 0.86 to 0.98, which suggests that the equation can be applied in practice.
(This article belongs to the Section Information Theory, Probability and Statistics)

Article
First Integrals of Shear-Free Fluids and Complexity
Entropy 2021, 23(11), 1539; https://doi.org/10.3390/e23111539 - 19 Nov 2021
Cited by 2 | Viewed by 413
Abstract
A single master equation governs the behaviour of shear-free neutral perfect fluid distributions arising in gravity theories. In this paper, we study the integrability of y_xx = f(x)y^2, find new solutions, and generate a new first integral. The first integral is subject to an integrability condition, which is an integral equation restricting the function f(x). We find that the integrability condition can be written as a third-order differential equation whose solution can be expressed in terms of elementary functions and elliptic integrals. The solution of the integrability condition is generally given parametrically. A particular form of f(x) = (1/x^5)(1/(1 − x)^(15/7)), which corresponds to repeated roots of a cubic equation, is given explicitly, which is a new result. Our investigation demonstrates that the complexity of a self-gravitating shear-free fluid is related to the existence of a first integral, and this may be extendable to general matter distributions.
(This article belongs to the Special Issue Complexity of Self-Gravitating Systems)
Article
Tight and Scalable Side-Channel Attack Evaluations through Asymptotically Optimal Massey-like Inequalities on Guessing Entropy
Entropy 2021, 23(11), 1538; https://doi.org/10.3390/e23111538 - 18 Nov 2021
Cited by 1 | Viewed by 614
Abstract
The bounds presented at CHES 2017 based on Massey’s guessing entropy represent the most scalable side-channel security evaluation method to date. In this paper, we present an improvement of this method, by determining the asymptotically optimal Massey-like inequality and then further refining it for finite support distributions. The impact of these results is highlighted for side-channel attack evaluations, demonstrating the improvements over the CHES 2017 bounds.
(This article belongs to the Special Issue Types of Entropies and Divergences with Their Applications)
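For orientation, Massey's classical inequality — the starting point of the CHES 2017 bounds that this paper refines — states that the guessing entropy G(X), i.e., the expected number of guesses when candidates are tried in decreasing order of probability, satisfies G(X) ≥ 2^(H(X)−2) + 1 whenever the Shannon entropy H(X) is at least 2 bits. A minimal sketch of both quantities (illustrative only; not the paper's refined Massey-like bounds):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def guessing_entropy(probs):
    """Expected number of guesses when candidates are tried
    in decreasing order of probability."""
    ordered = sorted(probs, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(ordered))

def massey_lower_bound(probs):
    """Massey's bound: G(X) >= 2**(H(X) - 2) + 1, valid for H(X) >= 2 bits."""
    H = shannon_entropy(probs)
    return 2 ** (H - 2) + 1 if H >= 2 else 1.0

# a uniform distribution over 16 candidates: G = 8.5 guesses, bound = 5
p = [1 / 16] * 16
assert guessing_entropy(p) == 8.5
assert massey_lower_bound(p) == 5.0
```

The gap between 8.5 and 5 illustrates why tighter Massey-like inequalities matter for side-channel evaluations, where the distribution over key candidates is typically far from uniform.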
Article
Target Classification Method of Tactile Perception Data with Deep Learning
Entropy 2021, 23(11), 1537; https://doi.org/10.3390/e23111537 - 18 Nov 2021
Viewed by 688
Abstract
In order to improve the accuracy of manipulator operation, it is necessary to install a tactile sensor on the manipulator to obtain tactile information and accurately classify a target. However, as tactile sensors continue to develop and the uncertainty and complexity of tactile sensing data grow, typical machine-learning algorithms often cannot solve the problem of target classification from pure tactile data. Here, we propose a new model, named ResNet10-v1, that combines a convolutional neural network with a residual network. We optimized the convolutional kernel, hyperparameters, and loss function of the model, and further improved the accuracy of target classification through the K-means clustering method. We verified the feasibility and effectiveness of the proposed method through a large number of experiments. We expect to further improve the generalization ability of this method and to provide an important reference for research in the field of tactile perception classification. Full article
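The paper's ResNet10-v1 model is not reproduced here, but the K-means step it uses to sharpen classification is a standard clustering routine. A minimal NumPy sketch of Lloyd's algorithm with deterministic farthest-point initialization (the data and names are illustrative, not the authors'):

```python
import numpy as np

def init_centroids(X, k):
    """Greedy farthest-point initialization (deterministic)."""
    centroids = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    return np.array(centroids)

def kmeans(X, k, iters=100):
    """Lloyd's algorithm: assign each sample to its nearest centroid,
    then move each centroid to the mean of its assigned samples."""
    centroids = init_centroids(X, k)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# two well-separated blobs are recovered as two pure clusters
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels, _ = kmeans(X, 2)
assert labels[:10].tolist() == [0] * 10
assert labels[10:].tolist() == [1] * 10
```

In a classification pipeline, cluster assignments over learned feature vectors can serve as auxiliary labels or as a post-hoc check on class separability.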
Article
Bert-Enhanced Text Graph Neural Network for Classification
Entropy 2021, 23(11), 1536; https://doi.org/10.3390/e23111536 - 18 Nov 2021
Cited by 3 | Viewed by 1034
Abstract
Text classification is a fundamental research task that aims to assign tags to text units. Recently, graph neural networks (GNNs) have exhibited excellent properties in textual information processing, and pre-trained language models have also achieved promising results on many tasks. However, many text processing methods cannot model the structure of a single text unit, or they ignore its semantic features. To solve these problems and comprehensively utilize both the structural and the semantic information of a text, we propose a Bert-Enhanced text Graph Neural Network model (BEGNN). For each text, we construct a text graph separately according to the co-occurrence relationships of words and use a GNN to extract text features. Moreover, we employ Bert to extract semantic features. The former component takes the structural information into account, and the latter focuses on modeling the semantic information. Finally, we interact and aggregate these two features of different granularity to obtain a more effective representation. Experiments on standard datasets demonstrate the effectiveness of BEGNN. Full article
Article
Modulo Periodic Poisson Stable Solutions of Quasilinear Differential Equations
Entropy 2021, 23(11), 1535; https://doi.org/10.3390/e23111535 - 18 Nov 2021
Cited by 2 | Viewed by 530
Abstract
In this paper, modulo periodic Poisson stable functions are newly introduced. Quasilinear differential equations with modulo periodic Poisson stable coefficients are under investigation. The existence and uniqueness of asymptotically stable modulo periodic Poisson stable solutions have been proved. Numerical simulations, which illustrate the theoretical results, are provided. Full article
Article
Sequential Learning of Principal Curves: Summarizing Data Streams on the Fly
Entropy 2021, 23(11), 1534; https://doi.org/10.3390/e23111534 - 18 Nov 2021
Cited by 2 | Viewed by 602
Abstract
When confronted with massive data streams, summarizing data with dimension reduction methods such as PCA raises theoretical and algorithmic pitfalls. A principal curve acts as a nonlinear generalization of PCA, and the present paper proposes a novel algorithm to automatically and sequentially learn principal curves from data streams. We show that our procedure is supported by regret bounds with optimal sublinear remainder terms. A greedy local search implementation (called slpc, for sequential learning principal curves) that incorporates both sleeping experts and multi-armed bandit ingredients is presented, along with its regret computation and performance on synthetic and real-life data. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
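The slpc algorithm itself combines sleeping experts with multi-armed bandit ingredients and is beyond a short sketch. As a simpler illustration of why one-pass summarization of a data stream is possible at all, Oja's rule tracks the first principal direction of a centered stream without ever storing it (illustrative only, not the authors' method):

```python
import numpy as np

def oja_stream(samples, lr=0.01):
    """Oja's rule: sequentially estimate the first principal direction
    of a centered data stream, one sample at a time."""
    w = None
    for x in samples:
        if w is None:
            w = x / np.linalg.norm(x)   # initialize from the first sample
            continue
        y = w @ x                        # projection onto current estimate
        w = w + lr * y * (x - y * w)     # Hebbian update with decay term
        w = w / np.linalg.norm(w)        # keep the estimate unit-norm
    return w

rng = np.random.default_rng(1)
# anisotropic cloud: std 3 along the first axis, 0.1 along the second
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.1])
w = oja_stream(X)
assert abs(abs(w[0]) - 1.0) < 0.1   # direction converges to (±1, 0)
```

A principal curve generalizes this linear summary to a nonlinear one; the regret bounds in the paper quantify how well such a sequential summary can track the best curve in hindsight.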
Article
Weak Singularities of the Isothermal Entropy Change as the Smoking Gun Evidence of Phase Transitions of Mixed-Spin Ising Model on a Decorated Square Lattice in Transverse Field
Entropy 2021, 23(11), 1533; https://doi.org/10.3390/e23111533 - 18 Nov 2021
Viewed by 390
Abstract
The magnetocaloric response of the mixed spin-1/2 and spin-S (S > 1/2) Ising model on a decorated square lattice is thoroughly examined in the presence of a transverse magnetic field within the generalized decoration-iteration transformation, which provides an exact mapping relation with an effective spin-1/2 Ising model on a square lattice in a zero magnetic field. Temperature dependencies of the entropy and the isothermal entropy change exhibit an outstanding singular behavior in a close neighborhood of temperature-driven continuous phase transitions, which can additionally be tuned by the applied transverse magnetic field. While temperature variations of the entropy display, in the proximity of the critical temperature Tc, a striking energy-type singularity (T − Tc) log|T − Tc|, two analogous weak singularities can be encountered in the temperature dependence of the isothermal entropy change. The basic magnetocaloric measurement of the isothermal entropy change may accordingly afford the smoking gun evidence of continuous phase transitions. It is shown that the investigated model predominantly displays the conventional magnetocaloric effect, with the exception of a small range of moderate temperatures, which contrarily promotes the inverse magnetocaloric effect. It turns out that the temperature range inherent to the inverse magnetocaloric effect is gradually suppressed upon increasing the spin magnitude S. Full article
(This article belongs to the Section Statistical Physics)
Article
How to Effectively Collect and Process Network Data for Intrusion Detection?
Entropy 2021, 23(11), 1532; https://doi.org/10.3390/e23111532 - 18 Nov 2021
Cited by 2 | Viewed by 635
Abstract
The number of security breaches in cyberspace is on the rise. This threat is met with intensive work in the intrusion detection research community. To keep the defensive mechanisms up to date and relevant, realistic network traffic datasets are needed. The use of flow-based data for machine-learning-based network intrusion detection is a promising direction for intrusion detection systems. However, many contemporary benchmark datasets do not contain features that are usable in the wild. The main contribution of this work is to cover the research gap related to identifying and investigating valuable features in the NetFlow schema that allow for effective, machine-learning-based network intrusion detection in the real world. To achieve this goal, several feature selection techniques have been applied to five flow-based network intrusion detection datasets, establishing an informative flow-based feature set. The authors’ experience with the deployment of this kind of system shows that, to close the research-to-market gap and to perform actual real-world application of machine-learning-based intrusion detection, a set of labeled data from the end-user has to be collected. This research aims to establish the appropriate, minimal amount of data that is sufficient to effectively train machine learning algorithms in intrusion detection. The results show that a set of 10 features and a small amount of data are enough for the final model to perform very well. Full article
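The abstract does not name the specific feature selection techniques used; one common criterion for scoring a discretized flow feature against the attack label is mutual information. A minimal dependency-free sketch (illustrative, not the authors' exact method):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

labels = [0, 1, 0, 1, 0, 1]
# a feature identical to the label carries all of its information
# (a fair binary label has 1 bit of entropy)
assert abs(mutual_information(labels, labels) - 1.0) < 1e-9
# a constant feature carries none
assert mutual_information([7] * 6, labels) == 0.0
```

Ranking features by such a score and keeping the top handful is one way a compact set — like the 10-feature set reported here — can emerge from a much wider NetFlow schema.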
Article
Katz Fractal Dimension of Geoelectric Field during Severe Geomagnetic Storms
Entropy 2021, 23(11), 1531; https://doi.org/10.3390/e23111531 - 18 Nov 2021
Cited by 1 | Viewed by 673
Abstract
We are concerned with the time series resulting from the computed local horizontal geoelectric field, obtained with the aid of a 1-D layered Earth model based on local geomagnetic field measurements, for the full solar magnetic cycle of 1996–2019, covering the two consecutive solar activity cycles 23 and 24. To the best of our knowledge, this is the first time the roughness of severe geomagnetic storms is considered by using a monofractal time series analysis of the Earth's electric field. We show that during severe geomagnetic storms the Katz fractal dimension of the geoelectric field grows rapidly. Full article
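The Katz fractal dimension named in the title has a simple closed form: for a waveform of n steps with total length L (sum of successive amplitude differences) and maximum distance d from the first sample, D = log10(n) / (log10(n) + log10(d/L)). A minimal sketch of the amplitude-only variant (conventions for L and d differ slightly across implementations):

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a 1-D waveform.
    L: total curve length, d: max distance from the first sample,
    n: number of steps.  D = log10(n) / (log10(n) + log10(d / L))."""
    n = len(x) - 1
    L = sum(abs(x[i + 1] - x[i]) for i in range(n))
    d = max(abs(x[i] - x[0]) for i in range(1, len(x)))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# a straight ramp is perfectly smooth: KFD is exactly 1
ramp = [i * 0.1 for i in range(100)]
assert abs(katz_fd(ramp) - 1.0) < 1e-9
# a jagged, back-and-forth signal has KFD > 1
assert katz_fd([0, 1, 0, 2, 1, 3]) > 1.0
```

Rougher signals fold more length L into the same spatial extent d, pushing D above 1 — which is why D rising rapidly is a natural marker of storm-time geoelectric roughness.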
Article
A Transnational and Transregional Study of the Impact and Effectiveness of Social Distancing for COVID-19 Mitigation
Entropy 2021, 23(11), 1530; https://doi.org/10.3390/e23111530 - 18 Nov 2021
Cited by 1 | Viewed by 611
Abstract
We present an analysis of the relationship between SARS-CoV-2 infection rates and a social distancing metric from data for all the states and most populous cities in the United States and Brazil, all the 22 European Economic Community countries and the United Kingdom. We discuss why the infection rate, instead of the effective reproduction number or growth rate of cases, is a proper choice to perform this analysis when considering a wide span of time. We obtain a strong Spearman’s rank order correlation between the social distancing metric and the infection rate in each locality. We show that mask mandates increase the values of Spearman’s correlation in the United States, where a mandate was adopted. We also obtain an explicit numerical relation between the infection rate and the social distancing metric defined in the present work. Full article
(This article belongs to the Special Issue Statistical Methods for Medicine and Health Sciences)
Article
Still No Free Lunches: The Price to Pay for Tighter PAC-Bayes Bounds
Entropy 2021, 23(11), 1529; https://doi.org/10.3390/e23111529 - 18 Nov 2021
Cited by 6 | Viewed by 752
Abstract
“No free lunch” results state the impossibility of obtaining meaningful bounds on the error of a learning algorithm without prior assumptions and modelling, which may be more or less realistic for a given problem. Some models are “expensive” (strong assumptions, such as sub-Gaussian tails), others are “cheap” (simply finite variance). As is well known, the more you pay, the more you get: in other words, the most expensive models yield the most interesting bounds. Recent advances in robust statistics have investigated procedures to obtain tight bounds while keeping the cost of assumptions minimal. The present paper explores and exhibits what the limits are for obtaining tight probably approximately correct (PAC)-Bayes bounds in a robust setting for cheap models. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
Article
Constructal Optimization of Rectangular Microchannel Heat Sink with Porous Medium for Entropy Generation Minimization
Entropy 2021, 23(11), 1528; https://doi.org/10.3390/e23111528 - 17 Nov 2021
Cited by 3 | Viewed by 618
Abstract
A model of a rectangular microchannel heat sink (MCHS) with a porous medium (PM) is developed. The aspect ratio of the heat sink (HS) cell and the length-width ratio of the HS are optimized by a numerical simulation method for entropy generation minimization (EGM) according to constructal theory. The effects of the inlet Reynolds number (Re) of the coolant, the heat flux on the bottom, and the porosity and volume proportion of the PM on the dimensionless entropy generation rate (DEGR) are analyzed. The results show that there are optimal aspect ratios that minimize the DEGR. Given the initial conditions, the DEGR is 33.10% lower than its initial value after the aspect ratio is optimized. As Re increases, the optimal aspect ratio declines and the minimum DEGR drops as well. As the heat flux on the bottom increases, the DEGR grows while the optimal aspect ratio remains constant. For different volume proportions of the PM, the optimal aspect ratios differ, but the minimum DEGR stays almost unchanged. The twice-minimized DEGR, obtained by optimizing the aspect ratio and the length-width ratio simultaneously, is 10.70% lower than the once-minimized DEGR. For a rectangular bottom, a lower DEGR can be reached by choosing the proper direction of fluid flow. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics III)
Article
Limits to Perception by Quantum Monitoring with Finite Efficiency
Entropy 2021, 23(11), 1527; https://doi.org/10.3390/e23111527 - 17 Nov 2021
Cited by 2 | Viewed by 707
Abstract
We formulate limits to perception under continuous quantum measurements by comparing the quantum states assigned by agents that have partial access to measurement outcomes. To this end, we provide bounds on the trace distance and the relative entropy between the assigned state and [...] Read more.
We formulate limits to perception under continuous quantum measurements by comparing the quantum states assigned by agents that have partial access to measurement outcomes. To this end, we provide bounds on the trace distance and the relative entropy between the assigned state and the actual state of the system. These bounds are expressed solely in terms of the purity and von Neumann entropy of the state assigned by the agent, and are shown to characterize how an agent’s perception of the system is altered by access to additional information. We apply our results to Gaussian states and to the dynamics of a system embedded in an environment illustrated on a quantum Ising chain. Full article
(This article belongs to the Special Issue Quantum Darwinism and Friends)
Article
The Solvability of the Discrete Boundary Value Problem on the Half-Line
Entropy 2021, 23(11), 1526; https://doi.org/10.3390/e23111526 - 17 Nov 2021
Viewed by 406
Abstract
This paper provides conditions for the existence of a solution to the second-order nonlinear boundary value problem on the half-line of the form Δ(a(n)Δx(n)) = f(n+1, x(n+1), Δx(n+1)), n ∈ ℕ ∪ {0}, with αx(0) + βa(0)Δx(0) = 0 and x(∞) = d, where d, α, β ∈ ℝ and α² + β² > 0. To achieve our goal, we use Schauder's fixed-point theorem and the perturbation technique for a Fredholm operator of index 0. Moreover, we construct the necessary condition for the existence of a solution to the considered problem. Full article
(This article belongs to the Section Complexity)
Article
Entropy-Based Combined Metric for Automatic Objective Quality Assessment of Stitched Panoramic Images
Entropy 2021, 23(11), 1525; https://doi.org/10.3390/e23111525 - 17 Nov 2021
Cited by 2 | Viewed by 540
Abstract
Quality assessment of stitched images is an important element of many virtual reality and remote sensing applications, where panoramic images may be used as a background as well as for navigation purposes. The quality of stitched images may be decreased by several factors, including geometric distortions, ghosting, blurring, and color distortions. Nevertheless, the specificity of such distortions differs from those typical for general-purpose image quality assessment. Therefore, the necessity of developing new objective image quality metrics for this type of emerging application becomes obvious. The method proposed in the paper is based on combining features used in some recently proposed metrics with the results of local and global image entropy analysis. The results obtained by applying the proposed combined metric have been verified using the ISIQA database, which contains 264 stitched images of 26 scenes together with the respective subjective Mean Opinion Scores, leading to a significantly higher correlation with subjective evaluation results. Full article
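The global image entropy that such metrics build on is the Shannon entropy of the intensity histogram; the local variant applies the same formula over sliding windows. A minimal sketch for grayscale data (illustrative only; the paper's combined metric also fuses features from earlier stitching-quality metrics):

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a grayscale image given as a flat
    list of integer intensities in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

# a flat image carries no information
assert image_entropy([128] * 100) == 0.0
# a uniform histogram over 256 levels reaches the 8-bit maximum
assert abs(image_entropy(list(range(256))) - 8.0) < 1e-9
```

Comparing local entropy maps of a stitched panorama against its source views is one way entropy analysis can expose ghosting or blur that a global score would average away.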