Entropy, Volume 24, Issue 12 (December 2022) – 152 articles

Cover Story: This work uses total correlation to describe functional connectivity among brain regions as a multivariate alternative to conventional pairwise measures; in particular, it uses Correlation Explanation (CorEx) to estimate total correlation. First, we demonstrate that CorEx estimates of total correlation and clustering results are reliable when compared to ground truth. Second, the large-scale connectivity network inferred from the more extensive open fMRI datasets is consistent with existing neuroscience studies but, interestingly, can capture additional relations beyond pairwise ones. Finally, we show that connectivity graphs based on total correlation can also be an effective tool to aid in the discovery of brain diseases.
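For context, the total correlation of variables X1, …, Xk is the sum of the marginal entropies minus the joint entropy; it vanishes exactly when the variables are independent. The sketch below shows only this plug-in definition for discrete samples (CorEx itself estimates total correlation quite differently, through latent factors); all variable names are illustrative:

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a discrete sample sequence."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def total_correlation(columns):
    """TC(X1,...,Xk) = sum_i H(Xi) - H(X1,...,Xk), estimated from
    equal-length sample columns; zero when the variables look independent."""
    joint = list(zip(*columns))
    return sum(entropy(col) for col in columns) - entropy(joint)

# Two perfectly coupled binary variables share one full bit:
x = [0, 1, 0, 1] * 25
print(round(total_correlation([x, x[:]]), 6))   # 1.0
```

With more than two variables, total correlation captures higher-order redundancy that no collection of pairwise correlations can express, which is the motivation for applying it to fMRI region activity.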
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Entropy Interpretation of Hadamard Type Fractional Operators: Fractional Cumulative Entropy
Entropy 2022, 24(12), 1852; https://doi.org/10.3390/e24121852 - 19 Dec 2022
Cited by 1 | Viewed by 619
Abstract
Interpretations of Hadamard-type fractional integral and differential operators are proposed. The Hadamard-type fractional integrals of a function with respect to another function are interpreted as a generalization of standard entropy, fractional entropies, and cumulative entropies. A family of fractional cumulative entropies is proposed by using the Hadamard-type fractional operators. Full article
(This article belongs to the Section Complexity)
Article
Exact Solution of a Time-Dependent Quantum Harmonic Oscillator with Two Frequency Jumps via the Lewis–Riesenfeld Dynamical Invariant Method
Entropy 2022, 24(12), 1851; https://doi.org/10.3390/e24121851 - 19 Dec 2022
Viewed by 514
Abstract
Harmonic oscillators with multiple abrupt jumps in their frequencies have been investigated by several authors during the last decades. We investigate the dynamics of a quantum harmonic oscillator with initial frequency ω0, which undergoes a sudden jump to a frequency ω1 and, after a certain time interval, suddenly returns to its initial frequency. Using the Lewis–Riesenfeld method of dynamical invariants, we present expressions for the mean energy value, the mean number of excitations, and the transition probabilities, considering the initial state different from the fundamental. We show that the mean energy of the oscillator, after the jumps, is equal or greater than the one before the jumps, even when ω1<ω0. We also show that, for particular values of the time interval between the jumps, the oscillator returns to the same initial state. Full article
(This article belongs to the Special Issue Quantum Nonstationary Systems)
Article
Universal Non-Extensive Statistical Physics Temporal Pattern of Major Subduction Zone Aftershock Sequences
Entropy 2022, 24(12), 1850; https://doi.org/10.3390/e24121850 - 19 Dec 2022
Viewed by 368
Abstract
Large subduction-zone earthquakes generate long-lasting and widespread aftershock sequences. The physical and statistical patterns of these aftershock sequences are of considerable importance for better understanding earthquake dynamics and for seismic hazard assessments and earthquake risk mitigation. In this work, we analyzed the statistical properties of 42 aftershock sequences in terms of their temporal evolution. These aftershock sequences followed recent large subduction-zone earthquakes of M ≥ 7.0 with focal depths less than 70 km that have occurred worldwide since 1976. Their temporal properties were analyzed by investigating the probability distribution of the interevent times between successive aftershocks in terms of non-extensive statistical physics (NESP). We demonstrate the presence of a crossover behavior from power-law (q ≠ 1) to exponential (q = 1) scaling for greater interevent times. The estimated entropic q-values characterizing the observed distributions range from 1.67 to 1.83. The q-exponential behavior, along with the crossover behavior observed for greater interevent times, are further discussed in terms of superstatistics and in view of a stochastic mechanism with memory effects, which could generate the observed scaling patterns of the interevent time evolution in earthquake aftershock sequences. Full article
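For context, the q-exponential distribution central to NESP generalizes the exponential: e_q(x) = [1 + (1 − q)x]^{1/(1−q)} when the bracket is positive (zero otherwise), recovering exp(x) as q → 1 and giving a power-law tail for q > 1. A minimal numerical sketch, with illustrative parameters rather than the paper's fitted values:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# For q in the paper's reported range (1.67-1.83), the tail of
# q_exp(-t/t0, q) is far heavier than the exponential one:
t0 = 10.0
heavy = q_exp(-100 / t0, 1.75)   # power-law-like tail
light = q_exp(-100 / t0, 1.0)    # plain exponential tail
print(heavy > light)             # True
```

The crossover reported in the abstract corresponds to interevent times long enough that the q-exponential's power-law regime gives way to ordinary exponential (q = 1) decay.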
(This article belongs to the Special Issue Complexity and Statistical Physics Approaches to Earthquakes)
Article
Estimating Gaussian Copulas with Missing Data with and without Expert Knowledge
Entropy 2022, 24(12), 1849; https://doi.org/10.3390/e24121849 - 19 Dec 2022
Viewed by 356
Abstract
In this work, we present a rigorous application of the Expectation Maximization algorithm to determine the marginal distributions and the dependence structure in a Gaussian copula model with missing data. We further show how to circumvent a priori assumptions on the marginals with semiparametric modeling. Further, we outline how expert knowledge on the marginals and the dependency structure can be included. A simulation study shows that the distribution learned through this algorithm is closer to the true distribution than that obtained with existing methods and that the incorporation of domain knowledge provides benefits. Full article
(This article belongs to the Special Issue Improving Predictive Models with Expert Knowledge)
Article
Dynamic Banking Systemic Risk Accumulation under Multiple-Risk Exposures
Entropy 2022, 24(12), 1848; https://doi.org/10.3390/e24121848 - 19 Dec 2022
Viewed by 416
Abstract
Much of the existing research on banking systemic risk focuses on static single-risk exposures, and there is a lack of research on multiple-risk exposures. The reality is that the banking system faces an increasingly complex environment, and dynamic measures of multiple-risk integration are essential. To reveal the risk accumulation process under the multi-risk exposures of the banking system, this article constructs a dynamic banking system as the research object and combines geometric Brownian motion, the BSM model, and the maximum likelihood estimation method. This article also incorporates three types of exposures (interbank lending market risk exposures, entity industry credit risk exposures, and market risk exposures) within the same framework for the first time and builds a model of the dynamic evolution of banking systemic risk under multiple exposures. This study collected a large amount of real data on banks, entity industries, and market risk factors, and used the ΔCoVaR model to evaluate the systemic risk of China's banking system from the point of view of the accumulation of risk from different exposures, revealing the dynamic process of risk accumulation under the integration of multiple risks within the banking system, as well as the contribution of different exposures to banking systemic risk. The results showed that China's banking systemic risk first increased and then decreased over time, and that the rate of risk accumulation gradually slowed. In terms of the impact of different kinds of exposures on system losses, the credit risk exposure of the entity industry had the greatest impact on banking systemic risk among the three kinds of exposures. In terms of the contribution of the interbank lending market risk to the systemic risk, the Bank of Communications, China Everbright Bank, and Bank of Beijing contributed the most. In terms of the contribution of the bank–entity industry credit risk to the systemic risk, the financial industry, accommodation and catering industry, and manufacturing industry contributed the most. Considering the contribution of market risk to the systemic risk, the Shanghai Composite Index, the Hang Seng Composite Index, and the Dow Jones Index contributed the most. The research in this paper enriches the existing banking systemic risk research perspective and provides a reference for the regulatory decisions of central banks. Full article
(This article belongs to the Special Issue Complex Network Analysis in Econometrics)
Article
Effect of Back Pressure on Performances and Key Geometries of the Second Stage in a Highly Coupled Two-Stage Ejector
Entropy 2022, 24(12), 1847; https://doi.org/10.3390/e24121847 - 18 Dec 2022
Viewed by 450
Abstract
In this paper, for a highly coupled two-stage ejector-based cooling cycle, the primary nozzle length and converging-section angle of the second-stage ejector were first optimized under varied second-stage primary nozzle diameters. Next, the influence of variable back pressure on the entrainment ratio (ER) of the two-stage ejector was evaluated. Last, the effect of variable back pressure on the key geometries of the two-stage ejector was identified. The results revealed that: (1) with the increase of the second-stage nozzle diameter, the ER of both stages decreased as the length and angle of the converging section of the second-stage primary nozzle increased; (2) the pressure lift ratio range of the second-stage ejector in the critical mode gradually increased with the nozzle diameter of the second stage; (3) when the pressure lift ratio increased from 102% to 106%, the peak ER of the second stage decreased, and the influence of the area ratio and nozzle exit position of the second-stage ejector on its ER was reduced; (4) with the increase of the second-stage nozzle diameter, the influence of the area ratio and nozzle exit position of the second stage on second-stage performance decreased; and (5) the optimal area ratio of the second stage decreased with the pressure lift ratio of the two-stage ejector, while the optimal nozzle exit position of the second stage remained constant. Full article
(This article belongs to the Special Issue Entropy and Exergy Analysis in Ejector-Based Systems)
Article
Initial Solution Generation and Diversified Variable Picking in Local Search for (Weighted) Partial MaxSAT
Entropy 2022, 24(12), 1846; https://doi.org/10.3390/e24121846 - 18 Dec 2022
Viewed by 422
Abstract
The (weighted) partial maximum satisfiability ((W)PMS) problem is an important generalization of the classic problem of propositional (Boolean) satisfiability, with a wide range of real-world applications. In this paper, we propose an initialization and a diversification strategy to improve local search for the (W)PMS problem. Our initialization strategy is based on a novel definition of variables' structural entropy, and it aims to generate a solution that is close to a high-quality feasible one. Our diversification strategy then picks a variable in one of two ways, depending on a parameter: continuing to pick variables with the best benefits, or focusing on a clause with the greatest penalty and then selecting variables probabilistically. Based on these strategies, we developed a local search solver dubbed ImSATLike, as well as a hybrid solver ImSATLike-TT. Experimental results on (weighted) partial MaxSAT instances from recent MaxSAT Evaluations show that they generally outperform or match the performance of state-of-the-art local search and hybrid competitors, respectively. Furthermore, we carried out experiments to confirm the individual impact of each proposed strategy. Full article
Article
Newton Recursion Based Random Data-Reusing Generalized Maximum Correntropy Criterion Adaptive Filtering Algorithm
Entropy 2022, 24(12), 1845; https://doi.org/10.3390/e24121845 - 18 Dec 2022
Viewed by 546
Abstract
For system identification under impulsive-noise environments, the gradient-based generalized maximum correntropy criterion (GB-GMCC) algorithm can achieve a desirable filtering performance. However, the gradient method only uses the information of the first-order derivative, and the corresponding stagnation point of the method can be a maximum point, a minimum point or a saddle point, so the gradient method may not always be a good selection. Furthermore, GB-GMCC merely uses the current input signal to update the weight vector; facing a highly correlated input signal, the convergence rate of GB-GMCC will be dramatically degraded. To overcome these problems, based on the Newton recursion method and the data-reusing method, this paper proposes a robust adaptive filtering algorithm called the Newton recursion-based data-reusing GMCC (NR-DR-GMCC). On the one hand, based on the Newton recursion method, NR-DR-GMCC can use the information of the second-order derivative to update the weight vector. On the other hand, by using the data-reusing method, our proposal uses the information of the latest M input vectors to improve the convergence performance of GB-GMCC. In addition, to further enhance the filtering performance of NR-DR-GMCC, a random strategy can be used to extract more information from the past M input vectors, yielding an enhanced algorithm called the Newton recursion-based random data-reusing GMCC (NR-RDR-GMCC). Simulations of system identification and acoustic echo cancellation were conducted, and the results validate that NR-RDR-GMCC provides better filtering performance than existing algorithms in terms of filtering accuracy and convergence rate. Full article
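For context, a GMCC-type filter adapts FIR weights to maximize the generalized correntropy of the error, E[exp(−|e/β|^α)]; because the exponential factor vanishes for large errors, impulsive outliers barely move the weights. The sketch below is a plain gradient-based (GB-GMCC-style) baseline, not the paper's NR-RDR-GMCC; the choice α = 2, β = 2 and the toy system are illustrative:

```python
import math, random

def gb_gmcc(x, d, taps, mu=0.05, alpha=2.0, beta=2.0):
    """Gradient ascent on E[exp(-|e/beta|^alpha)] over FIR weights."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]           # regressor, newest first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        # the kernel factor exp(-|e/beta|^alpha) gates out impulsive errors
        g = mu * math.exp(-abs(e / beta) ** alpha) \
            * abs(e) ** (alpha - 1.0) * (1.0 if e >= 0 else -1.0)
        w = [wi + g * ui for wi, ui in zip(w, u)]
    return w

random.seed(0)
N, true_w = 5000, [0.7, -0.3]                     # unknown 2-tap system
x = [random.gauss(0, 1) for _ in range(N)]
d = [0.0] * N
for n in range(1, N):
    noise = random.gauss(0, 30) if random.random() < 0.01 else random.gauss(0, 0.05)
    d[n] = true_w[0] * x[n] + true_w[1] * x[n - 1] + noise
w_hat = gb_gmcc(x, d, taps=2)
print([round(wi, 2) for wi in w_hat])             # close to [0.7, -0.3]
```

Despite 1%-probability impulses thirty standard deviations above the background noise, the estimate stays near the true taps, which is the robustness property the abstract builds on.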
(This article belongs to the Section Signal and Data Analysis)
Article
A Finite Element Approximation for Nematic Liquid Crystal Flow with Stretching Effect Based on Nonincremental Pressure-Correction Method
Entropy 2022, 24(12), 1844; https://doi.org/10.3390/e24121844 - 18 Dec 2022
Viewed by 398
Abstract
In this paper, a new decoupling method is proposed to solve a nematic liquid crystal flow with stretching effect. In the finite element discrete framework, the director vector is calculated by introducing a new auxiliary variable w, and the velocity vector and scalar pressure are decoupled by a nonincremental pressure-correction projection method. Then, the energy dissipation law and unconditional energy stability of the resulting system are given. Finally, some numerical examples are given to verify the effects of various parameters on the singularity annihilation, stability and accuracy in space and time. Full article
Article
ECSS: High-Embedding-Capacity Audio Watermarking with Diversity Reception
Entropy 2022, 24(12), 1843; https://doi.org/10.3390/e24121843 - 17 Dec 2022
Viewed by 432
Abstract
Digital audio watermarking is a promising technology for copyright protection, yet its low embedding capacity remains a challenge for widespread applications. In this paper, the spread-spectrum watermarking algorithm is viewed as a communication channel, and the embedding capacity is analyzed and modeled with information theory. Following this embedding capacity model, we propose the extended-codebook spread-spectrum (ECSS) watermarking algorithm to heighten the embedding capacity. In addition, the diversity reception (DR) mechanism is adopted to optimize the proposed algorithm to obtain both high embedding capacity and strong robustness while the imperceptibility is guaranteed. We experimentally verify the effectiveness of the ECSS algorithm and the DR mechanism, evaluate the performance of the proposed algorithm against common signal processing attacks, and compare the performance with existing high-capacity algorithms. The experiments demonstrate that the proposed algorithm achieves a high embedding capacity with applicable imperceptibility and robustness. Full article
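For context, the baseline spread-spectrum scheme behind the communication-channel view embeds each bit by adding a scaled ±1 pseudo-noise (PN) sequence to the host signal and recovers it by correlating against the same sequence. The sketch shows only this baseline, not the proposed extended-codebook (ECSS) construction or diversity reception; the embedding strength and spreading length are illustrative:

```python
import random

def embed(host, bit, pn, alpha=0.8):
    """Additive spread-spectrum embedding of one bit via a PN sequence."""
    s = 1 if bit else -1
    return [h + alpha * s * c for h, c in zip(host, pn)]

def detect(signal, pn):
    """Correlation detector: the sign of <signal, pn> recovers the bit."""
    return 1 if sum(s * c for s, c in zip(signal, pn)) > 0 else 0

random.seed(1)
L = 1024                                      # chips per bit (spreading factor)
pn = [random.choice((-1, 1)) for _ in range(L)]
host = [random.gauss(0, 1) for _ in range(L)]
print(detect(embed(host, 1, pn), pn), detect(embed(host, 0, pn), pn))   # 1 0
```

Each embedded bit consumes L host samples here, which illustrates why plain spread-spectrum watermarking has low embedding capacity, the limitation the paper targets.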
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
EXK-SC: A Semantic Communication Model Based on Information Framework Expansion and Knowledge Collision
Entropy 2022, 24(12), 1842; https://doi.org/10.3390/e24121842 - 17 Dec 2022
Viewed by 438
Abstract
Semantic communication is not focused on improving the accuracy of transmitted symbols, but is concerned with expressing the expected meaning that the symbol sequence exactly carries. However, the measurement of semantic messages and their corresponding codebook generation are still open issues. Expansion, which integrates simple things into a complex system and even generates intelligence, is truly consistent with the evolution of the human language system. We apply this idea to the semantic communication system, quantifying semantic transmission by symbol sequences and investigating the semantic information system in a similar way as Shannon’s method for digital communication systems. This work is the first to discuss semantic expansion and knowledge collision in the semantic information framework. Some important theoretical results are presented, including the relationship between semantic expansion and the transmission information rate. We believe such a semantic information framework may provide a new paradigm for semantic communications, and semantic expansion and knowledge collision will be the cornerstone of semantic information theory. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
Article
SINR- and MI-Based Double-Robust Waveform Design
Entropy 2022, 24(12), 1841; https://doi.org/10.3390/e24121841 - 17 Dec 2022
Viewed by 385
Abstract
Owing to cognitive radar breaking the open-loop receiving–transmitting mode of traditional radar, adaptive waveform design for cognitive radar has become a central issue in radar system research. In this paper, the method of radar transmitted waveform design in the presence of clutter is studied. Since exact characterizations of the target and clutter spectra are uncommon in practice, a single-robust transmitted waveform design method is introduced to solve the problem of the imprecise target spectrum or the imprecise clutter spectrum. Furthermore, considering that radar cannot simultaneously obtain precise target and clutter spectra, a novel double-robust transmitted waveform design method is proposed. In this method, the signal-to-interference-plus-noise ratio and mutual information are used as the objective functions, and the optimization models for the double-robust waveform are established under the transmitted energy constraint. The Lagrange multiplier method was used to solve the optimal double-robust transmitted waveform. The simulation results show that the double-robust transmitted waveform can maximize SINR and MI in the worst case; the performance of SINR and MI will degrade if other transmitted waveforms are employed in the radar system. Full article
Article
Evidence of Critical Dynamics in Movements of Bees inside a Hive
Entropy 2022, 24(12), 1840; https://doi.org/10.3390/e24121840 - 17 Dec 2022
Viewed by 643
Abstract
Social insects such as honey bees exhibit complex behavioral patterns, and their distributed behavioral coordination enables decision-making at the colony level. It has, therefore, been proposed that a high-level description of their collective behavior might share commonalities with the dynamics of neural processes in brains. Here, we investigated this proposal by focusing on the possibility that brains are poised at the edge of a critical phase transition and that such a state is enabling increased computational power and adaptability. We applied mathematical tools developed in computational neuroscience to a dataset of bee movement trajectories that were recorded within the hive during the course of many days. We found that certain characteristics of the activity of the bee hive system are consistent with the Ising model when it operates at a critical temperature, and that the system’s behavioral dynamics share features with the human brain in the resting state. Full article
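For context, the critical state referenced above is that of the 2D Ising model at its exact critical temperature, T_c = 2/ln(1 + √2) ≈ 2.269 (in units where J = k_B = 1). A minimal Metropolis sketch of such a lattice (the size and sweep count are illustrative; this is not the paper's analysis pipeline):

```python
import math, random

def metropolis_sweep(spins, L, T, rng):
    """One Metropolis sweep of an L x L Ising lattice, periodic boundaries."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb            # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

rng = random.Random(42)
L = 16
T_c = 2.0 / math.log(1.0 + math.sqrt(2.0))   # Onsager's critical temperature
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, T_c, rng)
m = abs(sum(map(sum, spins))) / (L * L)      # magnetization per spin
print(0.0 <= m <= 1.0)                       # True
```

At T_c the model exhibits the scale-free fluctuations and long-range correlations that the paper compares against the bee-movement statistics.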
(This article belongs to the Special Issue Statistical Physics of Collective Behavior)
Article
Multi-Task Learning for Compositional Data via Sparse Network Lasso
Entropy 2022, 24(12), 1839; https://doi.org/10.3390/e24121839 - 17 Dec 2022
Viewed by 416
Abstract
Multi-task learning is a statistical methodology that aims to improve the generalization performances of estimation and prediction tasks by sharing common information among multiple tasks. On the other hand, compositional data consist of proportions as components summing to one. Because components of compositional data depend on each other, existing methods for multi-task learning cannot be directly applied to them. In the framework of multi-task learning, a network lasso regularization enables us to consider each sample as a single task and construct different models for each one. In this paper, we propose a multi-task learning method for compositional data using a sparse network lasso. We focus on a symmetric form of the log-contrast model, which is a regression model with compositional covariates. Our proposed method enables us to extract latent clusters and relevant variables for compositional data by considering relationships among samples. The effectiveness of the proposed method is evaluated through simulation studies and application to gut microbiome data. Both results show that the prediction accuracy of our proposed method is better than existing methods when information about relationships among samples is appropriately obtained. Full article
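For context, log-contrast models handle compositional covariates through log-ratios: in the symmetric form the response is linear in the log components with coefficients constrained to sum to zero, which makes the model well defined under the unit-sum constraint. The centered log-ratio (CLR) transform below is the standard building block; this is an illustrative sketch, not the paper's sparse network lasso estimator:

```python
import math

def clr(composition):
    """Centered log-ratio transform: log of each part minus the mean log.
    The resulting coordinates always sum to zero."""
    logs = [math.log(p) for p in composition]
    mean_log = sum(logs) / len(logs)
    return [lp - mean_log for lp in logs]

z = clr([0.2, 0.3, 0.5])
print(abs(sum(z)) < 1e-12)   # True: CLR coordinates sum to zero
```

The transform is also scale invariant, so compositions reported as counts or proportions give the same coordinates, a property that matters for gut microbiome data.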
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
Design of Adaptive Fractional-Order Fixed-Time Sliding Mode Control for Robotic Manipulators
Entropy 2022, 24(12), 1838; https://doi.org/10.3390/e24121838 - 16 Dec 2022
Viewed by 561
Abstract
In this investigation, the adaptive fractional-order non-singular fixed-time terminal sliding mode (AFoFxNTSM) control for the uncertain dynamics of robotic manipulators with external disturbances is introduced. The idea of fractional-order non-singular fixed-time terminal sliding mode (FoFxNTSM) control is presented as the initial step. This approach, which combines the benefits of a fractional-order parameter with the advantages of NTSM, gives rapid fixed-time convergence, non-singularity, and chatter-free control inputs. After that, an adaptive control strategy is merged with the FoFxNTSM, and the resulting model is given the label AFoFxNTSM. This is done in order to account for the unknown dynamics of the system, which are caused by uncertainties and bounded external disturbances. The Lyapunov analysis reveals how stable the closed-loop system is over a fixed time. The pertinent simulation results are offered here for the purposes of evaluating and illustrating the performance of the suggested scheme applied on a PUMA 560 robot. Full article
(This article belongs to the Special Issue Nonlinear Control Systems with Recent Advances and Applications)
Perspective
Noise Enhancement of Neural Information Processing
Entropy 2022, 24(12), 1837; https://doi.org/10.3390/e24121837 - 16 Dec 2022
Viewed by 397
Abstract
Cortical neurons in vivo function in highly fluctuating and seemingly noisy conditions, and our understanding of how information is processed in such complex states is still incomplete. In this perspective article, we first review how intense "synaptic noise" was originally measured in single neurons, and how computational models were built based on such measurements. Recent progress in recording techniques has enabled the measurement of highly complex activity in large numbers of neurons in animals and human subjects, and models were also built to account for these complex dynamics. Here, we attempt to link these two cellular and population aspects, where the complexity of network dynamics in the awake cortex seems linked to the synaptic noise seen in single cells. We show that noise in single cells, noise in networks, and structural noise all participate in enhancing responsiveness and boosting the propagation of information. We propose that such noisy states are fundamental to providing favorable conditions for information processing at large-scale levels in the brain, and may be involved in sensory perception. Full article
Article
A Parallel Multi-Modal Factorized Bilinear Pooling Fusion Method Based on the Semi-Tensor Product for Emotion Recognition
Entropy 2022, 24(12), 1836; https://doi.org/10.3390/e24121836 - 16 Dec 2022
Viewed by 435
Abstract
Multi-modal fusion can exploit complementary information from various modalities and improve the accuracy of prediction or classification tasks. In this paper, we propose a parallel, multi-modal, factorized, bilinear pooling method based on a semi-tensor product (STP) for information fusion in emotion recognition. Initially, we apply the STP to factorize a high-dimensional weight matrix into two low-rank factor matrices without dimension matching constraints. Next, we project the multi-modal features to the low-dimensional matrices and perform multiplication based on the STP to capture the rich interactions between the features. Finally, we utilize an STP-pooling method to reduce the dimensionality to get the final features. This method can achieve the information fusion between modalities of different scales and dimensions and avoids data redundancy due to dimension matching. Experimental verification of the proposed method on the emotion-recognition task using the IEMOCAP and CMU-MOSI datasets showed a significant reduction in storage space and recognition time. The results also validate that the proposed method improves the performance and reduces both the training time and the number of parameters. Full article
(This article belongs to the Special Issue Advances in Uncertain Information Fusion)
Article
On the Quantization of AB Phase in Nonlinear Systems
Entropy 2022, 24(12), 1835; https://doi.org/10.3390/e24121835 - 16 Dec 2022
Viewed by 357
Abstract
Self-intersecting energy band structures in momentum space can be induced by nonlinearity at the mean-field level, with the so-called nonlinear Dirac cones as one intriguing consequence. Using the Qi-Wu-Zhang model plus power law nonlinearity, we systematically study in this paper the Aharonov–Bohm (AB) phase associated with an adiabatic process in the momentum space, with two adiabatic paths circling around one nonlinear Dirac cone. Interestingly, for and only for Kerr nonlinearity, the AB phase experiences a jump of π at the critical nonlinearity at which the Dirac cone appears and disappears (thus yielding π-quantization of the AB phase so long as the nonlinear Dirac cone exists), whereas for all other powers of nonlinearity, the AB phase always changes continuously with the nonlinear strength. Our results may be useful for experimental measurement of power-law nonlinearity and shall motivate further fundamental interest in aspects of geometric phase and adiabatic following in nonlinear systems. Full article
Editorial
Ising Model: Recent Developments and Exotic Applications
Entropy 2022, 24(12), 1834; https://doi.org/10.3390/e24121834 - 15 Dec 2022
Viewed by 398
Abstract
Solving in his PhD thesis the one-dimensional version of a certain lattice model of ferromagnetism formulated by his supervisor Lenz [...] Full article
(This article belongs to the Special Issue Ising Model: Recent Developments and Exotic Applications)
Article
Penalty and Shrinkage Strategies Based on Local Polynomials for Right-Censored Partially Linear Regression
Entropy 2022, 24(12), 1833; https://doi.org/10.3390/e24121833 - 15 Dec 2022
Viewed by 610
Abstract
This study aims to propose modified semiparametric estimators based on six different penalty and shrinkage strategies for the estimation of a right-censored semiparametric regression model. In this context, the methods used to obtain the estimators are ridge, lasso, adaptive lasso, SCAD, MCP, and elasticnet penalty functions. The most important contribution that distinguishes this article from its peers is that it uses the local polynomial method as a smoothing method. The theoretical estimation procedures for the obtained estimators are explained. In addition, a simulation study is performed to see the behavior of the estimators and make a detailed comparison, and hepatocellular carcinoma data are estimated as a real data example. As a result of the study, the estimators based on adaptive lasso and SCAD were more resistant to censorship and outperformed the other four estimators. Full article
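For readers unfamiliar with the penalty families compared above, a minimal sketch contrasts the ridge closed form with lasso coordinate descent. This uses toy uncensored data and an ordinary linear model rather than the paper's right-censored partially linear setting, and the penalty values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0])   # sparse truth
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Ridge: closed-form shrinkage, beta = (X'X + lam*I)^{-1} X'y
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Lasso: cyclic coordinate descent with the soft-thresholding operator
def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

beta_lasso = np.zeros(p)
for _ in range(200):
    for j in range(p):
        r = y - X @ beta_lasso + X[:, j] * beta_lasso[j]   # partial residual
        beta_lasso[j] = soft(X[:, j] @ r, 10.0) / (X[:, j] @ X[:, j])

print(np.round(beta_ridge, 2), np.round(beta_lasso, 2))
```

Ridge shrinks all coefficients smoothly, while the soft-thresholding step drives the truly inactive coefficients to (or very near) zero — the selection property that adaptive lasso, SCAD, and MCP refine to reduce bias on the large coefficients.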
Article
4E Assessment of an Organic Rankine Cycle (ORC) Activated with Waste Heat of a Flash–Binary Geothermal Power Plant
Entropy 2022, 24(12), 1832; https://doi.org/10.3390/e24121832 - 15 Dec 2022
Viewed by 465
Abstract
In this paper, the 4E assessment (Energetic, Exergetic, Exergoeconomic and Exergoenvironmental) of a low-temperature ORC activated by two different alternatives is presented. The first alternative (S1) contemplates the activation of the ORC through the recovery of waste heat from a flash–binary geothermal power plant. The second alternative (S2) contemplates the activation of the ORC using direct heat from a geothermal well. For both alternatives, the energetic and exergetic models were established. At the same time, the economic and environmental impact models were developed. Finally, based on the combination of the exergy concepts and the economic and ecological indicators, the exergoeconomic and exergoenvironmental performances of the ORC were obtained. The results show higher economic, exergoeconomic and exergoenvironmental profitability for S1. Besides, for the alternative S1, the ORC cycle has an acceptable economic profitability for a net power of 358.4 kW at a temperature of 110 °C, while for S2, this profitability starts being attractive for a power 2.65 times greater than S1 and with a temperature higher than 135 °C. In conclusion, the above represents an area of opportunity and a considerable advantage for the implementation of the ORC in the recovery of waste heat from flash–binary geothermal power plants. Full article
(This article belongs to the Topic Exergy Analysis and Its Applications – 2nd Volume)
Article
A Dual Adaptive Interaction Click-Through Rate Prediction Based on Attention Logarithmic Interaction Network
Entropy 2022, 24(12), 1831; https://doi.org/10.3390/e24121831 - 15 Dec 2022
Viewed by 480
Abstract
Click-through rate (CTR) prediction is crucial for computational advertising and recommender systems. The key challenge of CTR prediction is to accurately capture user interests and deliver suitable advertisements to the right people. However, CTR prediction datasets contain an immense number of features, and individual features are rarely predictive on their own. To solve this problem, feature interaction, which combines several features via an operation, is introduced to enhance prediction performance. Many factorization machine-based models and deep learning methods have been proposed to capture feature interactions for CTR prediction. They follow an enumeration-filter pattern that cannot determine the appropriate order of feature interactions or which interactions are useful. The attention logarithmic network (ALN) is presented in this paper, which uses logarithmic neural networks (LNN) to model feature interactions, and the squeeze-and-excitation (SE) mechanism to adaptively model the importance of higher-order feature interactions. First, the embedding vector of the input is absolutized and a very small positive number is added to the zeros of the embedding vector, which makes the LNN input positive. Then, the adaptive-order feature interactions are learned by logarithmic and exponential transformations in the LNN. Finally, SE is applied to adaptively model the importance of high-order feature interactions, enhancing CTR performance. On this basis, the attention logarithmic interaction network (ALIN) is proposed to improve the effectiveness and accuracy of CTR prediction by integrating Newton's identity into ALN. ALIN compensates for the loss of information caused by absolutizing the embedding vector and adding a small positive value to it. Experiments are conducted on two datasets, and the results prove that ALIN is efficient and effective. Full article
(This article belongs to the Special Issue Entropy in Soft Computing and Machine Learning Algorithms II)
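The logarithmic transformation at the core of the LNN can be sketched in a few lines. This is a single illustrative unit under the positivity trick described in the abstract, not the full ALN/ALIN architecture:

```python
import numpy as np

EPS = 1e-7

def logarithmic_neuron(x, w):
    """One LNN unit: exp(sum_i w_i * ln x_i) = prod_i x_i ** w_i.

    Learned (possibly fractional) weights w_i give an *adaptive-order*
    feature interaction instead of a fixed, enumerated order.
    """
    x = np.abs(x) + (x == 0) * EPS   # absolutize; add a tiny value at zeros
    return np.exp(np.log(x) @ w)

x = np.array([2.0, 4.0, 1.0])
# Integer weights recover a plain cross feature x1 * x2 = 8:
print(logarithmic_neuron(x, np.array([1.0, 1.0, 0.0])))
# Fractional weights model an interaction of non-integer order, 2^0.5 * 4^1.5:
print(logarithmic_neuron(x, np.array([0.5, 1.5, 0.0])))
```

Because the order is carried by continuous weights, gradient descent can search over interaction orders directly — the property the enumeration-filter pattern lacks.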
Article
Multidimensional Feature in Emotion Recognition Based on Multi-Channel EEG Signals
Entropy 2022, 24(12), 1830; https://doi.org/10.3390/e24121830 - 15 Dec 2022
Viewed by 505
Abstract
With the popularization of artificial intelligence technology, research on mental-state electroencephalography (EEG) has attracted increasing attention in recent years. To retain the spatial information of EEG signals and fully mine the EEG timing-related information, this paper proposes a novel EEG emotion recognition method. First, to obtain the frequency, spatial, and temporal information of multichannel EEG signals more comprehensively, we choose the multidimensional feature structure as the input of the artificial neural network. Then, a neural network model based on depthwise separable convolution is proposed to extract the frequency and spatial features of the input structure; the network can effectively reduce the number of computational parameters. Finally, we model the signals using the ordered-neuron long short-term memory (ON-LSTM) network, which can automatically learn hierarchical information to extract deep emotional features hidden in the EEG time series. The experimental results show that the proposed model can effectively learn the correlations among EEG channels and the temporal information, improving emotion classification performance. We validated the model on two publicly available EEG emotion datasets. In the experiments on the DEAP dataset (a dataset for emotion analysis using EEG, physiological, and video signals), the mean accuracy of emotion recognition for arousal and valence is 95.02% and 94.61%, respectively. In the experiments on the SEED dataset (a dataset collection for various purposes using EEG signals), the average accuracy of emotion recognition is 95.49%. Full article
(This article belongs to the Special Issue Entropy Applications in Electroencephalography)
Article
Existence of Classical Solutions for Nonlinear Elliptic Equations with Gradient Terms
Entropy 2022, 24(12), 1829; https://doi.org/10.3390/e24121829 - 15 Dec 2022
Viewed by 407
Abstract
This paper deals with the existence of solutions of the elliptic equation with nonlinear gradient term −Δu = f(x, u, ∇u) on Ω, restricted by the boundary condition u|∂Ω = 0, where Ω is a bounded domain in ℝ^N with sufficiently smooth boundary ∂Ω, N ≥ 2, and f: Ω̄ × ℝ × ℝ^N → ℝ is continuous. The existence results of classical solutions and positive solutions are obtained under some inequality conditions on the nonlinearity f(x, ξ, η) when |(ξ, η)| is small or large enough. Full article
(This article belongs to the Special Issue Nonlinear Dynamics and Analysis II)
Article
A New Deep Learning Method with Self-Supervised Learning for Delineation of the Electrocardiogram
Entropy 2022, 24(12), 1828; https://doi.org/10.3390/e24121828 - 15 Dec 2022
Viewed by 463
Abstract
Heartbeat characteristic points are the main features of an electrocardiogram (ECG), which can provide important information for ECG-based cardiac diagnosis. In this manuscript, we propose a self-supervised deep learning framework with modified Densenet to detect ECG characteristic points, including the onset, peak and termination points of P-wave, QRS complex wave and T-wave. We extracted high-level features of ECG heartbeats from the QT Database (QTDB) and two other larger datasets, MIT-BIH Arrhythmia Database (MITDB) and MIT-BIH Normal Sinus Rhythm Database (NSRDB) with no human-annotated labels as pre-training. By applying different transformations to ECG signals, the task of discriminating signals before and after transformation was defined as the pretext task. Subsequently, the convolutional layer was frozen and the weights of the self-supervised network were transferred to the downstream task of characteristic point localizations on heart beats in the QT dataset. Finally, the mean ± standard deviation of the detection errors of our proposed self-supervised learning method in QTDB for detecting the onset, peak, and termination points of P-waves, the onset and termination points of QRS waves, and the peak and termination points of T-waves were −0.24 ± 10.04, −0.48 ± 11.69, −0.28 ± 10.19, −3.72 ± 8.18, −4.12 ± 13.54, −0.68 ± 20.42, and 1.34 ± 21.04. The results show that the deep learning network based on the self-supervised framework constructed in this manuscript can accurately detect the feature points of a heartbeat, laying the foundation for automatic extraction of key information related to ECG-based diagnosis. Full article
Article
An Image Encryption Algorithm Using Cascade Chaotic Map and S-Box
Entropy 2022, 24(12), 1827; https://doi.org/10.3390/e24121827 - 14 Dec 2022
Viewed by 396
Abstract
This paper proposes an image encryption algorithm based on a cascaded chaotic system to improve the performance of the encryption algorithm. First, an improved cascaded two-dimensional map, the 2D Cosine-Logistic-Sine map (2D-CLSM), is proposed; the cascaded chaotic system offers good advantages in terms of key space, complexity and sensitivity to initial conditions. By using control parameters and initial values associated with the plaintext, the system generates two chaotic sequences associated with the plaintext image. Then, an S-box construction method is proposed, and an encryption method is designed based on the S-box. Encryption is divided into bit-level encryption and pixel-level encryption, and a diffusion method is devised to improve security and efficiency in bit-level encryption. Performance analysis shows that the encryption algorithm has good security and easily resists various attacks. Full article
(This article belongs to the Special Issue Computational Imaging and Image Encryption with Entropy)
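The general idea of cascading simple chaotic maps and using the resulting sequence as a keystream can be illustrated compactly. Note this is a generic logistic-into-sine cascade with XOR diffusion; the paper's 2D-CLSM, S-box, and bit-level scheme are not reproduced here:

```python
import numpy as np

def cascaded_ls(x, r, n):
    """Generate a chaotic sequence by cascading a logistic map into a sine
    map each step. (Illustrative only: the paper's 2D-CLSM couples cosine,
    logistic and sine maps in two dimensions.)"""
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)      # logistic map step
        x = np.sin(np.pi * x)      # cascaded sine map step
        out[i] = x
    return out

# Secret key = initial value + control parameter of the cascade.
plain = np.array([52, 55, 61, 59, 70], dtype=np.uint8)   # toy "pixels"
stream = cascaded_ls(0.3456, 3.99, plain.size)
keystream = ((stream * 1e6).astype(np.int64) % 256).astype(np.uint8)
cipher = plain ^ keystream
assert np.array_equal(cipher ^ keystream, plain)         # XOR diffusion inverts
print(cipher)
```

Cascading is what enlarges the key space and parameter sensitivity the abstract mentions: a tiny change in `x` or `r` yields an entirely different keystream after a few iterations.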
Article
Improved Gravitational Search Algorithm Based on Adaptive Strategies
Entropy 2022, 24(12), 1826; https://doi.org/10.3390/e24121826 - 14 Dec 2022
Viewed by 353
Abstract
The gravitational search algorithm is a global optimization algorithm that has the advantages of a swarm intelligence algorithm. Compared with traditional algorithms, the performance in terms of global search and convergence is relatively good, but the solution is not always accurate, and the algorithm has difficulty jumping out of locally optimal solutions. In view of these shortcomings, an improved gravitational search algorithm based on an adaptive strategy is proposed. The algorithm uses the adaptive strategy to improve the updating methods for the distance between particles, gravitational constant, and position in the gravitational search model. This strengthens the information interaction between particles in the group and improves the exploration and exploitation capacity of the algorithm. In this paper, 13 classical single-peak and multi-peak test functions were selected for simulation performance tests, and the CEC2017 benchmark function was used for a comparison test. The test results show that the improved gravitational search algorithm can address the tendency of the original algorithm to fall into local extrema and significantly improve both the solution accuracy and the ability to find the globally optimal solution. Full article
(This article belongs to the Section Multidisciplinary Applications)
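The baseline update rules that the adaptive strategies modify can be sketched compactly. Below is a minimal, non-elitist GSA on a toy objective; all parameter values are illustrative assumptions, and none of the paper's adaptive distance/constant/position updates are included:

```python
import numpy as np

def gsa(f, dim=2, n=20, iters=200, g0=100.0, alpha=20.0, seed=1):
    """Minimal gravitational search: agents attract one another with forces
    proportional to fitness-derived masses, and the gravitational constant
    G decays to shift the search from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    xbest, fbest = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < fbest:                       # track best-so-far
            fbest, xbest = fit.min(), x[fit.argmin()].copy()
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst - 1e-12)  # best agent -> mass ~1
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)         # decaying constant
        a = np.zeros((n, dim))
        for i in range(n):
            d = x - x[i]
            dist = np.linalg.norm(d, axis=1) + 1e-9
            a[i] = G * (rng.random(n) * M / dist) @ d
        v = rng.random((n, dim)) * v + a
        x = x + v
    return xbest, fbest

sphere = lambda z: float(np.sum(z ** 2))
xb, fb = gsa(sphere)
print(fb)   # best objective found; far below typical random-start values
```

The stagnation this baseline shows on multi-modal functions, once G has decayed and the swarm has collapsed, is precisely what the adaptive distance, gravitational-constant, and position updates in the paper target.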
Article
A New Model of Interval-Valued Intuitionistic Fuzzy Weighted Operators and Their Application in Dynamic Fusion Target Threat Assessment
Entropy 2022, 24(12), 1825; https://doi.org/10.3390/e24121825 - 14 Dec 2022
Viewed by 357
Abstract
Existing missile defense target threat assessment methods ignore the target timing and battlefield changes, leading to low assessment accuracy. In order to overcome this problem, a dynamic multi-time fusion target threat assessment method is proposed. In this method, a new interval-valued intuitionistic fuzzy weighted averaging operator is proposed to effectively aggregate multi-source uncertain information; an interval-valued intuitionistic fuzzy entropy based on a cosine function (IVIFECF) is designed to determine the target attribute weight; an improved interval-valued intuitionistic fuzzy number distance measurement model is constructed to improve the discrimination of assessment results. Specifically, first of all, we define new interval-valued intuitionistic fuzzy operation rules based on algebraic operations. We use these rules to provide a new model of interval-valued intuitionistic fuzzy weighted arithmetic averaging (IVIFWAA) and geometric averaging (IVIFWGA) operators, and prove a number of algebraic properties of these operators. Then, considering the subjective and objective weights of the incoming target, a comprehensive weight model of target attributes based on IVIFECF is proposed, and the Poisson distribution method is used to solve the time series weights to process multi-time situation information. On this basis, the IVIFWAA and IVIFWGA operators are used to aggregate the decision information from multiple times and multiple decision makers. Finally, based on the improved TOPSIS method, the interval-valued intuitionistic fuzzy numbers are ordered, and the weighted multi-time fusion target threat assessment result is obtained. Comparative simulation results show that the proposed method can effectively improve the reliability and accuracy of target threat assessment in missile defense. Full article
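For orientation, the classical interval-valued intuitionistic fuzzy weighted averaging operator (Xu's IVIFWA) can be written in a few lines. The paper defines modified operation rules, so this is the standard baseline its IVIFWAA operator builds on, not the authors' new operator:

```python
import numpy as np

def ivifwa(values, w):
    """Classical IVIFWA aggregation of interval-valued intuitionistic
    fuzzy numbers ([a,b],[c,d]): membership intervals combine via
    1 - prod(1 - .)^w, non-membership intervals via prod(.)^w.

    values: list of ([a,b],[c,d]); w: weights summing to 1.
    """
    mu = np.array([v[0] for v in values])   # membership intervals, (n, 2)
    nu = np.array([v[1] for v in values])   # non-membership intervals, (n, 2)
    w = np.asarray(w)[:, None]
    mem = 1.0 - np.prod((1.0 - mu) ** w, axis=0)
    non = np.prod(nu ** w, axis=0)
    return mem, non

# Two assessments of one target attribute, weighted 0.3 / 0.7:
vals = [([0.4, 0.5], [0.2, 0.3]),
        ([0.6, 0.7], [0.1, 0.2])]
mem, non = ivifwa(vals, [0.3, 0.7])
print(np.round(mem, 3), np.round(non, 3))
```

The aggregated intervals always lie between the corresponding bounds of the inputs, which is why such operators are a natural fusion step before TOPSIS-style ranking.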
Article
An Efficient CRT-Base Power-of-Two Scaling in Minimally Redundant Residue Number System
Entropy 2022, 24(12), 1824; https://doi.org/10.3390/e24121824 - 14 Dec 2022
Viewed by 530
Abstract
In this paper, we consider one of the key problems in modular arithmetic. It is known that scaling in the residue number system (RNS) is a rather complicated non-modular procedure, which requires expensive and complex operations at each iteration. Hence, it is time consuming and needs too much hardware for implementation. We propose a novel approach to power-of-two scaling based on the Chinese Remainder Theorem (CRT) and the rank form of the number representation in RNS. By using minimal redundancy of the residue code, we optimize and speed up the rank calculation and parity determination of divisible integers in each iteration. The proposed enhancements make power-of-two scaling simpler and faster than the currently known methods. After calculating the rank of the initial number, each iteration of modular scaling by two is performed in one modular clock cycle. The computational complexity of the proposed method of scaling by a constant S_l = 2^l, associated with both the required modular addition operations and lookup tables, is estimated as k and 2k + 1, respectively, where k equals the number of primary non-redundant RNS moduli. The time complexity is log₂k + l modular clock cycles. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
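The basic mechanics of power-of-two scaling inside a residue system can be shown with a tiny example: plain CRT reconstruction plus modular halving (multiplying each residue by 2⁻¹ mod mᵢ). The paper's actual contribution — fast rank-based parity detection using minimal redundancy — is not reproduced here; evenness is simply assumed known at each step:

```python
from math import prod

def to_rns(x, moduli):
    """Represent x by its residues modulo pairwise-coprime moduli."""
    return [x % m for m in moduli]

def crt(res, moduli):
    """Chinese Remainder Theorem reconstruction from residues."""
    M = prod(moduli)
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(a, -1, m): modular inverse
    return x % M

def halve(res, moduli):
    """Divide an *even* RNS number by 2 entirely in residue arithmetic:
    multiply each residue by 2^-1 mod m_i (all moduli odd)."""
    return [(r * pow(2, -1, m)) % m for r, m in zip(res, moduli)]

moduli = [7, 11, 13]     # pairwise coprime, odd
x = 168                  # divisible by 8, so we can scale by 2^3
res = to_rns(x, moduli)
for _ in range(3):       # one cheap modular step per halving
    res = halve(res, moduli)
print(crt(res, moduli))  # 168 / 8 = 21
```

Each halving touches only k independent small-modulus multiplications, which is why, once parity can be decided cheaply, each scaling iteration fits in one modular clock cycle.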
Article
BPDGAN: A GAN-Based Unsupervised Back Project Dense Network for Multi-Modal Medical Image Fusion
Entropy 2022, 24(12), 1823; https://doi.org/10.3390/e24121823 - 14 Dec 2022
Viewed by 471
Abstract
Single-modality medical images often cannot contain sufficient valid information to meet the information requirements of clinical diagnosis. The diagnostic efficiency is always limited by observing multiple images at the same time. Image fusion is a technique that combines functional modalities such as positron emission computed tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) to supplement the complementary information. Meanwhile, fusing two anatomical images (like CT-MRI) is often required to replace a single MRI, and the fused images can improve the efficiency and accuracy of clinical diagnosis. To this end, in order to achieve high-quality, high-resolution and rich-detail fusion without artificial priors, an unsupervised deep learning image fusion framework is proposed in this paper, named the back project dense generative adversarial network (BPDGAN) framework. In particular, we construct a novel network based on the back project dense block (BPDB) and convolutional block attention module (CBAM). The BPDB can effectively mitigate the impact of black backgrounds on image content, while the CBAM improves the performance of BPDGAN on texture and edge information. Finally, qualitative and quantitative experiments demonstrate the superiority of BPDGAN: in terms of quantitative metrics, BPDGAN outperforms the state-of-the-art comparisons by approximately 19.58%, 14.84%, 10.40% and 86.78% on the AG, EI, Qabf and Qcv metrics, respectively. Full article
(This article belongs to the Special Issue Advances in Image Fusion)