Entropy doi: 10.3390/e23050614

Authors: Benjamin De Bari Alexandra Paxton Dilip K. Kondepudi Bruce A. Kay James A. Dixon

Coordination within and between organisms is one of the most complex abilities of living systems, requiring the concerted regulation of many physiological constituents, and this complexity can be particularly difficult to explain by appealing to physics. A valuable framework for understanding biological coordination is the coordinative structure, a self-organized assembly of physiological elements that collectively performs a specific function. Coordinative structures are characterized by three properties: (1) multiple coupled components, (2) soft-assembly, and (3) functional organization. Coordinative structures have been hypothesized to be specific instantiations of dissipative structures, that is, non-equilibrium, self-organized physical systems exhibiting complex pattern formation in structure and behavior. We pursued this hypothesis by testing for these three properties of coordinative structures in an electrically-driven dissipative structure. Our system demonstrates dynamic reorganization in response to functional perturbation, a behavior of coordinative structures called reciprocal compensation. Reciprocal compensation is corroborated by a dynamical systems model of the underlying physics. This coordinated activity of the system appears to derive from the system’s intrinsic end-directed behavior to maximize the rate of entropy production. The paper includes three primary components: (1) empirical data on emergent coordinated phenomena in a physical system, (2) computational simulations of this physical system, and (3) theoretical evaluation of the empirical and simulated results in the context of physics and the life sciences. This study reveals similarities between an electrically-driven dissipative structure that exhibits end-directed behavior and the goal-oriented behaviors of more complex living systems.

Entropy doi: 10.3390/e23050613

Authors: Haodong Li Fang Fang Zhiguo Ding

Multi-access edge computing (MEC) and non-orthogonal multiple access (NOMA) are regarded as promising technologies for improving the computation capability and offloading efficiency of mobile devices in the sixth-generation (6G) mobile system. This paper focuses on the hybrid NOMA-MEC system, where multiple users are first grouped into pairs, the users in each pair offload their tasks simultaneously by NOMA, and a dedicated time duration is then scheduled for the more delay-tolerant user to upload its remaining data by orthogonal multiple access (OMA). For conventional NOMA uplink transmission, successive interference cancellation (SIC) is applied to decode the superposed signals successively according to the channel state information (CSI) or the quality-of-service (QoS) requirement. In this work, we integrate the hybrid SIC scheme, which dynamically adapts the SIC decoding order among all NOMA groups. To solve the user grouping problem, a deep reinforcement learning (DRL)-based algorithm is proposed to obtain a close-to-optimal user grouping policy. Moreover, we minimize the offloading energy consumption by obtaining the closed-form solution to the resource allocation problem. Simulation results show that the proposed algorithm converges fast and that the NOMA-MEC scheme outperforms the existing OMA scheme.

Entropy doi: 10.3390/e23050612

Authors: Anna Delmonte Alba Crescente Matteo Carrega Dario Ferraro Maura Sassetti

We consider a quantum battery based on a two-level system coupled to cavity radiation by means of a two-photon interaction. Various figures of merit, such as stored energy, average charging power, energy fluctuations, and extractable work, are investigated, considering, as possible initial conditions for the cavity, a Fock state, a coherent state, and a squeezed state. We show that the Fock state leads to the best performance for the battery. However, a coherent state with the same average number of photons, even though it is affected by stronger fluctuations in the stored energy, results in quite interesting performance, in particular because it allows almost all of the stored energy to be extracted as usable work at short enough times.

Entropy doi: 10.3390/e23050611

Authors: Choe Sim Choo

In order to predict the flow discharge when designing and operating a pipeline, accurate determination of the flow loss due to pipe friction is very important. However, existing pipe friction coefficient equations either rely on variables that are difficult to obtain or are only applicable to pipes with specific conditions. This study therefore develops a new equation for predicting the pipe friction coefficient using statistically based entropy concepts, which are currently used in various fields. Whereas existing formulas for the pipe friction coefficient require the friction head loss and the Reynolds number, the proposed equation requires only the pipe specifications, the entropy value, and the mean velocity, all of which are easy to obtain and estimate. Comparison with Nikuradse’s experimental data shows R2 and RMSE values of 0.998 and 0.000366 for smooth pipes, and 0.979 to 0.994 and 0.000399 to 0.000436 for rough pipes; discrepancy ratio analysis shows that the discrepancy is very close to zero for both smooth and rough pipes. The proposed equation will enable easier estimation of flow rates.
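The abstract does not reproduce the new equation itself. As background for the entropy-based approach, here is a minimal sketch of Chiu's classic entropy relation between the cross-sectional mean and maximum velocity; the entropy parameter M and this specific relation are background from the entropy-flow literature, not the paper's new formula:

```python
import math

def chiu_velocity_ratio(M: float) -> float:
    """Chiu's entropy-based ratio of mean to maximum velocity,
    phi(M) = exp(M)/(exp(M) - 1) - 1/M, for entropy parameter M > 0."""
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

def mean_velocity(u_max: float, M: float) -> float:
    """Cross-sectional mean velocity from the maximum velocity and M."""
    return u_max * chiu_velocity_ratio(M)
```

A larger entropy parameter M corresponds to a flatter velocity profile, so the mean-to-maximum ratio increases with M.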

Entropy doi: 10.3390/e23050610

Authors: Wang Jahanshahi Wang Bekiros Liu Aly

Although most of the early research studies on fractional-order systems were based on the Caputo or Riemann–Liouville fractional-order derivatives, it has recently been proven that these methods have some drawbacks. For instance, kernels of these methods have a singularity that occurs at the endpoint of an interval of definition. Thus, to overcome this issue, several new definitions of fractional derivatives have been introduced. The Caputo–Fabrizio fractional order is one of these nonsingular definitions. This paper is concerned with the analysis and design of an optimal control strategy for a Caputo–Fabrizio fractional-order model of the HIV/AIDS epidemic. The Caputo–Fabrizio fractional-order model of HIV/AIDS is considered to prevent the singularity problem, which is a real concern in the modeling of real-world systems and phenomena. Firstly, in order to find out how the population of each compartment can be controlled, sensitivity analyses were conducted. Based on the sensitivity analyses, the most effective agents in disease transmission and prevalence were selected as control inputs. In this way, a modified Caputo–Fabrizio fractional-order model of the HIV/AIDS epidemic is proposed. By changing the contact rate of susceptible and infectious people, the atraumatic restorative treatment rate of the treated compartment individuals, and the sexual habits of susceptible people, optimal control was designed. Lastly, simulation results that demonstrate the appropriate performance of the Caputo–Fabrizio fractional-order model and proposed control scheme are illustrated.

Entropy doi: 10.3390/e23050609

Authors: Noor Sajid Laura Convertino Karl Friston

Biological forms depend on a progressive specialization of pluripotent stem cells. The differentiation of these cells in their spatial and functional environment defines the organism itself; however, cellular mutations may disrupt the mutual balance between a cell and its niche, where cell proliferation and specialization are released from their autopoietic homeostasis. This induces the construction of cancer niches and maintains their survival. In this paper, we characterise cancer niche construction as a direct consequence of interactions between clusters of cancer and healthy cells. Explicitly, we evaluate these higher-order interactions between niches of cancer and healthy cells using Kikuchi approximations to the free energy. Kikuchi’s free energy is measured in terms of changes to the sum of energies of baseline clusters of cells (or nodes) minus the energies of overcounted cluster intersections (and interactions of interactions, etc.). We posit that these changes in energy node clusters correspond to a long-term reduction in the complexity of the system conducive to cancer niche survival. We validate this formulation through numerical simulations of apoptosis, local cancer growth, and metastasis, and highlight its implications for a computational understanding of the etiopathology of cancer.

Entropy doi: 10.3390/e23050608

Authors: Prasoon Kumar Vinodkumar Cagri Ozcinar Gholamreza Anbarjafari

CRISPR/Cas9 is a powerful genome-editing technology that has been widely applied in targeted gene repair and gene expression regulation. One of the main challenges for the CRISPR/Cas9 system is the occurrence of unexpected cleavage at some sites (off-targets), and predicting them is necessary due to its relevance in gene editing research. Very few deep learning models have been developed so far to predict the off-target propensity of single guide RNA (sgRNA) at specific DNA fragments by using artificial feature extraction operations and machine learning techniques; however, this is a convoluted process that is difficult for researchers to understand and implement. In this research work, we introduce a novel graph-based approach to predicting the off-target efficacy of sgRNA in the CRISPR/Cas9 system that is easy for researchers to understand and replicate. This is achieved by creating a graph with sequences as nodes and using a link prediction method to predict the presence of links between sgRNA and off-target-inducing target DNA sequences, with features extracted from within the sequences. We used the HEK293 and K562 datasets in our experiments. The graph convolutional network (GCN) predicted the off-target gene knockouts (using link prediction) by predicting the links between sgRNA and off-target sequences with an auROC value of 0.987.

Entropy doi: 10.3390/e23050607

Authors: Dehesa

The spreading of the stationary states of multidimensional single-particle systems with a central potential is quantified by means of Heisenberg-like measures (radial and logarithmic expectation values) and entropy-like quantities (Fisher, Shannon, Rényi) of the position and momentum probability densities. Since the potential is assumed to be analytically unknown, these dispersion and information-theoretical measures are given by means of inequality-type relations which are explicitly shown to depend on the dimensionality and the state's angular hyperquantum numbers. The spherical-symmetry and spin effects on these spreading properties are obtained by use of various integral inequalities (Daubechies–Thakkar, Lieb–Thirring, Redheffer–Weyl, ...) and a variational approach based on the extremization of entropy-like measures. Emphasis is placed on the uncertainty relations, upon which the probabilistic theory of quantum systems essentially relies.

Entropy doi: 10.3390/e23050606

Authors: Thomas Parr

Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state—or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that—in analogy with computational neurology and psychiatry—metabolic disorders may be characterized as false inference under aberrant prior beliefs.

Entropy doi: 10.3390/e23050605

Authors: Elad Romanov Or Ordentlich

Motivated by applications in unsourced random access, this paper develops a novel scheme for the problem of compressed sensing of binary signals. In this problem, the goal is to design a sensing matrix A and a recovery algorithm such that the sparse binary vector x can be recovered reliably from the measurements y=Ax+σz, where z is additive white Gaussian noise. We propose to design A as the parity-check matrix of a low-density parity-check (LDPC) code and to recover x from the measurements y using a Markov chain Monte Carlo algorithm, which runs relatively fast due to the sparse structure of A. The performance of our scheme is comparable to that of state-of-the-art schemes, which use dense sensing matrices, while enjoying the advantages of a sparse sensing matrix.
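As a toy illustration of the measurement model y = Ax + σz with a sparse, LDPC-like sensing matrix, here is a minimal Python sketch. For simplicity it uses noiseless measurements (σ = 0) and a greedy bit-flip decoder in place of the paper's Markov chain Monte Carlo sampler; the problem sizes and the decoder are assumptions for illustration only:

```python
import random

random.seed(0)

n, m, row_weight = 24, 16, 3      # signal length, number of measurements, nonzeros per row

# Sparse binary sensing matrix, LDPC-like: each row has only a few nonzeros.
A = [[0] * n for _ in range(m)]
for row in A:
    for j in random.sample(range(n), row_weight):
        row[j] = 1

# Sparse binary signal to recover.
x_true = [0] * n
for j in random.sample(range(n), 3):
    x_true[j] = 1

def measure(x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

y = measure(x_true)               # noiseless measurements for the sketch

def residual(x):
    return sum((yi - zi) ** 2 for yi, zi in zip(y, measure(x)))

# Greedy bit-flip descent: accept any single-bit flip that lowers the residual.
x = [0] * n
best = residual(x)
improved = True
while improved:
    improved = False
    for j in range(n):
        x[j] ^= 1
        r = residual(x)
        if r < best:
            best, improved = r, True
        else:
            x[j] ^= 1             # revert the flip
```

The sparsity of A is what keeps each residual evaluation cheap; the same property is what makes the authors' MCMC sampler fast.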

Entropy doi: 10.3390/e23050604

Authors: Piotr Frąckiewicz

Over the last twenty years, quantum game theory has given us many ideas of how quantum games could be played. One of the most prominent ideas in the field is the model of quantum bimatrix game playing introduced by J. Eisert, M. Wilkens and M. Lewenstein. The scheme assumes that the players’ strategies are unitary operations and that the players act on a maximally entangled two-qubit state. The quantum nature of the scheme has been under discussion since the article by Eisert et al. came out. The aim of our paper is to identify some of the non-classical features of the quantum scheme.

Entropy doi: 10.3390/e23050603

Authors: Arthur Prat-Carrabin Florent Meyniel Misha Tsodyks Azeredo da Silveira

When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and, consequently, a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a 'resource-rational' (or 'bounded-rational') picture of seemingly irrational human cognition.
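The trade-off can be sketched on a grid of candidate coin biases. Below, a precision-style cost is implemented via the standard resource-rational solution to minimizing KL divergence minus β times the belief entropy, which amounts to flattening the Bayesian posterior by the exponent 1/(1+β); the grid, the knob beta, and this particular functional form are illustrative assumptions, not the paper's exact model:

```python
import math

GRID = [i / 100 for i in range(1, 100)]   # candidate coin biases p

def update(prior, flip, beta=0.5):
    """One costly-Bayesian update on a grid belief over the coin bias.
    First apply Bayes' rule for one flip (True = heads), then tilt the
    posterior toward uniform with exponent 1/(1 + beta), the solution of
    min KL(q || posterior) - beta * H(q). beta = 0 recovers exact Bayes."""
    post = [pr * (p if flip else 1.0 - p) for pr, p in zip(prior, GRID)]
    z = sum(post)
    post = [w / z for w in post]
    # Precision cost: penalize over-concentrated (low-entropy) beliefs.
    post = [w ** (1.0 / (1.0 + beta)) for w in post]
    z = sum(post)
    return [w / z for w in post]
```

With beta > 0 the belief stays flatter than the exact posterior, so the inferred bias is pulled toward 1/2 and past observations are effectively discounted.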

Entropy doi: 10.3390/e23050602

Authors: Hongming Zhu Xiaowen Wang Yizhi Jiang Hongfei Fan Bowen Du Qin Liu

Instance matching is a key task in knowledge graph fusion, and improving its efficiency is critical given the increasing scale of knowledge graphs. Blocking algorithms, which select candidate instance pairs for comparison, are one of the effective means to this end. In this paper, we propose a novel blocking algorithm named MultiObJ, which constructs indexes for instances based on the Ordered Joint of Multiple Objects’ features to limit the number of candidate instance pairs. Based on MultiObJ, we further propose a distributed framework named Follow-the-Regular-Leader Instance Matching (FTRLIM), which matches instances between large-scale knowledge graphs with approximately linear time complexity. FTRLIM participated in OAEI 2019 and achieved the best matching quality with significantly higher efficiency. In this research, we construct three data collections based on a real-world large-scale knowledge graph. Experimental results on the constructed data collections and two real-world datasets indicate that MultiObJ and FTRLIM outperform other state-of-the-art methods.
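The general idea of blocking can be sketched with a simple key-based index: instances sharing a blocking key land in the same bucket, and only intra-bucket pairs are compared instead of all O(n^2) pairs. This is a generic illustration, not the MultiObJ index itself; the helper name `block_candidates` and the blocking key are assumptions:

```python
from collections import defaultdict
from itertools import combinations

def block_candidates(instances, key):
    """Index instances by a blocking key and emit only intra-block
    candidate pairs, avoiding the full pairwise comparison."""
    index = defaultdict(list)
    for inst in instances:
        index[key(inst)].append(inst)
    pairs = []
    for bucket in index.values():
        pairs.extend(combinations(bucket, 2))
    return pairs
```

For three records of which two share a city, blocking on the city yields a single candidate pair rather than the three pairs a full comparison would produce.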

Entropy doi: 10.3390/e23050601

Authors: Louis Anthony Cox

For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.

Entropy doi: 10.3390/e23050600

Authors: António M. Lopes José A. Tenreiro Machado

Time-series generated by complex systems (CS) are often characterized by phenomena such as chaoticity, fractality and memory effects, which pose difficulties in their analysis. The paper explores the dynamics of multidimensional data generated by a CS. The Dow Jones Industrial Average (DJIA) index is selected as a test-bed. The DJIA time-series is normalized and segmented into several time window vectors. These vectors are treated as objects that characterize the DJIA dynamical behavior. The objects are then compared by means of different distances to generate proper inputs to dimensionality reduction and information visualization algorithms. These computational techniques produce meaningful representations of the original dataset according to the (dis)similarities between the objects. The time is displayed as a parametric variable and the non-locality can be visualized by the corresponding evolution of points and the formation of clusters. The generated portraits reveal a complex nature, which is further analyzed in terms of the emerging patterns. The results show that the adoption of dimensionality reduction and visualization tools for processing complex data is a key modeling option with the current computational resources.
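The normalize-segment-compare step can be sketched as follows; the window width, the non-overlapping segmentation, and the Euclidean distance are illustrative assumptions (the paper compares several distances before feeding the results to dimensionality reduction):

```python
import math

def window_distance_matrix(series, width):
    """Normalize a series to zero mean and unit variance, cut it into
    non-overlapping windows, and return the matrix of pairwise
    Euclidean distances between windows."""
    mu = sum(series) / len(series)
    sd = math.sqrt(sum((v - mu) ** 2 for v in series) / len(series))
    norm = [(v - mu) / sd for v in series]
    windows = [norm[i:i + width]
               for i in range(0, len(norm) - width + 1, width)]
    return [[math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
             for v in windows] for u in windows]
```

The resulting symmetric matrix is the typical input to dimensionality reduction tools such as multidimensional scaling, which then produce the 2-D or 3-D portraits described in the abstract.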

Entropy doi: 10.3390/e23050599

Authors: Danilo Santos Cruz João M. de Araújo Carlos A. N. da Costa Carlos C. N. da Silva

Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as prior information to decrease the ambiguity of the obtained results. For this, we propose adding a relative entropy term to the formalism of full waveform inversion. In this context, entropy is simply a nomenclature for regularisation, whose role is to help convergence to the global minimum. Applying entropy in inverse problems usually involves formulating the problem so that statistical concepts can be used; to avoid this step, we propose a deterministic application to full waveform inversion. We discuss some aspects of relative entropy and show three different ways of using it to add prior information through entropy in the inverse problem, using a dynamic weighting scheme. The idea is that the prior information can help find the path to the global minimum at the beginning of the inversion process. In all cases, the prior information can be incorporated very quickly into the full waveform inversion and lead the inversion to the desired solution. Including the logarithmic weighting that constitutes entropy in the inverse problem suppresses low-intensity ripples and sharpens point events, so the addition of relative entropy to full waveform inversion can provide a result with better resolution. In regions of the BP 2004 model where salt is present, we obtained a significant improvement for synthetic data by adding prior information through the relative entropy. We show that prior information added through entropy in the full-waveform inversion formalism proves to be a way to avoid local minima.

Entropy doi: 10.3390/e23050598

Authors: Lin Wang Ronghua Shi Jian Dong

The dragonfly algorithm (DA) is a new intelligent algorithm based on the theory of dragonfly foraging and evasion of predators. DA exhibits excellent performance in solving multimodal continuous functions and engineering problems. To make this algorithm work in binary space, this paper introduces an angle modulation mechanism into DA (called AMDA) to generate bit strings, that is, to give alternative solutions to binary problems, and uses DA to optimize the coefficients of the trigonometric generating function. Further, to improve the algorithm's stability and convergence speed, an improved AMDA, called IAMDA, is proposed by adding one more coefficient to adjust the vertical displacement of the cosine part of the original generating function. To test the performance of IAMDA and AMDA, 12 zero-one knapsack problems are considered along with 13 classic benchmark functions. Experimental results prove that IAMDA has superior convergence speed and solution quality compared to other algorithms.
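The angle modulation step can be sketched as follows, using the standard four-coefficient generating function from the angle-modulation literature, g(x) = sin(2π(x−a)·b·cos(2π(x−a)·c)) + d, sampled at integer positions; the extra coefficient e shifting the cosine part vertically reflects our reading of the abstract's description of IAMDA and is an assumption:

```python
import math

def angle_modulated_bits(coeffs, n_bits):
    """Map real-valued coefficients (a, b, c, d, e) to a bit string via
        g(x) = sin(2*pi*(x - a) * b * (cos(2*pi*(x - a) * c) + e)) + d,
    sampled at x = 0, 1, ..., n_bits - 1; bit = 1 where g(x) >= 0.
    Setting e = 0 recovers the classic four-coefficient form; the
    continuous optimizer (here, DA) evolves the coefficients."""
    a, b, c, d, e = coeffs
    bits = []
    for x in range(n_bits):
        g = math.sin(2 * math.pi * (x - a) * b *
                     (math.cos(2 * math.pi * (x - a) * c) + e)) + d
        bits.append(1 if g >= 0 else 0)
    return bits
```

The point of the construction is that a binary problem of any length is reduced to a five-dimensional continuous search, which DA already handles well.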

Entropy doi: 10.3390/e23050597

Authors: Michael Kreshchuk Shaoyang Jia William M. Kirby Gary Goldstein James P. Vary Peter J. Love

We present a quantum algorithm for simulation of quantum field theory in the light-front formulation and demonstrate how existing quantum devices can be used to study the structure of bound states in relativistic nuclear physics. Specifically, we apply the Variational Quantum Eigensolver algorithm to find the ground state of the light-front Hamiltonian obtained within the Basis Light-Front Quantization (BLFQ) framework. The BLFQ formulation of quantum field theory allows one to readily import techniques developed for digital quantum simulation of quantum chemistry. This provides a method that can be scaled up to simulation of full, relativistic quantum field theories in the quantum advantage regime. As an illustration, we calculate the mass, mass radius, decay constant, electromagnetic form factor, and charge radius of the pion on the IBM Vigo chip. This is the first time that the light-front approach to quantum field theory has been used to enable simulation of a real physical system on a quantum computer.

Entropy doi: 10.3390/e23050596

Authors: Kia Dashtipour Mandar Gogate Ahsan Adeel Hadi Larijani Amir Hussain

Sentiment analysis aims to automatically classify the subject’s sentiment (e.g., positive, negative, or neutral) towards a particular aspect, such as a topic, product, movie, or news item. Deep learning has recently emerged as a powerful machine learning technique to tackle the growing demand for accurate sentiment analysis. However, the majority of research efforts are devoted to the English language only, while information of great importance is also available in other languages. This paper presents a novel, context-aware, deep-learning-driven Persian sentiment analysis approach. Specifically, the proposed deep-learning-driven automated feature-engineering approach classifies Persian movie reviews as having positive or negative sentiments. Two deep learning algorithms, convolutional neural networks (CNN) and long short-term memory (LSTM), are applied and compared with our previously proposed manual-feature-engineering-driven, SVM-based approach. Simulation results demonstrate that LSTM obtained better performance than multilayer perceptron (MLP), autoencoder, support vector machine (SVM), logistic regression, and CNN algorithms.

Entropy doi: 10.3390/e23050595

Authors: Felix Thiel Itay Mualem David Kessler Eli Barkai

A classical random walker starting on a node of a finite graph will always reach any other node, since the search is ergodic, namely, it fully explores space; hence, the arrival probability is unity. For quantum walks, destructive interference may induce effectively non-ergodic features in such search processes. Under repeated projective local measurements, made on a target state, the final detection of the system is not guaranteed, since the Hilbert space is split into a bright subspace and an orthogonal dark one. Using this, we find an uncertainty relation for the deviations of the detection probability from its classical counterpart, in terms of the energy fluctuations.

Entropy doi: 10.3390/e23050594

Authors: Fushing Hsieh Elizabeth P. Chou Ting-Li Chen

We develop Categorical Exploratory Data Analysis (CEDA) with mimicking to explore and exhibit the complexity of the information content contained within any data matrix: categorical, discrete, or continuous. Such complexity is shown through visible and explainable serial multiscale structural dependency with heterogeneity. CEDA is developed upon all features’ categorical nature via histograms and is guided by all features’ associative patterns (order-2 dependence) in a mutual conditional entropy matrix. Higher-order structural dependency of k (≥3) features is exhibited through block patterns within heatmaps that are constructed by permuting contingency-kD-lattices of counts. By growing k, the resultant heatmap series contains the global and large scales of structural dependency that constitute the data matrix’s information content. When continuous features are involved, principal component analysis (PCA) extracts fine-scale information content from each block in the final heatmap. Our mimicking protocol coherently simulates this heatmap series by preserving global-to-fine-scale structural dependency. At every step of the mimicking process, each accepted simulated heatmap is subject to constraints with respect to all of the reliably observed categorical patterns. For reliability and robustness in the sciences, CEDA with mimicking enhances data visualization by revealing deterministic and stochastic structures within each scale-specific structural dependency. For inferences in machine learning (ML) and statistics, it clarifies upon which scales which covariate feature-groups have major versus minor predictive powers on response features. For the social justice of artificial intelligence (AI) products, it checks whether a data matrix incompletely prescribes the targeted system.
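The order-2 building block named above, a conditional entropy computed from categorical counts, can be sketched directly; the function names are illustrative, and the matrix convention (entry [i][j] = H(feature_j | feature_i)) is an assumption about orientation, not taken from the paper:

```python
import math
from collections import Counter

def conditional_entropy(xs, ys):
    """H(Y|X) in bits from paired categorical observations, using the
    identity H(Y|X) = H(X, Y) - H(X) with empirical frequencies."""
    def entropy(counts):
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    return entropy(Counter(zip(xs, ys))) - entropy(Counter(xs))

def mce_matrix(features):
    """Conditional entropy matrix over a list of categorical features:
    entry [i][j] = H(feature_j | feature_i)."""
    return [[conditional_entropy(fi, fj) for fj in features]
            for fi in features]
```

A zero entry means one feature fully determines the other, which is exactly the kind of associative pattern CEDA uses to order and group features.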

Entropy doi: 10.3390/e23050593

Authors: Simona Tripaldi Luciano Telesca Michele Lovallo

Temperature and composition at fumaroles are controlled by several volcanic and exogenous processes that operate on various spatial and temporal scales. Here, we analyze fluctuations of temperature and chemical composition recorded at fumarolic vents in Solfatara (Campi Flegrei caldera, Italy) from December 1997 to December 2015, in order to better understand their source(s) and driving processes. Applying singular spectral analysis, we found that the trends explain a great part of the variance of the geochemical series but not of the temperature series. On the other hand, a common source, also shared by other geo-indicators (ground deformation, seismicity, hydrogeological and meteorological data), seems to be linked with the oscillatory structure of the investigated signals. The informational characteristics of the temperature and geochemical compositions, analyzed using the Fisher–Shannon method, appear to be a sort of fingerprint of their different periodic structure. In fact, the oscillatory components were characterized by a wide range of significant, nearly equally powerful periodicities and show a higher degree of entropy, indicating that changes are influenced by overlapping processes occurring at different scales with rather similar intensity. The present study represents an advancement in the understanding of the dominant driving mechanisms of volcanic signals at fumaroles that might also be valid for other volcanic areas.

Entropy doi: 10.3390/e23050592

Authors: Maria Rubega Emanuela Formaggio Franco Molteni Eleonora Guanziroli Roberto Di Marco Claudio Baracchini Mario Ermani Nick S. Ward Stefano Masiero Alessandra Del Felice

Stroke is the commonest cause of disability. Novel treatments require an improved understanding of the underlying mechanisms of recovery. Fractal approaches have demonstrated that a single metric can describe the complexity of seemingly random fluctuations of physiological signals. We hypothesize that fractal algorithms applied to electroencephalographic (EEG) signals may track brain impairment after stroke. Sixteen stroke survivors were studied in the hyperacute (<48 h) and the acute phase (∼1 week after stroke), and 35 stroke survivors were studied during the early subacute phase (from 8 to 32 days and after ∼2 months after stroke). We compared their resting-state EEG fractal changes, using fractal measures (i.e., Higuchi index, tortuosity), with those of 11 healthy controls. Both Higuchi index and tortuosity values were significantly lower after a stroke throughout the acute and early subacute stages compared to healthy subjects, reflecting brain activity that is significantly less complex. These indices may be promising metrics to track behavioral changes in the very early stages after stroke. Our findings might contribute to the neurorehabilitation quest of identifying reliable biomarkers for better tailoring of rehabilitation pathways.
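One of the fractal measures named above, the Higuchi index, can be sketched directly. This is the standard Higuchi fractal dimension algorithm, with kmax as an assumed free parameter (the abstract does not give the authors' exact settings):

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: average curve length
    L(k) at scales k = 1..kmax, then the slope of log L(k) vs log(1/k).
    A smooth curve gives ~1; a very irregular signal approaches 2."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            num = (n - 1 - m) // k            # number of increments at offset m
            if num < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, num + 1))
            lengths.append(dist * (n - 1) / (num * k * k))
        log_k.append(math.log(1.0 / k))
        log_l.append(math.log(sum(lengths) / len(lengths)))
    # Least-squares slope of log L(k) against log(1/k).
    mk = sum(log_k) / len(log_k)
    ml = sum(log_l) / len(log_l)
    return (sum((a - mk) * (b - ml) for a, b in zip(log_k, log_l)) /
            sum((a - mk) ** 2 for a in log_k))
```

For a straight-line signal the curve length scales exactly as L(k) ∝ 1/k, so the estimated dimension is 1, which is a convenient sanity check for any implementation.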

Entropy doi: 10.3390/e23050590

Authors: Lieneke Kusters Frans M. J. Willems

We present a new Multiple-Observations (MO) helper data scheme for secret-key binding to an SRAM-PUF. This MO scheme binds a single key to multiple enrollment observations of the SRAM-PUF. Performance is improved in comparison to classic schemes which generate helper data based on a single enrollment observation. The performance increase can be explained by the fact that the reliabilities of the different SRAM cells are modeled (implicitly) in the helper data. We prove that the scheme achieves secret-key capacity for any number of enrollment observations, and, therefore, it is optimal. We evaluate performance of the scheme using Monte Carlo simulations, where an off-the-shelf LDPC code is used to implement the linear error-correcting code. Another scheme that models the reliabilities of the SRAM cells is the so-called Soft-Decision (SD) helper data scheme. The SD scheme considers the one-probabilities of the SRAM cells as an input, which in practice are not observable. We present a new strategy for the SD scheme that considers the binary SRAM-PUF observations as an input instead and show that the new strategy is optimal and achieves the same reconstruction performance as the MO scheme. Finally, we present a variation on the MO helper data scheme that updates the helper data sequentially after each successful reconstruction of the key. As a result, the error-correcting performance of the scheme is improved over time.
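For context, the classic single-enrollment construction that the MO scheme improves upon can be sketched as a code-offset scheme: the helper data is the XOR of the PUF response with a codeword, and reconstruction decodes the noisy codeword recovered from a fresh response. A repetition code with majority decoding stands in here for the off-the-shelf LDPC code used in the paper; all names and sizes are illustrative:

```python
def enroll(sram, key_bits, rep=5):
    """Code-offset helper data: XOR the enrollment SRAM-PUF response
    with a repetition-code codeword of the key. The helper data reveals
    nothing about the key as long as the PUF response is unpredictable."""
    codeword = [b for b in key_bits for _ in range(rep)]
    return [s ^ c for s, c in zip(sram, codeword)]

def reconstruct(sram_noisy, helper, rep=5):
    """XOR a fresh (noisy) PUF response with the helper data to obtain a
    noisy codeword, then decode each block by majority vote."""
    noisy_cw = [s ^ h for s, h in zip(sram_noisy, helper)]
    return [1 if sum(noisy_cw[i * rep:(i + 1) * rep]) > rep // 2 else 0
            for i in range(len(noisy_cw) // rep)]
```

A rate-1/5 repetition code tolerates up to two bit flips per block; the MO scheme's gain comes from shaping the helper data using several enrollment observations instead of the single one used here.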

]]>Entropy doi: 10.3390/e23050591

Authors: Liangliang Li Hongbing Ma

Multimodal medical image fusion aims to fuse images with complementary multisource information. In this paper, we propose a novel multimodal medical image fusion method using a pulse coupled neural network (PCNN) and a weighted sum of eight-neighborhood-based modified Laplacian (WSEML), integrating guided image filtering (GIF) in the non-subsampled contourlet transform (NSCT) domain. Firstly, the source images are decomposed by NSCT into several low- and high-frequency sub-bands. Secondly, the PCNN-based fusion rule is used to process the low-frequency components, and the GIF-WSEML fusion model is used to process the high-frequency components. Finally, the fused image is obtained by integrating the fused low- and high-frequency sub-bands. The experimental results demonstrate that the proposed method achieves better performance in multimodal medical image fusion, with obvious advantages in the objective evaluation indices VIFF, QW, API, SD, and EN, as well as in time consumption.
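For readers unfamiliar with the focus measure underlying WSEML, a plain eight-neighborhood modified Laplacian can be sketched as below (my illustration, not the authors' code; the paper's WSEML additionally applies distance-based weighting over a window, and the GIF step is omitted):

```python
import numpy as np

def modified_laplacian_8(img):
    """Eight-neighborhood modified Laplacian: absolute second differences
    along the horizontal, vertical, and both diagonal directions,
    with edge-replicated padding at the borders."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +   # horizontal
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]) +   # vertical
            np.abs(2 * c - p[:-2, :-2] - p[2:, 2:]) +      # main diagonal
            np.abs(2 * c - p[2:, :-2] - p[:-2, 2:]))       # anti-diagonal
```

Flat regions score zero while sharp transitions score high, which is why such measures are used to select high-frequency coefficients during fusion.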

]]>Entropy doi: 10.3390/e23050589

Authors: Laura Felline

At the basis of the problem of explaining non-local quantum correlations lies the tension between two factors: on the one hand, the natural interpretation of correlations as the manifestation of a causal relation; on the other, the resistance on the part of the physics underlying said correlations to adjust to the most essential features of a pre-theoretic notion of causation. In this paper, I argue for the rejection of the first horn of the dilemma, i.e., the assumption that quantum correlations call for a causal explanation. The paper is divided into two parts. The first, destructive, part provides a critical overview of the enterprise of causally interpreting non-local quantum correlations, with the aim of warning against the temptation of an account of causation claiming to cover such correlations ‘for free’. The second, constructive, part introduces the so-called structural explanation (a variety of non-causal explanation that shows how the explanandum is the manifestation of a fundamental structure of the world) and argues that quantum correlations might be explained structurally in the context of an information-theoretic approach to QT.

]]>Entropy doi: 10.3390/e23050588

Authors: Zhaolong Zheng Hao Ma Weichao Yan Haoyang Liu Zaiyue Yang

Although commercial motion-capture systems have been widely used in various applications, their complex setup limits the application scenarios for ordinary consumers. To overcome the drawbacks in wearability, human posture reconstruction based on a few wearable sensors has been actively studied in recent years. In this paper, we propose a deep-learning-based sparse inertial sensor human posture reconstruction method. This method uses a bidirectional recurrent neural network (Bi-RNN) to build an a priori model of human motion from a large motion dataset, thereby mapping low-dimensional motion measurements to whole-body posture. To improve the motion reconstruction performance for specific application scenarios, two fundamental problems in the model construction are investigated: training data selection and sparse sensor placement. The deep-learning training data selection problem is to select independent and identically distributed (IID) data for a certain scenario from the accumulated imbalanced motion dataset with sufficient information. We formulate the data selection as an optimization problem to obtain continuous and IID data segments that comply with a small reference dataset collected from the target scenario. A two-step heuristic algorithm is proposed to solve the data selection problem. On the other hand, the optimal sensor placement problem is studied to exploit the most information from partial observation of human movement. A method for evaluating the motion information amount of any group of wearable inertial sensors based on mutual information is proposed, and a greedy searching method is adopted to obtain the approximately optimal sensor placement for a given sensor number, so that maximum motion information and minimum redundancy are achieved.
Finally, the human posture reconstruction performance is evaluated with different training data and sensor placement selection methods, and experimental results show that the proposed method offers advantages in both posture reconstruction accuracy and model training time. In the six-sensor configuration, the posture reconstruction errors of our model for walking, running, and playing basketball are 7.25, 8.84, and 14.13, respectively.
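A greedy mutual-information search for sensor placement might be sketched as follows (a toy version with discretized readings and a histogram entropy estimator; the sensor names and data are hypothetical, and the paper's information measure may differ):

```python
import numpy as np
from collections import Counter

def entropy_of(labels):
    """Plug-in Shannon entropy (bits) of a list of hashable symbols."""
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def mutual_info(joint_feats, target):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) on discretized readings."""
    return (entropy_of(joint_feats) + entropy_of(target)
            - entropy_of(list(zip(joint_feats, target))))

def greedy_placement(readings, target, k):
    """Pick k sensors whose joint (discretized) readings share the most
    information with the target motion label."""
    chosen, remaining = [], list(readings.keys())
    for _ in range(k):
        def joint(sensors):
            return list(zip(*(readings[s] for s in sensors)))
        best = max(remaining,
                   key=lambda s: mutual_info(joint(chosen + [s]), target))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because each step scores the joint set of already-chosen sensors plus one candidate, a sensor that is redundant with the current selection adds little marginal information and is skipped.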

]]>Entropy doi: 10.3390/e23050587

Authors: Matteo Carrega Joonho Kim Dario Rosa

In this paper, we study non-equilibrium dynamics induced by a sudden quench of strongly correlated Hamiltonians with all-to-all interactions. By relying on a Sachdev-Ye-Kitaev (SYK)-based quench protocol, we show that the time evolution of simple spin-spin correlation functions is highly sensitive to the degree of k-locality of the corresponding operators, once an appropriate set of fundamental fields is identified. By tracking the time-evolution of specific spin-spin correlation functions and their decay, we argue that it is possible to distinguish between operator-hopping and operator growth dynamics; the latter being a hallmark of quantum chaos in many-body quantum systems. Such an observation, in turn, could constitute a promising tool to probe the emergence of chaotic behavior, rather accessible in state-of-the-art quench setups.

]]>Entropy doi: 10.3390/e23050586

Authors: Marcin Sosnowski Jaroslaw Krzywanski Radomír Ščurek

Based on the increased attention, the Special Issue aims to investigate the modeling of complex systems using artificial intelligence and computational methods [...]

]]>Entropy doi: 10.3390/e23050585

Authors: Nkosinathi Dlamini Santi Prestipino Giuseppe Pellicane

We study self-assembly on a spherical surface of a model for a binary mixture of amphiphilic dimers in the presence of guest particles via Monte Carlo (MC) computer simulation. All particles had a hard core, but one monomer of the dimer also interacted with the guest particle by means of a short-range attractive potential. We observed the formation of aggregates of various shapes as a function of the composition of the mixture and of the size of guest particles. Our MC simulations are a further step towards a microscopic understanding of experiments on colloidal aggregation over curved surfaces, such as oil droplets.

]]>Entropy doi: 10.3390/e23050584

Authors: Oded Shor Felix Benninger Andrei Khrennikov

We describe a proposal for a fundamental theory in which classical and quantum physics are unified as a representation of the universe as a gigantic dendrogram. The latter is the explicate order structure corresponding to the purely number-theoretical implicate order structure given by p-adic numbers. This number field is zero-dimensional, totally disconnected, and disordered. Physical systems (such as electrons and photons) are sub-dendrograms of the universal dendrogram. The measurement process is described as interactions among dendrograms; in particular, the quantum measurement problem can be resolved using this process. The theory is realistic, but realism is expressed via the Leibniz principle of the Identity of Indiscernibles. The classical-quantum interplay is based on the degree of indistinguishability between dendrograms (in which the ergodicity assumption is removed). Depending on this degree, some physical quantities behave more or less in a quantum manner (versus a classical manner). Conceptually, our theory is very close to Smolin’s dynamics of difference and Rovelli’s relational quantum mechanics. The presence of classical behavior in nature implies the finiteness of the Universe-dendrogram. (An infinite Universe is considered to be purely quantum.) Reconstruction of events in a four-dimensional space type is based on the holographic principle. Our model reproduces Bell-type correlations in the dendrogramic framework. By adjusting dendrogram complexity, the violation of the Bell inequality can be made larger or smaller.

]]>Entropy doi: 10.3390/e23050583

Authors: Pavel Kraikivski

Random fluctuations in neuronal processes may contribute to variability in perception and increase the information capacity of neuronal networks. Various sources of random processes have been characterized in the nervous system on different levels. However, in the context of neural correlates of consciousness, the robustness of mechanisms of conscious perception against inherent noise in neural dynamical systems is poorly understood. In this paper, a stochastic model is developed to study the implications of noise on dynamical systems that mimic neural correlates of consciousness. We computed power spectral densities and spectral entropy values for dynamical systems that contain a number of mutually connected processes. Interestingly, we found that spectral entropy decreases linearly as the number of processes within the system doubles. Further, power spectral density frequencies shift to higher values as system size increases, revealing an increasing impact of negative feedback loops and regulations on the dynamics of larger systems. Overall, our stochastic modeling and analysis results reveal that large dynamical systems of mutually connected and negatively regulated processes are more robust against inherent noise than small systems.
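Spectral entropy as used above can be computed by normalizing the power spectral density into a probability distribution and taking its Shannon entropy (a minimal sketch, assuming a simple FFT-based periodogram rather than the authors' estimator):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized power spectral density."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd = psd[1:]                  # drop the DC component
    p = psd / psd.sum()            # normalize into a probability distribution
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```

A narrowband oscillation yields entropy near zero, while broadband noise approaches the maximum, log2 of the number of frequency bins; systems whose power concentrates at a few characteristic frequencies therefore score lower.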

]]>Entropy doi: 10.3390/e23050581

Authors: Jaromir Tosiek Maciej Przanowski

We focus on several questions arising during the modelling of quantum systems on a phase space. First, we discuss the choice of phase space and its structure. We include an interesting case of discrete phase space. Then, we introduce the respective algebras of functions containing quantum observables. We also consider the possibility of performing strict calculations and indicate cases where only formal considerations can be performed. We analyse alternative realisations of strict and formal calculi, which are determined by different kernels. Finally, two classes of Wigner functions as representations of states are investigated.

]]>Entropy doi: 10.3390/e23050582

Authors: Gang Li Hong-Dong Ma Rong-Yue Liu Meng-Di Shen Ke-Xin Zhang

Background: The credit scoring model is an effective tool for banks and other financial institutions to distinguish potential default borrowers. The credit scoring model represented by machine learning methods such as deep learning performs well in terms of the accuracy of default discrimination, but the model itself also has many shortcomings, such as many hyperparameters and a large dependence on big data, and there is still a lot of room to improve its interpretability and robustness. Methods: The deep forest, or multi-Grained Cascade Forest (gcForest), is a decision tree depth model based on the random forest algorithm. Using multidimensional scanning and cascading processing, gcForest can effectively identify and process high-dimensional feature information; at the same time, it has few hyperparameters and strong robustness. Thus, this paper constructs a two-stage hybrid default discrimination model based on multiple feature selection methods and the gcForest algorithm, and optimizes the parameters with the lowest type II error as the first principle and the highest AUC and accuracy as the second and third principles. GcForest not only reflects the advantages of traditional statistical models in terms of interpretability and robustness but also takes into account the advantages of deep learning models in terms of accuracy. Results: The validity of the hybrid default discrimination model is verified on three real open credit datasets (Australian, Japanese, and German) from the UCI database. Conclusions: The performance of gcForest is better than currently popular single classifiers such as ANN and common ensemble classifiers such as LightGBM and CNNs in terms of type II error, AUC, and accuracy. Moreover, comparison with other similar research results further verifies the robustness and effectiveness of this model.

]]>Entropy doi: 10.3390/e23050580

Authors: Carmen Moret-Tatay David García-Ramos Begoña Sáiz-Mauleón Daniel Gamermann Cyril Bertheaux Céline Borg

The face is a fundamental feature of our identity. In humans, the existence of specialized processing modules for faces is now widely accepted. However, identifying the processes involved for proper names is more problematic. The aim of the present study is to examine which of the two processes, face or name recognition, occurs earlier, and whether social abilities have an influence. We selected 100 university students divided into two groups: Spanish and USA students. They had to recognize famous faces or names using a masked priming task. An analysis of variance on the reaction times (RT) was used to determine whether significant differences could be observed in word or face recognition and between the Spanish and USA groups. Additionally, to examine the role of outliers, an exponentially modified Gaussian distribution was employed. Famous faces were recognized faster than names, and differences were observed between Spanish and North American participants, but not for unknown distracting faces. The current results suggest that response times for face processing might be faster than for name recognition, which supports the idea of differences in processing nature.

]]>Entropy doi: 10.3390/e23050579

Authors: Agustin Pérez-Madrid Ivan Santamaría-Holek

We present a novel theoretical approach to the problem of light energy conversion in thermostated semiconductor junctions. Using the classical model of a two-level atom, we deduced formulas for the spectral response and the quantum efficiency in terms of the input photons’ non-zero chemical potential. We also calculated the spectral entropy production and the global efficiency parameter in the thermodynamic limit. The heat transferred to the thermostat results in a dissipative loss that appreciably controls the behavior of the spectral quantities and, therefore, the cell’s performance. The application of the obtained formulas to data extracted from photovoltaic cells enabled us to accurately interpolate experimental data for the spectral response and the quantum efficiency of cells based on Si, GaAs, and CdTe, among others.

]]>Entropy doi: 10.3390/e23050578

Authors: Yuhui Shi Yashuang Deng

Dynamical degradation occurs when chaotic systems are implemented on digital devices, which seriously threatens the security of chaos-based cryptosystems. The existing solutions mainly focus on the compensation of dynamical properties rather than on the elimination of the inherent biases of chaotic systems. In this paper, a unidirectional hybrid control method is proposed to improve the dynamical properties and to eliminate the biases of digital chaotic maps. A continuous chaotic system is introduced to provide external feedback control of the given digital chaotic map. Three different control modes are investigated, and the influence of the control parameter on the properties of the controlled system is discussed. The experimental results show that the proposed method can not only improve the dynamical degradation of the digital chaotic map but also make the controlled digital system produce outputs with desirable performance. Finally, a pseudorandom number generator (PRNG) is proposed. Statistical analysis shows that the PRNG has good randomness and almost ideal entropy values.
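The idea of unidirectionally perturbing a degraded digital map with a continuous chaotic system can be illustrated with a toy example (the specific maps, coupling form, and parameters here are my assumptions, not the paper's scheme):

```python
def logistic_digital(x, precision=4):
    """Finite-precision logistic map: rounding models the dynamical
    degradation that occurs on digital devices."""
    return round(4.0 * x * (1.0 - x), precision)

def lorenz_step(s, dt=0.005):
    """One Euler step of the continuous Lorenz system (sigma=10, rho=28,
    beta=8/3), used as the external perturbation source."""
    x, y, z = s
    return (x + dt * 10.0 * (y - x),
            y + dt * (x * (28.0 - z) - y),
            z + dt * (x * y - 8.0 / 3.0 * z))

def controlled_sequence(n, x0=0.3, precision=4, eps=1e-4):
    """Unidirectional hybrid control: the Lorenz signal nudges the degraded
    digital map each step; no feedback flows back to the Lorenz system."""
    seq, x, s = [], x0, (1.0, 1.0, 1.0)
    for _ in range(n):
        s = lorenz_step(s)
        x = (logistic_digital(x, precision) + eps * abs(s[0])) % 1.0
        seq.append(x)
    return seq
```

Without the perturbation, the rounded map is trapped in a small finite state set and must cycle; the ever-changing external signal breaks those short cycles.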

]]>Entropy doi: 10.3390/e23050577

Authors: Tzu-Chuen Lu Ping-Chung Yang Biswapati Jana

In 2018, Tseng et al. proposed a dual-image reversible embedding method based on the modified Least Significant Bit matching (LSB matching) method. This method improved on the dual-image LSB matching method proposed by Lu et al. In Lu et al.’s scheme, there are seven situations that cannot be restored and need to be modified. Furthermore, the scheme uses two pixels to conceal four secret bits. The maximum modification of each pixel, in Lu et al.’s scheme, is two. To decrease the modification, Tseng et al. use one pixel to embed two secret bits and allow the maximum modification to decrease from two to one such that the image quality can be improved. This study enhances Tseng et al.’s method by re-encoding the modified rule table based on the probability of each hiding combination. The scheme analyzes the frequency occurrence of each combination and sets the lowest modified codes to the highest frequency case to significantly reduce the amount of modification. Experimental results show that better image quality is obtained using our method under the same amount of hiding payload.
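The frequency-based re-encoding idea can be sketched as follows (a simplified stand-in: the real scheme maps hiding combinations to dual-image pixel modifications, whereas this toy assigns the cheapest entry of a hypothetical modification list to the most frequent secret-bit patterns):

```python
from collections import Counter

# candidate pixel modifications, ordered by cost (absolute change to a pixel)
MODS = [0, +1, -1, +2, -2]

def build_rule_table(secret_bits, width=2):
    """Map each width-bit secret pattern to a pixel modification, giving
    the cheapest modifications to the most frequent patterns."""
    pairs = [tuple(secret_bits[i:i + width])
             for i in range(0, len(secret_bits) - width + 1, width)]
    ranked = [pat for pat, _ in Counter(pairs).most_common()]
    return {pat: MODS[i] for i, pat in enumerate(ranked)}

def total_modification(secret_bits, table, width=2):
    """Total absolute pixel change needed to embed the whole secret."""
    pairs = [tuple(secret_bits[i:i + width])
             for i in range(0, len(secret_bits) - width + 1, width)]
    return sum(abs(table[p]) for p in pairs)
```

Because the zero-cost code goes to the most common pattern, the expected per-pixel distortion, and hence the PSNR loss, drops relative to any fixed assignment.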

]]>Entropy doi: 10.3390/e23050576

Authors: Ernesto Sanz Antonio Saa-Requejo Carlos H. Díaz-Ambrona Margarita Ruiz-Ramos Alfredo Rodríguez Eva Iglesias Paloma Esteve Bárbara Soriano Ana M. Tarquis

Estimates suggest that more than 70% of the world’s rangelands are degraded. The Normalized Difference Vegetation Index (NDVI) is commonly used by ecologists and agriculturalists to monitor vegetation and contribute to more sustainable rangeland management. This paper aims to explore the scaling character of NDVI and NDVI anomaly (NDVIa) time series by applying three fractal analyses: generalized structure function (GSF), multifractal detrended fluctuation analysis (MF-DFA), and Hurst index (HI). The study was conducted in four study areas in Southeastern Spain. Results suggest a multifractal character influenced by different land uses and spatial diversity. MF-DFA indicated an antipersistent character in study areas, while GSF and HI results indicated a persistent character. Different behaviors of generalized Hurst and scaling exponents were found between herbaceous and tree dominated areas. MF-DFA and surrogate and shuffle series allow us to study multifractal sources, reflecting the importance of long-range correlations in these areas. Two types of long-range correlation appear to be in place due to short-term memory reflecting seasonality and longer-term memory based on a time scale of a year or longer. The comparison of these series also provides us with a differentiating profile to distinguish among our four study areas that can improve land use and risk management in arid rangelands.

]]>Entropy doi: 10.3390/e23050575

Authors: Gian Marco Palamara José A. Capitán David Alonso

Functional responses are non-linear functions commonly used to describe the variation in the rate of consumption of resources by a consumer. They have been widely used in both theoretical and empirical studies, but a comprehensive understanding of their parameters at different levels of description remains elusive. Here, by depicting consumers and resources as stochastic systems of interacting particles, we present a minimal set of reactions for consumer-resource dynamics. We rigorously derived the corresponding system of ODEs, from which we obtained, via asymptotic expansions, classical 2D consumer-resource dynamics characterized by different functional responses. We also derived functional responses by focusing on the subset of reactions describing only the feeding process. This involves fixing the total number of consumers and resources, which we call chemostatic conditions. By comparing these two ways of deriving functional responses, we showed that classical functional response parameters in effective 2D consumer-resource dynamics differ from the same parameters obtained by measuring (or deriving) functional responses for typical feeding experiments under chemostatic conditions, which points to potential errors in interpreting empirical data. We finally discuss possible generalizations of our models to systems with multiple consumers and more complex population structures, including spatial dynamics. Our stochastic approach builds on fundamental ecological processes and has natural connections to basic ecological theory.

]]>Entropy doi: 10.3390/e23050574

Authors: Chendong Xu Weigang Wang Yunwei Zhang Jie Qin Shujuan Yu Yun Zhang

With the increasing demand for location-based services, neural network (NN)-based intelligent indoor localization has attracted great interest due to its high localization accuracy. However, deep NNs are usually affected by degradation and gradient vanishing. To fill this gap, we propose a novel indoor localization system, including a denoising NN and a residual network (ResNet), to predict the location of a moving object from the channel state information (CSI). In the ResNet, to prevent overfitting, we replace all the residual blocks with stochastic residual blocks. Specifically, we explore the long-range stochastic shortcut connection (LRSSC) to solve the degradation problem and gradient vanishing. To obtain a large receptive field without losing information, we leverage dilated convolution at the rear of the ResNet. Experimental results are presented to confirm that our system outperforms state-of-the-art methods in a representative indoor environment.

]]>Entropy doi: 10.3390/e23050573

Authors: Alexey V. Melkikh

Quantum entanglement can cause the efficiency of a heat engine to be greater than the efficiency of the Carnot cycle. However, this does not mean a violation of the second law of thermodynamics, since there is no local equilibrium for pure quantum states, and, in the absence of local equilibrium, thermodynamics cannot be formulated correctly. Von Neumann entropy is not a thermodynamic quantity, although it can characterize the ordering of a system. In the case of the entanglement of the particles of the system with the environment, the concept of an isolated system should be refined. In any case, quantum correlations cannot lead to a violation of the second law of thermodynamics in any of its formulations. This article is devoted to a technical discussion of the expected results on the role of quantum entanglement in thermodynamics.

]]>Entropy doi: 10.3390/e23050572

Authors: Guiyun Liu Zhimin Peng Zhongwei Liang Junqiang Li Lefeng Cheng

Virus spreading problems in wireless rechargeable sensor networks (WSNs) are becoming a hot topic, and the problem has been studied and discussed in recent years. Many epidemic spreading models have been introduced for revealing how a virus spreads and how a virus is suppressed. However, most of them assumed that the sensors are not rechargeable, and most existing works do not consider virus mutation. This paper proposes a novel epidemic model, including susceptible, infected, variant, low-energy and dead states, which considers rechargeable sensors and the virus mutation factor. The stability of the proposed model is first analyzed by adopting the characteristic equation and constructing Lyapunov functions. Then, an optimal control problem is formulated to control the virus spread and decrease the cost of the networks by applying Pontryagin’s maximum principle. Finally, all of the theoretical results are confirmed by numerical simulation.
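The five-state structure can be illustrated with a toy mean-field ODE integrated by Euler steps (the rate structure and parameter values are illustrative assumptions of mine, not the model analyzed in the paper):

```python
def sivld_step(state, dt=0.01, beta=0.4, mu=0.05,
               drain=0.02, charge=0.1, death=0.01):
    """One Euler step of a toy susceptible/infected/variant/low-energy/dead
    model: susceptibles are infected by either strain, infected nodes
    mutate into the variant, nodes drain to low-energy and recharge back,
    and infected nodes die. Fractions sum to 1 by construction."""
    S, I, V, L, D = state
    inf = beta * S * (I + V)                 # infection by original and variant
    dS = -inf - drain * S + charge * L
    dI = inf - mu * I - death * I
    dV = mu * I - death * V
    dL = drain * S - charge * L
    dD = death * (I + V)
    return tuple(x + dt * dx for x, dx in zip(state, (dS, dI, dV, dL, dD)))
```

Because the five derivatives sum to zero, the total node fraction is conserved, a basic sanity check that also holds for the compartment models the paper builds on.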

]]>Entropy doi: 10.3390/e23050571

Authors: Michael A. Wilson Andrew Pohorille

We use stochastic simulations to investigate the performance of two recently developed methods for calculating the free energy profiles of ion channels and their electrophysiological properties, such as current–voltage dependence and reversal potential, from molecular dynamics simulations at a single applied voltage. These methods require neither knowledge of the diffusivity nor simulations at multiple voltages, which greatly reduces the computational effort required to probe the electrophysiological properties of ion channels. They can be used to determine the free energy profiles from either forward or backward one-sided properties of ions in the channel, such as ion fluxes, density profiles, committor probabilities, or from their two-sided combination. By generating large sets of stochastic trajectories, which are individually designed to mimic the molecular dynamics crossing statistics of models of channels of trichotoxin, p7 from hepatitis C and a bacterial homolog of the pentameric ligand-gated ion channel, GLIC, we find that the free energy profiles obtained from stochastic simulations corresponding to molecular dynamics simulations of even a modest length are burdened with statistical errors of only 0.3 kcal/mol. Even with many crossing events, applying two-sided formulas substantially reduces statistical errors compared to one-sided formulas. With a properly chosen reference voltage, the current–voltage curves can be reproduced with good accuracy from simulations at a single voltage in a range extending for over 200 mV. If possible, the reference voltages should be chosen not simply to drive a large current in one direction, but to observe crossing events in both directions.

]]>Entropy doi: 10.3390/e23050570

Authors: Mengna Shi Shiyu Guo Xiaomeng Song Yanqi Zhou Erfu Wang

The network security transmission of digital images needs to solve the dual security problems of content and appearance. In this paper, a visually secure image compression and encryption scheme is proposed by combining compressed sensing (CS) and regional energy. The plain image is compressed and encrypted into a secret image by CS and zigzag confusion. Then, according to the regional energy, the secret image is embedded into a carrier image to obtain the final visually secure cipher image. A method of hour hand printing (HHP) scrambling is proposed to increase pixel irrelevance. Regional energy embedding reduces the damage to the visual quality of the carrier image, and the different embedding positions between images greatly enhance the security of the encryption algorithm. Furthermore, the hyperchaotic multi-character system (MCS) is utilized to construct the measurement matrix and control pixels. Simulation results and security analyses demonstrate the effectiveness, security and robustness of the proposed algorithm.
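The zigzag confusion step, reading pixels along anti-diagonals in the JPEG-style order, can be sketched as a permutation (my illustration; the paper's full scheme combines this with CS measurement and HHP scrambling, which are omitted):

```python
import numpy as np

def zigzag_indices(rows, cols):
    """Traversal order of the classic zigzag scan: sort by anti-diagonal
    index r + c, alternating direction on even/odd diagonals."""
    return sorted(((r, c) for r in range(rows) for c in range(cols)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_confuse(img):
    """Flatten the image along the zigzag path, scattering neighbors."""
    return np.array([img[r, c] for r, c in zigzag_indices(*img.shape)])

def zigzag_restore(flat, shape):
    """Invert the confusion by writing values back along the same path."""
    img = np.empty(shape, dtype=flat.dtype)
    for v, (r, c) in zip(flat, zigzag_indices(*shape)):
        img[r, c] = v
    return img
```

Because the permutation is deterministic and invertible, the receiver can undo it exactly, while spatially adjacent pixels end up far apart in the scanned sequence, reducing local correlation.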

]]>Entropy doi: 10.3390/e23050569

Authors: Arezou Rezazadeh Josep Font-Segura Alfonso Martinez Albert Guillén i Fàbregas

This paper studies a generalized version of multi-class cost-constrained random-coding ensemble with multiple auxiliary costs for the transmission of N correlated sources over an N-user multiple-access channel. For each user, the set of messages is partitioned into classes and codebooks are generated according to a distribution depending on the class index of the source message and under the constraint that the codewords satisfy a set of cost functions. Proper choices of the cost functions recover different coding schemes including message-dependent and message-independent versions of independent and identically distributed, independent conditionally distributed, constant-composition and conditional constant composition ensembles. The transmissibility region of the scheme is related to the Cover-El Gamal-Salehi region. A related family of correlated-source Gallager source exponent functions is also studied. The achievable exponents are compared for correlated and independent sources, both numerically and analytically.

]]>Entropy doi: 10.3390/e23050568

Authors: Joanna Olbryś Krzysztof Ostrowski

The aim of this study is to investigate market depth as a stock market liquidity dimension. A new methodology for market depth measurement based on Shannon information entropy for high-frequency data is introduced and utilized. The proposed entropy-based market depth indicator is supported by an algorithm inferring the initiator of a trade. This new indicator seems to be a promising liquidity measure, as both market entropy and market liquidity can be directly measured by it. The findings of empirical experiments on real data, with a time stamp rounded to the nearest second, from the Warsaw Stock Exchange (WSE) confirm that the new proxy enables us to effectively compare market depth and liquidity for different equities. Robustness tests and statistical analyses are conducted, and an intra-day seasonality assessment is provided. Results indicate that the entropy-based approach can be considered an auspicious market depth and liquidity proxy with an intuitive basis for both theoretical and empirical analyses in financial markets.
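A toy version of an entropy-based depth indicator, the Shannon entropy of the volume shares of individual trades in an intraday bucket, can be sketched as follows (the paper's indicator is more elaborate, e.g., it incorporates the inferred trade initiator, which is omitted here):

```python
import math

def depth_entropy(trade_volumes):
    """Shannon entropy (bits) of the volume shares of individual trades
    in one intraday bucket. Volume spread evenly across many trades
    (a deep market) maximizes the entropy; one dominant trade
    (a shallow market) minimizes it."""
    total = sum(trade_volumes)
    return -sum((v / total) * math.log2(v / total)
                for v in trade_volumes if v > 0)
```

With n equal-volume trades the value is exactly log2(n), giving an intuitive upper bound against which observed buckets can be compared across equities.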

]]>Entropy doi: 10.3390/e23050567

Authors: Xudong Jiang Yihao Tang Zhaohui Liu Venkat Raman

When operating under lean fuel–air conditions, flame flashback is an operational safety issue in stationary gas turbines. In particular, with the increased use of hydrogen, the propagation of the flame through the boundary layers into the mixing section becomes feasible. Typically, these mixing regions are not designed to hold a high-temperature flame and can lead to catastrophic failure of the gas turbine. Flame flashback along the boundary layers is a competition between chemical reactions in a turbulent flow, where fuel and air are incompletely mixed, and heat loss to the wall that promotes flame quenching. The focus of this work is to develop a comprehensive simulation approach to model boundary layer flashback, accounting for fuel–air stratification and wall heat loss. A large eddy simulation (LES) based framework is used, along with a tabulation-based combustion model. Different approaches to tabulation and the effect of wall heat loss are studied. An experimental flashback configuration is used to understand the predictive accuracy of the models. It is shown that diffusion-flame-based tabulation methods are better suited due to the flashback occurring in relatively low-strain and lean fuel–air mixtures. Further, the flashback is promoted by the formation of features such as flame tongues, which induce negative velocity separated boundary layer flow that promotes upstream flame motion. The wall heat loss alters the strength of these separated flows, which in turn affects the flashback propensity. Comparisons with experimental data for both non-reacting cases that quantify fuel–air mixing and reacting flashback cases are used to demonstrate predictive accuracy.

]]>Entropy doi: 10.3390/e23050566

Authors: Xiaoqiang Chi Yang Xiang

Paraphrase generation is an important yet challenging task in natural language processing. Neural network-based approaches have achieved remarkable success in sequence-to-sequence learning. Previous paraphrase generation work generally ignores syntactic information regardless of its availability, with the assumption that neural nets could learn such linguistic knowledge implicitly. In this work, we make an endeavor to probe into the efficacy of explicit syntactic information for the task of paraphrase generation. Syntactic information can appear in the form of dependency trees, which could be easily acquired from off-the-shelf syntactic parsers. Such tree structures could be conveniently encoded via graph convolutional networks to obtain more meaningful sentence representations, which could improve generated paraphrases. Through extensive experiments on four paraphrase datasets with different sizes and genres, we demonstrate the utility of syntactic information in neural paraphrase generation under the framework of sequence-to-sequence modeling. Specifically, our graph convolutional network-enhanced models consistently outperform their syntax-agnostic counterparts using multiple evaluation metrics.

]]>Entropy doi: 10.3390/e23050565

Authors: Yuanbin Fu Jiayi Ma Xiaojie Guo

Image-to-image translation is used to convert an image of a certain style to another of the target style with the original content preserved. A desired translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With consideration of logical network partition, the generator of our EDIT comprises blocks partly configured by shared parameters, with the rest configured by varied parameters exported by an exemplar-domain aware parameter network, explicitly imitating the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by an extractor, while (re-)stylization is achieved by mapping the extracted features specifically to different purposes (domains and exemplars). In addition, a discriminator is equipped during the training phase to guarantee that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively work on multiple domains and arbitrary exemplars in a unified, neat model. We conduct experiments to show the efficacy of our design, and reveal its advances over other state-of-the-art methods both quantitatively and qualitatively.

Entropy doi: 10.3390/e23050564

Authors: Jialiang Huang Xiaoxia Wang Yuxi Luo Liying Yu Ziyuan Zhang

To explore how a manufacturer's or retailer's undertaking of corporate social responsibility (CSR) and different power structures affect their joint green marketing decisions and profits in the green supply chain, this paper establishes green supply chain optimization models under six different decision-making scenarios, according to two different CSR bearers and three different power structures. Based on the main assumptions of a linear product demand function and CSR measured by consumer surplus, this paper solves for the equilibrium strategies of the manufacturer and the retailer through game theory. The results show the following. First, the difference in the degree of CSR undertaken by manufacturers and retailers leads to a difference in the ranking of the optimal strategies of both parties under the three power structures. Second, under the same power structure, compared with undertaking CSR oneself, when the other party undertakes CSR, the product's green degree, the level of green promotion, the party's own profit, and the profit of the other party are all higher. Third, regardless of the power structure, manufacturers and retailers undertaking CSR is conducive to improving the level of product greenness, increasing green promotion, lowering the retail price, increasing consumers' willingness to buy green products, and ultimately helping to increase the profits of manufacturers and retailers.
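The CSR measure in this model builds on consumer surplus under a linear demand function. As a minimal sketch of that single building block (ignoring the green-degree and promotion terms of the full model, which this abstract does not specify), consumer surplus for demand q = a − bp is the triangular area under the inverse demand curve above the price:

```python
def consumer_surplus(a, b, p):
    """Consumer surplus under linear demand q = a - b*p:
    the triangular area below the inverse demand curve and above price p."""
    q = max(0.0, a - b * p)   # quantity sold at price p (zero if priced out)
    return q * q / (2 * b)    # area of the triangle = q^2 / (2b)
```

For example, with a = 10, b = 1, and p = 4, six units are sold and the surplus is 18.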

Entropy doi: 10.3390/e23050563

Authors: Ewa Roszkowska Marta Kusterka-Jefmańska Bartłomiej Jefmański

In the assessment of most complex socioeconomic phenomena with the use of multicriteria methods, continuous data are used, the source of which is most often public statistics. However, there are complex phenomena, such as quality of life and quality of services, for whose assessment questionnaire surveys and ordinal measurement scales are used. In this case, the use of classic multicriteria methods is very difficult, taking into account the way this type of data is presented by official statistics, as well as the permissible transformations and arithmetic operations. Therefore, the main purpose of this study was the presentation of a novel framework which can be applied for assessing socioeconomic phenomena on the basis of survey data. It was assumed that the object assessments may contain positive or negative opinions and an element of uncertainty expressed in the form of “no”, “difficult to say”, or “no opinion” answers. For this reason, the intuitionistic fuzzy TOPSIS (IF-TOPSIS) method is proposed. To demonstrate the potential of this solution, the results of measuring the subjective quality of life of the inhabitants of 83 cities in EU countries, EFTA countries, the UK, the Western Balkans, and Turkey are presented. For most cities, a high level of subjective quality of life was observed using the proposed approach. The highest level of quality of life was observed in Zurich, whereas the lowest was observed in Palermo.

Entropy doi: 10.3390/e23050562

Authors: Nasru Minallah Khadem Ullah Jaroslav Frnda Laiq Hasan Jan Nedoma

This article investigates the performance of various sophisticated channel coding and transmission schemes for achieving reliable transmission of a highly compressed video stream. Novel error protection schemes, including the Non-Convergent Coding (NCC) scheme, Non-Convergent Coding assisted with Differential Space Time Spreading (DSTS) and Sphere Packing (SP) modulation (NCDSTS-SP) scheme, and Convergent Coding assisted with DSTS and SP modulation (CDSTS-SP), are analyzed using the Bit Error Ratio (BER) and Peak Signal to Noise Ratio (PSNR) performance metrics. Furthermore, error reduction is achieved using a sophisticated transceiver comprising the SP modulation technique assisted by Differential Space Time Spreading. The performance of iterative Soft Bit Source Decoding (SBSD) in combination with channel codes is analyzed using various error protection setups with a consistent overall bit-rate budget. Additionally, the iterative behavior of the SBSD-assisted RSC decoder is analyzed with the aid of Extrinsic Information Transfer (EXIT) charts in order to characterize the achievable turbo cliff of the iterative decoding process. The subjective and objective video quality performance of the proposed error protection schemes is analyzed while employing the H.264 advanced video coding and H.265 high efficiency video coding standards, utilizing diverse video sequences with different resolutions, motion, and dynamism. It was observed that in the presence of a noisy channel, low-resolution videos outperform their high-resolution counterparts. Furthermore, video sequences with low motion content and dynamism outperform those with high motion content and dynamism.
More specifically, it is observed that while utilizing the H.265 video coding standard, the Non-Convergent Coding scheme assisted with DSTS and SP modulation and an enhanced transmission mechanism results in an Eb/N0 gain of 20 dB with reference to the Non-Convergent Coding and transmission mechanism at an objective PSNR value of 42 dB. It is important to mention that both schemes employ an identical code rate. Furthermore, the Convergent Coding mechanism assisted with DSTS and SP modulation achieved superior performance with reference to its equivalent-rate Non-Convergent counterpart, with a performance gain of 16 dB at an objective PSNR value of 42 dB. Moreover, it is observed that the maximum PSNR achievable with the H.265 video coding standard is 45 dB, a gain of 3 dB with reference to the identical-code-rate H.264 coding scheme.

Entropy doi: 10.3390/e23050561

Authors: Lianet Contreras Rodríguez Evaristo José Madarro-Capó Carlos Miguel Legón-Pérez Omar Rojas Guillermo Sosa-Gómez

Entropy makes it possible to measure the uncertainty about an information source from the distribution of its output symbols. It is known that the maximum Shannon entropy of a discrete source of information is reached when its symbols follow a uniform distribution. In cryptography, these sources have great applications since they allow for the highest security standards to be reached. In this work, the most effective estimator is selected to estimate entropy in short samples of bytes and bits with maximum entropy. For this, 18 estimators were compared. Results concerning the comparisons published in the literature between these estimators are discussed. The most suitable estimator is determined experimentally, based on its bias and mean square error for short samples of bytes and bits.
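A baseline among entropy estimators of this kind is the maximum-likelihood ("plug-in") estimator; a minimal sketch follows, with the Miller-Madow bias correction shown as one common variant (the correction is included for illustration and is an assumption, not necessarily one of the 18 estimators compared in the paper):

```python
import math
from collections import Counter

def plugin_entropy(sample, base=2):
    """Maximum-likelihood ("plug-in") Shannon entropy estimate, in units of log base."""
    n = len(sample)
    counts = Counter(sample)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

def miller_madow(sample, base=2):
    """Plug-in estimate with the Miller-Madow bias correction (K-1)/(2N), in nats,
    converted to the requested base."""
    n = len(sample)
    k = len(set(sample))  # number of observed symbols
    return plugin_entropy(sample, base) + (k - 1) / (2 * n * math.log(base))
```

On short samples the plug-in estimate is biased downward, which is exactly why corrected estimators are compared in studies like this one.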

Entropy doi: 10.3390/e23050560

Authors: Ayumu Nono Yusuke Uchiyama Kei Nakagawa

Volatility, which represents the magnitude of fluctuating asset prices or returns, is used in finance to design optimal asset allocations and to calculate the price of derivatives. Since volatility is unobservable, it is identified and estimated by latent variable models known as volatility fluctuation models. Almost all conventional volatility fluctuation models are linear time-series models and thus have difficulty capturing nonlinear and/or non-Gaussian properties of volatility dynamics. In this study, we propose an entropy-based Student's t-process dynamical model (ETPDM) as a volatility fluctuation model combining both nonlinear dynamics and non-Gaussian noise. The ETPDM estimates its latent variables and intrinsic parameters by robust particle filtering based on a generalized H-theorem for relative entropy. To test the performance of the ETPDM, we carry out numerical experiments on financial time series and confirm its robustness for a small number of particles by comparison with conventional particle filtering.

Entropy doi: 10.3390/e23050559

Authors: Andrés F. Almeida-Ñauñay Rosa María Benito Miguel Quemada Juan Carlos Losada Ana M. Tarquis

Multiple studies have revealed that pasture grasslands are a time-varying complex ecological system. Climate variables regulate vegetation growth, with precipitation and temperature being the most critical driving factors. This work aims to assess the response of two different Vegetation Indices (VIs) to the temporal dynamics of temperature and precipitation in a semiarid area. Two Mediterranean grassland zones situated in the center of Spain were selected to accomplish this goal. Correlations and cross-correlations between each VI and each climatic variable were computed. Different lagged responses of each VI series were detected, varying with zone, season of the year, and climatic variable. Recurrence Plot (RP) and Cross Recurrence Plot (CRP) analyses were applied to characterise and quantify the complexity of the system shown in the cross-correlation analysis. RPs pointed out that short-term predictability and high dimensionality of the VI series, as well as of precipitation, characterise these dynamics. Meanwhile, temperature showed a more regular pattern and lower dimensionality. CRPs revealed that precipitation was a critical variable for distinguishing between zones, due to its complex pattern and its influence on the soil's water balance, which the VI reflects. Overall, we demonstrate the potential of RPs and CRPs as adequate tools for analysing vegetation dynamics characterised by complexity.
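The lagged response of a VI series to a climatic driver can be probed with a plain lagged Pearson correlation. A minimal stdlib sketch (the exact cross-correlation conventions of the study are not given in the abstract, so the sign convention below is an assumption):

```python
def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag].
    Positive lag means y responds `lag` steps after x."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

Scanning `lag` over a range and locating the maximum of the correlation is the usual way to detect the kind of lagged responses described above.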

Entropy doi: 10.3390/e23050557

Authors: Ionel Jianu Iulia Jianu

This study investigates the conformity to Benford's Law of the information disclosed in financial statements. Using the first digit test of Benford's Law, the study analyses the reliability of financial information provided by listed companies on an emerging capital market before and after the implementation of International Financial Reporting Standards (IFRS). The results of the study confirm an increase in the reliability of the information disclosed in the financial statements after IFRS implementation. The study contributes to the existing literature by bringing new insights into the types of financial information that do not comply with Benford's Law, such as amounts determined by estimates or by applying professional judgment.
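The first digit test compares observed leading-digit frequencies with the Benford probabilities log10(1 + 1/d). A minimal sketch (the chi-square statistic is one common test statistic; the paper's exact statistic is not stated in the abstract):

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's Law probability of leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    """First significant digit of a positive amount."""
    s = str(abs(x)).lstrip("0.")  # drop leading zeros and the decimal point
    return int(s[0])

def first_digit_test(amounts):
    """Chi-square distance between observed leading-digit frequencies
    and the Benford distribution; larger values indicate poorer conformity."""
    digits = [first_digit(a) for a in amounts if a]
    n = len(digits)
    obs = Counter(digits)
    return sum((obs.get(d, 0) - n * benford_expected(d)) ** 2
               / (n * benford_expected(d)) for d in range(1, 10))
```

Applying the statistic to pre- and post-IFRS samples and comparing the two values mirrors the before/after comparison described above.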

Entropy doi: 10.3390/e23050558

Authors: Benjamín Toledo Pablo Medina Sylvain Blunier José Rogan Marina Stepanova Juan Alejandro Valdivia

This paper explores the spatial variations of the statistical scaling features of low- to high-latitude geomagnetic field fluctuations at Swarm altitude. The data for this study come from the vector field magnetometer onboard the Swarm A satellite, measured at low resolution (1 Hz) for one year (from 9 March 2016 to 9 March 2017). We estimated the structure-function scaling exponents using the p-leaders discrete wavelet multifractal technique, from which we obtained the singularity spectrum related to the magnetic fluctuations in the North-East-Center (NEC) coordinate system. From this estimation, we retain just the maximal fractal subset, associated with the Hurst exponent H. Here we present, thresholded for two levels of the Auroral Electrojet index and covering almost the whole northern and southern hemispheres, the Hurst exponent, the structure-function scaling exponent of order 2, and the multifractal p-exponent width for the geomagnetic fluctuations. The latter quantifies the relevance of the multifractal property. In some cases, we found negative values of H, suggesting behavior similar to wave breaking or a shocklet-like propagating front. Furthermore, we found some asymmetries in the magnetic field turbulence between the northern and southern hemispheres. These estimations suggest that different turbulent regimes of the geomagnetic field fluctuations exist along the Swarm path.

Entropy doi: 10.3390/e23050556

Authors: Katarzyna Kozieł Juliusz Topolnicki Norbert Skoczylas

Gas-induced geodynamic phenomena can occur during underground mining operations if the porous structure of the rock is filled with gas at high pressure. In such cases, the original compact rock structure disintegrates into grains of small dimensions, which are then transported along the mine working space. Such geodynamic events, particularly outbursts of gas and rock, pose a danger both to the life of miners and to the functioning of the mine infrastructure. These incidents are rare in copper ore mining, but they have recently begun to occur, and have not yet been fully investigated. To ensure the safety of mining operations, it is necessary to determine parameters of the rock–gas system for which the energy of the gas will be smaller than the work required to disintegrate and transport the rock. Such a comparison is referred to as an energy balance and serves as a starting point for all engineering analyses. During mining operations, the equilibrium of the rock–gas system is disturbed, and the rapid destruction of the rock is initiated together with sudden decompression of the gas contained in its porous structure. The disintegrated rock is then transported along the mine working space in a stream of released gas. Estimation of the energy of the gas requires investigation of the type of thermodynamic transformation involved in the process. In this case, adiabatic transformation would mean that the gas, cooled in the course of decompression, remains at a temperature significantly lower than that of the surrounding rocks throughout the process. However, if we assume that the transformation is isothermal, then the cooled gas will heat up to the original temperature of the rock in a very short time (<1 s). Because the quantity of energy in the case of isothermal transformation is almost three times as high as in the adiabatic case, obtaining the correct energy balance for gas-induced geodynamic phenomena requires detailed analysis of this question.
For this purpose, a unique experimental study was carried out to determine the time required for heat exchange in conditions of very rapid flows of gas around rock grains of different sizes. Numerical simulations reproducing the experiments were also designed. The results of the experiment and the simulation were in good agreement, indicating a very fast rate of heat exchange. Taking account of the parameters of the experiment, the thermodynamic transformation may be considered to be close to isothermal.
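The gap between the two transformations can be checked with the textbook ideal-gas work expressions. The numbers below (temperature, pressures, heat capacity ratio γ) are illustrative assumptions, not values from the study; the exact isothermal-to-adiabatic ratio depends on the gas and the pressure ratio:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def isothermal_work(T, p1, p2):
    """Expansion work per mole of an ideal gas for isothermal decompression p1 -> p2."""
    return R * T * math.log(p1 / p2)

def adiabatic_work(T1, p1, p2, gamma):
    """Expansion work per mole for reversible adiabatic decompression p1 -> p2."""
    return R * T1 / (gamma - 1) * (1 - (p2 / p1) ** ((gamma - 1) / gamma))
```

For example, decompressing from 10 MPa to 0.1 MPa at 300 K with γ = 1.31 gives isothermal work roughly twice the adiabatic work, and the ratio grows with the pressure ratio, consistent with the qualitative point made above.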

Entropy doi: 10.3390/e23050555

Authors: Alejandro Godino-Moya Rosa-María Menchón-Lara Marcos Martín-Fernández Claudia Prieto Carlos Alberola-López

Numerous methods in the extensive literature on magnetic resonance imaging (MRI) reconstruction exploit temporal redundancy to accelerate cardiac cine. Some of them include motion compensation, which involves high computational costs and long runtimes. In this work, we propose a method—elastic alignedSENSE (EAS)—for the direct reconstruction of a motion-free image plus a set of nonrigid deformations to reconstruct a 2D cardiac sequence. The feasibility of the proposed approach was tested in 2D Cartesian and golden radial multi-coil breath-hold cardiac cine acquisitions. The proposed approach was compared against parallel imaging compressed sensing (sPICS) and group-wise motion-corrected compressed sensing (GWCS) reconstructions. EAS provides better results on objective measures with considerably less runtime when the acceleration factor is higher than 10×. Subjective assessment by an expert, however, suggested the combination of EAS and GWCS as a preferable alternative to either GWCS or EAS in isolation.

Entropy doi: 10.3390/e23050553

Authors: Salim Miloudi Yulin Wang Wenjia Ding

Clustering algorithms for multi-database mining (MDM) rely on computing (n²−n)/2 pairwise similarities between n multiple databases to generate and evaluate m∈[1,(n²−n)/2] candidate clusterings in order to select the ideal partitioning that optimizes a predefined goodness measure. However, when these pairwise similarities are distributed around the mean value, the clustering algorithm becomes indecisive when choosing which database pairs are eligible to be grouped together. Consequently, a trivial result is produced by putting all n databases in one cluster or by returning n singleton clusters. To tackle the latter problem, we propose a learning algorithm to reduce the fuzziness of the similarity matrix by minimizing a weighted binary entropy loss function via gradient descent and back-propagation. As a result, the learned model will improve the certainty of the clustering algorithm by correctly identifying the optimal database clusters. Additionally, in contrast to gradient-based clustering algorithms, which are sensitive to the choice of the learning rate and require more iterations to converge, we propose a learning-rate-free algorithm to assess the candidate clusterings generated on the fly in fewer, upper-bounded iterations. To achieve our goal, we use coordinate descent (CD) and back-propagation to search for the optimal clustering of the n multiple databases in a way that minimizes a convex clustering quality measure L(θ) in fewer than (n²−n)/2 iterations. By using a max-heap data structure within our CD algorithm, we optimally choose the largest weight variable θ_{p,q}^{(i)} at each iteration i, such that taking the partial derivative of L(θ) with respect to θ_{p,q}^{(i)} allows us to attain the next steepest descent minimizing L(θ) without using a learning rate. Through a series of experiments on multiple database samples, we show that our algorithm outperforms the existing clustering algorithms for MDM.
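The pair count and the max-heap selection of the largest weight can be sketched as follows. This illustrates only the coordinate-selection step, not the full clustering algorithm or loss; since Python's `heapq` is a min-heap, weights are negated to pop the largest first:

```python
import heapq

def pairwise_count(n):
    """Number of pairwise similarities between n databases: (n^2 - n) / 2."""
    return (n * n - n) // 2

def steepest_coordinate_order(weights):
    """Given {(p, q): weight}, yield coordinate pairs in decreasing weight order
    via a max-heap, mimicking the selection of the largest weight per iteration."""
    heap = [(-w, pq) for pq, w in weights.items()]  # negate: heapq is a min-heap
    heapq.heapify(heap)
    order = []
    while heap:
        _, pq = heapq.heappop(heap)
        order.append(pq)
    return order
```

Heapify costs O(m) and each selection O(log m), which is what makes per-iteration selection of the steepest coordinate cheap.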

Entropy doi: 10.3390/e23050554

Authors: Maurizio Benfatto Elisabetta Pace Catalina Curceanu Alessandro Scordo Alberto Clozza Ivan Davoli Massimiliano Lucci Roberto Francini Fabio De Matteis Maurizio Grandi Rohisha Tuladhar Paolo Grigolini

We study the emission of photons from germinating seeds using an experimental technique designed to detect light of extremely small intensity. We analyze the dark count signal without germinating seeds as well as the photon emission during the germination process. The technique of analysis adopted here, called diffusion entropy analysis (DEA) and originally designed to measure the temporal complexity of astrophysical, sociological and physiological processes, rests on Kolmogorov complexity. The updated version of DEA used in this paper is designed to determine whether the signal complexity is generated by non-ergodic crucial events with a non-stationary correlation function, by the infinite memory of a stationary but non-integrable correlation function, or by a mixture of both processes. We find that the dark count yields ordinary scaling, thereby showing that no complexity of either kind occurs without seeds in the chamber. In the presence of seeds in the chamber, anomalous scaling emerges, reminiscent of that found in neuro-physiological processes. However, this is a mixture of both processes, and with the progress of germination the non-ergodic component tends to vanish and complexity becomes dominated by the stationary infinite memory. We illustrate some conjectures, ranging from stress-induced annihilation of crucial events to the emergence of quantum coherence.

Entropy doi: 10.3390/e23050552

Authors: Hamid Mousavi Mareike Buhl Enrico Guiraud Jakob Drefs Jörg Lücke

Latent Variable Models (LVMs) are well established tools to accomplish a range of different data processing tasks. Applications exploit the ability of LVMs to identify latent data structure in order to improve data (e.g., through denoising) or to estimate the relation between latent causes and measurements in medical data. In the latter case, LVMs in the form of noisy-OR Bayes nets represent the standard approach to relate binary latents (which represent diseases) to binary observables (which represent symptoms). Bayes nets with binary representation for symptoms may be perceived as a coarse approximation, however. In practice, real disease symptoms can range from absent over mild and intermediate to very severe. Therefore, using diseases/symptoms relations as motivation, we here ask how standard noisy-OR Bayes nets can be generalized to incorporate continuous observables, e.g., variables that model symptom severity in an interval from healthy to pathological. This transition from binary to interval data poses a number of challenges including a transition from a Bernoulli to a Beta distribution to model symptom statistics. While noisy-OR-like approaches are constrained to model how causes determine the observables’ mean values, the use of Beta distributions additionally provides (and also requires) that the causes determine the observables’ variances. To meet the challenges emerging when generalizing from Bernoulli to Beta distributed observables, we investigate a novel LVM that uses a maximum non-linearity to model how the latents determine means and variances of the observables. Given the model and the goal of likelihood maximization, we then leverage recent theoretical results to derive an Expectation Maximization (EM) algorithm for the suggested LVM. We further show how variational EM can be used to efficiently scale the approach to large networks. 
Experimental results finally illustrate the efficacy of the proposed model using both synthetic and real data sets. Importantly, we show that the model produces reliable results in estimating causes in proof-of-concept settings and in first tests based on real medical data and on images.

Entropy doi: 10.3390/e23050550

Authors: Wasiq Ali Wasim Ullah Khan Muhammad Asif Zahoor Raja Yigang He Yaan Li

In this study, an intelligent computing paradigm built on a nonlinear autoregressive exogenous (NARX) feedback neural network model with the strength of deep learning is presented for accurate state estimation of an underwater passive target. In underwater scenarios, real-time motion parameters of passive objects are usually extracted with nonlinear filtering techniques. In filtering algorithms, nonlinear passive measurements are associated with the linear kinetics of the target, governed by state space methodology. To improve tracking accuracy, achieve effective feature estimation, and minimize the position error of dynamic passive objects, the strength of NARX-based supervised learning is exploited. Dynamic artificial neural networks, which contain tapped delay lines, are suitable for predicting the future state of an underwater passive object. Neural network-based intelligent computing is effectively applied for estimating the real-time actual state of a passive moving object that follows a semi-curved path. The performance of NARX-based neural networks is evaluated for six different scenarios of the standard deviation of white Gaussian measurement noise, following the bearings-only tracking phenomenon. The root mean square error between the estimated and real positions of the passive target in rectangular coordinates is computed for evaluating the worth of the proposed NARX feedback neural network scheme. Monte Carlo simulations are conducted and the results certify the capability of the intelligent computing scheme over conventional nonlinear filtering algorithms, such as the spherical radial cubature Kalman filter and the unscented Kalman filter, for the given state estimation model.

Entropy doi: 10.3390/e23050551

Authors: Takehiro Tottori Tetsuya J. Kobayashi

The decentralized partially observable Markov decision process (DEC-POMDP) models sequential decision-making problems for a team of agents. Since the planning of a DEC-POMDP can be interpreted as maximum likelihood estimation for a latent variable model, DEC-POMDP can be solved by the EM algorithm. However, in EM for DEC-POMDP, the forward–backward algorithm needs to be calculated up to the infinite horizon, which impairs computational efficiency. In this paper, we propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by introducing the forward and backward Bellman equations into EM. BEM can be more efficient than EM because BEM calculates the forward and backward Bellman equations instead of the forward–backward algorithm up to the infinite horizon. However, BEM cannot always be more efficient than EM when the problem size is large because BEM calculates an inverse matrix. We circumvent this shortcoming in MBEM by calculating the forward and backward Bellman equations without the inverse matrix. Our numerical experiments demonstrate that the convergence of MBEM is faster than that of EM.

Entropy doi: 10.3390/e23050549

Authors: Olga V. Man’ko Vladimir I. Man’ko

A review of a new formulation of conventional quantum mechanics, in which quantum states are identified with probability distributions, is presented. The invertible map of density operators and wave functions onto the probability distributions describing the quantum states is constructed both for systems with continuous variables and systems with discrete variables, by using Born's rule and the recently suggested method of dequantizer–quantizer operators. Examples of the discussed probability representations of qubits (spin-1/2, two-level atoms), the harmonic oscillator and the free particle are studied in detail. The Schrödinger and von Neumann equations, as well as equations for the evolution of open systems, are written in the form of linear classical-like equations for the probability distributions determining the quantum system states. Relations to the phase-space representation of quantum states (Wigner functions), to quantum tomography, and to classical mechanics are elucidated.

Entropy doi: 10.3390/e23050548

Authors: Rachid Bentoumi Farid El Ktaibi Mhamed Mesfioui

We introduce a new family of bivariate exponential distributions based on the counter-monotonic shock model. This family of distributions is easy to simulate and includes the Fréchet lower bound, which makes it possible to span all degrees of negative dependence. The construction and distributional properties of the proposed bivariate distribution are presented, along with estimation of the parameters involved in our model based on the method of moments. A simulation study is carried out to evaluate the performance of the suggested estimators. An extension to a general model describing both negative and positive dependence is sketched in the last section of the paper.
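The Fréchet lower-bound coupling that anchors this family is easy to simulate by inverse transform with antithetic uniforms. This sketches only that limiting coupling, not the full counter-monotonic shock construction, and the function name is illustrative:

```python
import math
import random

def sample_countermonotonic_exponential(lam1, lam2, rng=random):
    """Draw (X, Y) with Exp(lam1) and Exp(lam2) marginals coupled by the
    Fréchet lower bound: X uses U and Y uses 1-U, so the pair is
    counter-monotonic (perfectly negatively dependent)."""
    u = rng.random()
    x = -math.log(1 - u) / lam1  # inverse exponential CDF at u
    y = -math.log(u) / lam2      # inverse exponential CDF at 1-u
    return x, y
```

Because X is increasing in U while Y is decreasing in U, large values of one coordinate always pair with small values of the other.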

Entropy doi: 10.3390/e23050543

Authors: Agnieszka Bitner Marcin Fialkowski

Quantifying the urbanization level is an essential yet challenging task in urban studies because of the high complexity of this phenomenon. The urbanization degree has been estimated using a variety of social, economic, and spatial measures. Among the spatial characteristics, the Shannon entropy of the landscape pattern has recently been intensively explored as one of the most effective urbanization indexes. Here, we introduce a new measure of the spatial entropy of land that characterizes its parcel mosaic, the structure resulting from the division of land into cadastral parcels. We calculate the entropies of the distribution function of the parcel areas in different portions of the urban systems. We have established that the Shannon and Rényi entropies R0 and R1/2 are most effective at differentiating the degree of spatial organization of the land. Our studies are based on 30 urban systems located in the USA, Australia, and Poland, and three desert areas from Australia. In all the cities, the entropies behave the same way as functions of the distance from the center. They attain the lowest values in the city core and reach substantially higher values in suburban areas. Thus, the parcel mosaic entropies provide a spatial characterization of land that measures its urbanization level effectively.
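Shannon and Rényi entropies of a parcel-area distribution can be computed with a single function of the order q; R0 reduces to the log of the number of occupied bins and q → 1 recovers the Shannon entropy. A sketch in natural-log units (the paper's binning and normalisation conventions are not given in the abstract):

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q for a discrete distribution p (natural log).
    q = 1 is treated as the Shannon limit; q = 0 gives log of the support size."""
    p = [pi for pi in p if pi > 0]  # zero-probability bins carry no weight
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p)        # Shannon limit
    return math.log(sum(pi ** q for pi in p)) / (1 - q)   # generic Rényi order
```

For a uniform distribution all orders coincide at log K, while skewed parcel-area distributions pull the higher orders down, which is what makes the profile of entropies informative.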

Entropy doi: 10.3390/e23050547

Authors: Shay Shlisel Monika Pinchas

The probability density function (pdf) valid for the Gaussian case is often applied for describing the convolutional noise pdf in the blind adaptive deconvolution problem, although it is known that it can be applied only at the latter stages of the deconvolution process, where the convolutional noise pdf tends to be approximately Gaussian. Recently, the convolutional noise pdf was approximated with the Edgeworth Expansion and with the Maximum Entropy density function for the 16 Quadrature Amplitude Modulation (QAM) input. However, no equalization performance improvement was seen for the hard channel case with the equalization algorithm based on the Maximum Entropy density function approach for the convolutional noise pdf compared with the original Maximum Entropy algorithm, while the Edgeworth Expansion approximation technique required additional predefined parameters in the algorithm. In this paper, the Generalized Gaussian density (GGD) function and the Edgeworth Expansion are applied for approximating the convolutional noise pdf for the 16 QAM input case, with no need for additional predefined parameters in the obtained equalization method. Simulation results indicate that improved equalization performance, from the convergence-time point of view of approximately 15,000 symbols, is obtained for the hard channel case with our new proposed equalization method based on the new model for the convolutional noise pdf, compared to the original Maximum Entropy algorithm. By convergence time, we mean the number of symbols required to reach a residual inter-symbol interference (ISI) for which reliable decisions can be made on the equalized output sequence.
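For reference, the Generalized Gaussian density has the standard form below, where β = 2 recovers the Gaussian shape and β = 1 the Laplacian; the parameter names are generic, not the paper's notation:

```python
import math

def ggd_pdf(x, alpha, beta, mu=0.0):
    """Generalized Gaussian density
    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)^beta).
    alpha is the scale and beta the shape: beta=2 is Gaussian, beta=1 Laplacian."""
    coeff = beta / (2 * alpha * math.gamma(1 / beta))
    return coeff * math.exp(-(abs(x - mu) / alpha) ** beta)
```

With β = 2 and α = √2 σ this is exactly the N(μ, σ²) density, which is why the GGD can interpolate between the heavy-tailed early stages of deconvolution and the near-Gaussian late stages.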

Entropy doi: 10.3390/e23050546

Authors: Zhenni Li Haoyi Sun Yuliang Gao Jiao Wang

Depth maps obtained through sensors are often unsatisfactory because of their low resolution and noise interference. In this paper, we propose a real-time depth map enhancement system based on a residual network which uses dual channels to process depth maps and intensity maps respectively and eliminates the preprocessing step; the proposed algorithm achieves real-time processing speeds of more than 30 fps. Furthermore, the FPGA design and implementation for depth sensing is also introduced. In this FPGA design, the intensity image and depth image are captured by a dual-camera synchronous acquisition system as the input of the neural network. Experiments on various depth map restoration tasks show that our algorithm performs better than the existing LRMC, DE-CNN and DDTF algorithms on standard datasets and achieves better depth map super-resolution. Our FPGA passed the system test, ensuring that the data throughput of the USB 3.0 interface of the acquisition system is stable at 226 Mbps and supporting dual cameras working at full speed, i.e., 54 fps @ (1280 × 960 + 328 × 248 × 3).

Entropy doi: 10.3390/e23050545

Authors: Wei Cao Alex Dytso Michael Fauß H. Vincent Poor

Finite-sample bounds on the accuracy of Bhattacharya’s plug-in estimator for Fisher information are derived. These bounds are further improved by introducing a clipping step that allows for better control over the score function. This leads to superior upper bounds on the rates of convergence, albeit under slightly different regularity conditions. The performance bounds on both estimators are evaluated for the practically relevant case of a random variable contaminated by Gaussian noise. Moreover, using Brown’s identity, two corresponding estimators of the minimum mean-square error are proposed.
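The clipping idea can be sketched in a few lines: the score f'/f is truncated before squaring and averaging. This is a simplified illustration using a supplied density estimate and its derivative, not Bhattacharya's full construction (which estimates the density from the same samples):

```python
def clipped_plugin_fisher(xs, f, df, clip=10.0):
    """Plug-in estimate of Fisher information I = E[(f'(X)/f(X))^2]
    from samples xs, with the score clipped to [-clip, clip] for
    better control over heavy-tailed score values."""
    total = 0.0
    for x in xs:
        score = df(x) / f(x)
        score = max(-clip, min(clip, score))  # the clipping step
        total += score ** 2
    return total / len(xs)
```

For a standard Gaussian the score is −x, so the estimate reduces to the clipped sample second moment.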

Entropy doi: 10.3390/e23050544

Authors: Vasily E. Tarasov

In this paper, we propose an exactly solvable model of the non-Markovian dynamics of open quantum systems. This model describes open quantum systems with memory and a periodic sequence of kicks by the environment. To describe these systems, the Lindblad equation for a quantum observable is generalized by taking into account power-law fading memory. The dynamics of open quantum systems with power-law memory are considered. The proposed generalized Lindblad equations describe non-Markovian quantum dynamics, in which the dynamics with power-law memory are described by using integration and differentiation of non-integer orders, i.e., fractional calculus. An example of a quantum oscillator with linear friction and power-law memory is considered. Discrete-time quantum maps with memory, derived from the generalized Lindblad equations without any approximations, are suggested. These maps exactly correspond to the generalized Lindblad equations, which are fractional differential equations with Caputo derivatives of non-integer orders and a periodic sequence of kicks represented by Dirac delta-functions. The solutions of the generalized Lindblad equations for the coordinate and momentum operators are derived for open quantum systems with memory and kicks. Using these solutions, linear and nonlinear quantum discrete-time maps are derived.

]]>Entropy doi: 10.3390/e23050542

Authors: Shi Yu Jiaxin Wu Xianliang Meng Ruizhi Chu Xiao Li Guoguang Wu

In this study we investigated, using a simple polymer model of the bacterial chromosome, the subdiffusive behaviors of both cytoplasmic particles and various loci under different cell-wall confinements. Non-Gaussian subdiffusion of cytoplasmic particles as well as loci was obtained in our Langevin dynamics simulations, in agreement with fluorescence microscopy observations. The effects of cytoplasmic particle size, locus position, confinement geometry, and density on the motions of particles and loci were examined systematically. It is demonstrated that cytoplasmic subdiffusion can largely be attributed to the mechanical properties of the bacterial chromosome rather than to the viscoelasticity of the cytoplasm. Because the bacterial chromosome segments are randomly positioned, the surrounding environment for both particles and loci is heterogeneous. The exponent characterizing the subdiffusion of cytoplasmic particles/loci as well as the Laplace displacement distributions of particles/loci can therefore be reproduced by this simple model. Nevertheless, this bacterial chromosome model cannot explain the different responses of cytoplasmic particles and loci to external compression exerted on the bacterial cell wall, which suggests that nonequilibrium activity, e.g., metabolic reactions, plays an important role in cytoplasmic subdiffusion.

]]>Entropy doi: 10.3390/e23050541

Authors: Wei Wang Kaiming Yang Yu Zhu

Inducing self-motion illusions, referred to as vection, is critical for improving the sensation of walking in virtual environments (VE). Adding viewpoint oscillations to a constant forward velocity in a VE is effective for improving vection strength under static conditions. However, the effects of oscillation frequency and amplitude on vection strength under treadmill-walking conditions are still unclear. Moreover, owing to the visuomotor entrainment mechanism, these visual oscillations affect gait patterns and can be detrimental to achieving natural walking if not properly designed. This study aimed to determine the optimal frequency and amplitude of vertical viewpoint oscillations for improving vection strength while reducing gait constraints. Seven subjects walked on a treadmill while watching a visual scene. The scene presented a constant forward velocity equal to the treadmill velocity with different vertical viewpoint oscillations added; five oscillation patterns with different combinations of frequency and amplitude were tested. Subjects gave verbal ratings of vection strength, and the mediolateral (M-L) center-of-pressure (CoP) complexity was calculated to quantify gait constraints. After the experiment, subjects were asked to name the best and the worst oscillation pattern based on their walking experience. Oscillation frequency and amplitude had strong positive correlations with vection strength. The M-L CoP complexity was reduced under low-frequency oscillations, and the medium oscillation amplitude yielded greater M-L CoP complexity than the small and large amplitudes. Moreover, subjects preferred the oscillation patterns with large gait complexity. We therefore suggest first choosing the oscillation amplitude with the largest M-L CoP complexity to reduce gait constraints, and then increasing the oscillation frequency to improve vection strength up to individual preference or the onset of motion sickness. These findings provide important guidelines for promoting the sensation of natural walking in VE.

]]>Entropy doi: 10.3390/e23050540

Authors: Chang Yan Changchun Liu Lianke Yao Xinpei Wang Jikuo Wang Peng Li

Myocardial ischemia in patients with coronary artery disease (CAD) leads to imbalanced autonomic control that increases the risk of morbidity and mortality. To systematically examine how autonomic function responds to percutaneous coronary intervention (PCI) treatment, we analyzed data from 27 CAD patients who had been admitted for PCI in this pilot study. For each patient, five-minute resting electrocardiogram (ECG) signals were collected before and after the PCI procedure. The time intervals between ECG collection and PCI were both within 24 h. To assess autonomic function, normal sinus RR intervals were extracted and analyzed quantitatively using traditional linear time- and frequency-domain measures [i.e., the standard deviation of the normal-normal intervals (SDNN), the root mean square of successive differences (RMSSD), the powers of the low-frequency (LF) and high-frequency (HF) components, and LF/HF], nonlinear entropy measures [i.e., sample entropy (SampEn), distribution entropy (DistEn), and conditional entropy (CE)], and graphical metrics derived from the Poincaré plot [i.e., Porta's index (PI), Guzik's index (GI), slope index (SI), and area index (AI)]. Results showed that after PCI, AI and PI decreased significantly (p < 0.002 and 0.015, respectively) with effect sizes of 0.88 and 0.70 as measured by Cohen's d statistic. These changes were independent of sex. The results suggest that the graphical AI and PI metrics derived from the Poincaré plot of short-term ECG may have potential for sensing the beneficial effect of PCI on cardiovascular autonomic control. Further studies with larger sample sizes are warranted to verify these observations.
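
As a concrete illustration (ours, not the authors'; PI and GI follow common formulations from the Poincaré-plot asymmetry literature, and the function name is hypothetical), the time-domain and graphical metrics above can be computed from an RR-interval series in a few lines:

```python
import numpy as np

def poincare_indices(rr):
    """HRV metrics from RR intervals (ms).
    SDNN/RMSSD follow the standard time-domain definitions; PI and GI follow
    common Porta/Guzik formulations based on the asymmetry of Poincaré-plot
    points about the line of identity (assumes at least one off-identity point)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]                 # Poincaré plot coordinates (RR_n, RR_{n+1})
    sdnn = rr.std(ddof=1)                  # standard deviation of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    d = (y - x) / np.sqrt(2.0)             # signed distance to the line of identity
    below = d < 0
    pi = 100.0 * below.sum() / max(np.count_nonzero(d), 1)   # Porta's index (%)
    gi = 100.0 * d[d > 0].sum() / np.abs(d[d != 0]).sum()    # Guzik's index (%)
    return sdnn, rmssd, pi, gi
```

A decrease in PI after an intervention, as reported above, reflects a shift in the balance of decelerations versus accelerations about the line of identity.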

]]>Entropy doi: 10.3390/e23050539

Authors: Ralf R. Müller

In 2017, Polyanskiy showed that the trade-off between power and bandwidth efficiency for massive Gaussian random access is governed by two fundamentally different regimes: low power and high power. For both regimes, tight performance bounds were found by Zadik et al. in 2019. This work utilizes recent results on the exact block error probability of Gaussian random codes in additive white Gaussian noise to propose practical methods based on iterative soft decoding to closely approach these bounds. In the low power regime, this work finds that orthogonal random codes can be applied directly. In the high power regime, a more sophisticated effort is needed. This work shows that power-profile optimization by means of linear programming, as pioneered by Caire et al. in 2001, is a promising strategy to apply. The proposed combination of orthogonal random coding and iterative soft decoding even outperforms the existence bounds of Zadik et al. in the low power regime and is very close to the non-existence bounds for message lengths around 100 and above. Finally, the approach of power optimization by linear programming proposed for the high power regime is found to benefit from power imbalances due to fading, which makes it even more attractive for typical mobile radio channels.

]]>Entropy doi: 10.3390/e23050538

Authors: Ece C. Mutlu Ozlem Ozmen Garibay

Modeling the information of social contagion processes has recently attracted a substantial amount of interest from researchers due to its wide applicability in network science, multi-agent systems, information science, and marketing. Unlike in biological spreading, the existence of a reinforcement effect in social contagion necessitates considering the complexity of the individuals in the system. Although many studies have acknowledged the heterogeneity of individuals in their adoption of information, none have taken into account individuals' uncertainty during adoption decision-making. This has resulted in suboptimal modeling of social contagion dynamics in the presence of a phase transition in the final adoption size versus transmission probability. We employed the Inverse Born Problem (IBP) to represent probabilistic entities as complex probability amplitudes in edge-based compartmental theory, and demonstrate through extensive simulations on random regular networks that our novel approach performs better in predicting social contagion dynamics.

]]>Entropy doi: 10.3390/e23050537

Authors: J. A. Scott Kelso

Coordination is a ubiquitous feature of all living things. It occurs by virtue of informational coupling among component parts and processes and can be quite specific (as when cells in the brain resonate to signals in the environment) or nonspecific (as when simple diffusion creates a source–sink dynamic for gene networks). Existing theoretical models of coordination—from bacteria to brains to social groups—typically focus on systems with very large numbers of elements (N→∞) or systems with only a few elements coupled together (typically N = 2). Though sharing a common inspiration in Nature’s propensity to generate dynamic patterns, both approaches have proceeded largely independent of each other. Ideally, one would like a theory that applies to phenomena observed on all scales. Recent experimental research by Mengsen Zhang and colleagues on intermediate-sized ensembles (in between the few and the many) proves to be the key to uniting large- and small-scale theories of coordination. Disorder–order transitions, multistability, order–order phase transitions, and especially metastability are shown to figure prominently on multiple levels of description, suggestive of a basic Coordination Dynamics that operates on all scales. This unified coordination dynamics turns out to be a marriage of two well-known models of large- and small-scale coordination: the former based on statistical mechanics (Kuramoto) and the latter based on the concepts of Synergetics and nonlinear dynamics (extended Haken–Kelso–Bunz or HKB). We show that models of the many and the few, previously quite unconnected, are thereby unified in a single formulation. The research has led to novel topological methods to handle the higher-dimensional dynamics of coordination in complex systems and has implications not only for understanding coordination but also for the design of (biorhythm inspired) computers.
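
The large-scale half of the "marriage" described above is the Kuramoto model. As a minimal illustration (ours, not the paper's; the parameter values are arbitrary), a few lines of mean-field Euler integration suffice to see the disorder-order transition as the coupling strength K grows:

```python
import numpy as np

def kuramoto(n=50, coupling=2.0, dt=0.01, steps=2000, seed=0):
    """Euler integration of the Kuramoto model in mean-field form:
    dtheta_i/dt = omega_i + K * R * sin(psi - theta_i),
    where R*exp(i*psi) = (1/N) * sum_j exp(i*theta_j) is the complex order
    parameter. Returns the final coherence R in [0, 1]
    (R ~ 0: incoherent drift; R ~ 1: synchronized)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases
    omega = rng.normal(0.0, 0.5, n)            # heterogeneous natural frequencies
    for _ in range(steps):
        z = np.exp(1j * theta).mean()          # complex order parameter
        theta += dt * (omega + coupling * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))
```

Sweeping `coupling` through its critical value reproduces the disorder-order transition; the extended HKB model adds the second-harmonic coupling terms needed for the multistability and metastability discussed above.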

]]>Entropy doi: 10.3390/e23050536

Authors: Lingen Chen Zewei Meng Yanlin Ge Feng Wu

An irreversible combined Carnot cycle model using ideal quantum gases as the working medium was studied using finite-time thermodynamics. The combined cycle consisted of two Carnot sub-cycles in a cascade mode. Considering thermal resistance, internal irreversibility, and heat-leakage losses, the power output and thermal efficiency of the irreversible combined Carnot cycle were derived by utilizing the quantum-gas state equation. The effect of the working-medium temperatures on power output and thermal efficiency is analyzed numerically, the optimal relationship between power output and thermal efficiency is solved by the Euler-Lagrange equation, and the effects of different working media on the optimal power and thermal efficiency performance are also examined. The results show that there is a set of working-medium temperatures that maximizes the power output of the combined cycle. When there is no heat-leakage loss in the combined cycle, all the characteristic curves of optimal power versus thermal efficiency are parabolic-like, and the internal irreversibility decreases both power output and efficiency. When there is heat-leakage loss in the combined cycle, all the characteristic curves of optimal power versus thermal efficiency are loop-shaped, and the heat-leakage loss only affects the thermal efficiency of the combined Carnot cycle. Comparing the power output of combined heat engines with four types of working media, the two-stage combined Carnot cycle using an ideal Fermi-Bose gas as the working medium obtains the highest power output.

]]>Entropy doi: 10.3390/e23050535

Authors: Karim H. Moussa Ahmed I. El Naggary Heba G. Mohamed

Multimedia wireless communications have developed rapidly over the years. Accordingly, an increasing demand for more secure media transmission is required to protect multimedia content. Image encryption schemes have been proposed over the years, but the most secure and reliable schemes are those based on chaotic maps, due to the intrinsic features of such multimedia content regarding the pixels' high correlation and data-handling capabilities. The novel encryption algorithm introduced in this article is based on a 3D hopping chaotic map instead of fixed chaotic logistic maps. The non-linear behavior of the proposed algorithm, in terms of both position permutation and value transformation, results in a more secure encryption algorithm due to its non-convergence, non-periodicity, and sensitivity to the applied initial conditions. Several statistical and analytical tests, such as entropy, correlation, key sensitivity, key space, peak signal-to-noise ratio, noise attacks, number of pixels change rate (NPCR), unified average changed intensity (UACI), and other tests, were applied to measure the strength of the proposed encryption scheme. The obtained results prove that the proposed scheme is very robust against different cryptographic attacks compared to similar encryption schemes.
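
The NPCR and UACI metrics mentioned above have standard definitions for 8-bit images; as a sketch (ours, with a hypothetical function name), both can be computed as:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI between two 8-bit cipher images, using the common
    definitions: NPCR = percentage of pixels that differ; UACI = mean absolute
    intensity difference normalized by 255, as a percentage."""
    c1 = np.asarray(c1, dtype=np.int16)   # widen so differences don't wrap
    c2 = np.asarray(c2, dtype=np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci
```

For a strong cipher on random plaintext-pair tests, NPCR is expected near 99.6% and UACI near 33.4%.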

]]>Entropy doi: 10.3390/e23050534

Authors: Ron A. Pepino

Atomtronics is a relatively new subfield of atomic physics that aims to realize the device behavior of electronic components in ultracold atom-optical systems. The fact that these systems are coherent makes them particularly interesting since, in addition to current, one can impart quantum states onto the current carriers themselves or perhaps perform quantum computational operations on them. After reviewing the fundamental ideas of this subfield, we report on the theoretical and experimental progress made towards developing externally-driven and closed loop devices. The functionality and potential applications of these atom analogs to electronic and spintronic systems are also discussed.

]]>Entropy doi: 10.3390/e23050533

Authors: Milan S. Derpich Jan Østergaard

We present novel data-processing inequalities relating the mutual information and the directed information in systems with feedback. The internal deterministic blocks within such systems are restricted only to be causal mappings, but are allowed to be non-linear and time-varying; randomized by their own external random inputs, they can yield any stochastic mapping. These randomized blocks can, for example, represent source encoders, decoders, or even communication channels. Moreover, the involved signals can be arbitrarily distributed. Our first main result relates mutual and directed information and can be interpreted as a law of conservation of information flow. Our second main result is a pair of data-processing inequalities (one the conditional version of the other) between nested pairs of random sequences entirely within the closed loop. Our third main result introduces and characterizes the notion of in-the-loop (ITL) transmission rate for channel coding scenarios in which the messages are internal to the loop. Interestingly, in this case the conventional notions of transmission rate associated with the entropy of the messages and of channel capacity based on maximizing the mutual information between the messages and the output turn out to be inadequate. Instead, as we show, the ITL transmission rate is the unique notion of rate for which a channel code attains zero error probability if and only if such an ITL rate does not exceed the corresponding directed information rate from messages to decoded messages. We apply our data-processing inequalities to show that the supremum of achievable (in the usual channel coding sense) ITL transmission rates is upper bounded by the supremum of the directed information rate across the communication channel. Moreover, we present an example in which this upper bound is attained.
Finally, we further illustrate the applicability of our results by discussing how they make possible the generalization of two fundamental inequalities known in networked control literature.

]]>Entropy doi: 10.3390/e23050532

Authors: Won-Suk Kim

Edge computing can deliver network services with low latency and real-time processing by providing cloud services at the network edge. Edge computing has a number of advantages such as low latency, locality, and network traffic distribution, but the associated resource management has become a significant challenge because of its inherent hierarchical, distributed, and heterogeneous nature. Various cloud-based network services such as crowd sensing, hierarchical deep learning systems, and cloud gaming each have their own traffic patterns and computing requirements. To provide a satisfactory user experience for these services, resource management that comprehensively considers service diversity, client usage patterns, and network performance indicators is required. In this study, an algorithm that simultaneously considers computing resources and network traffic load when deploying servers that provide edge services is proposed. The proposed algorithm generates candidate deployments based on factors that affect traffic load, such as the number of servers, server location, and client mapping according to service characteristics and usage. A final deployment plan is then established using a partial vector bin packing scheme that considers both the generated traffic and computing resources in the network. The proposed algorithm is evaluated using several simulations that consider actual network service and device characteristics.

]]>Entropy doi: 10.3390/e23050531

Authors: Ferdinando Di Martino Salvatore Sessa

Cluster techniques are used in hotspot spatial analysis to detect hotspots as areas on a map. An extension of the Fuzzy C-means clustering algorithm has been applied to locate hotspots on the map as circular areas; it represents a good trade-off between accuracy in detecting the hotspot shape and computational complexity. However, this method does not measure the reliability of the detected hotspots and therefore does not allow us to evaluate how reliable the identification of a hotspot with the circular area corresponding to the detected cluster is; a measure of hotspot reliability is crucial for the decision maker to assess the need for action in the area circumscribed by the hotspot. We propose a method, based on De Luca and Termini's fuzzy entropy, that uses this extension of the Fuzzy C-means algorithm and measures the reliability of the detected hotspots. We test our method on a disease-analysis problem in which hotspots corresponding to areas where most oto-laryngo-pharyngeal patients reside, within the province of Naples, Italy, are detected as circular areas. The results show a dependency between the reliability and the fluctuation of the values of the degrees of belonging to the hotspots.
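
De Luca and Termini's fuzzy entropy has a standard closed form; as a sketch (ours, with a hypothetical function name), for a set of membership degrees u ∈ [0, 1] it is:

```python
import numpy as np

def deluca_termini_entropy(memberships):
    """De Luca and Termini's fuzzy entropy of a set of membership degrees:
    H = -sum_i [ u_i*ln(u_i) + (1 - u_i)*ln(1 - u_i) ],  with 0*ln(0) := 0.
    H is 0 for crisp memberships (u in {0, 1}) and maximal at u = 0.5,
    which is why large fluctuations of cluster memberships signal low
    hotspot reliability."""
    u = np.asarray(memberships, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = u * np.log(u) + (1.0 - u) * np.log(1.0 - u)
    return -np.nansum(term)   # NaN terms arise exactly where the limit is 0
```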

]]>Entropy doi: 10.3390/e23050530

Authors: Milton Silva Diogo Pratas Armando J. Pinho

Recently, the scientific community has witnessed a substantial increase in the generation of protein sequence data, triggering emergent challenges of increasing importance, namely efficient storage and improved data analysis. For both applications, data compression is a straightforward solution. However, in the literature, the number of specific protein sequence compressors is relatively low. Moreover, these specialized compressors only marginally improve the compression ratio over the best general-purpose compressors. In this paper, we present AC2, a new lossless data compressor for protein (or amino acid) sequences. AC2 uses a neural network to mix experts with a stacked-generalization approach and individual cache-hash memory models for the highest context orders. Compared to the previous compressor (AC), we show gains of 2–9% and 6–7% in reference-free and reference-based modes, respectively. These gains come at the cost of three-times-slower computation. AC2 also improves memory usage over AC, with requirements about seven times lower, unaffected by the sequences' input size. As an analysis application, we use AC2 to measure the similarity between each SARS-CoV-2 protein sequence and each viral protein sequence from the whole UniProt database. The results consistently show higher similarity to the pangolin coronavirus, followed by the bat and human coronaviruses, contributing critical results to a currently controversial subject. AC2 is available for free download under the GPLv3 license.

]]>Entropy doi: 10.3390/e23050529

Authors: Mahdi Rabbani Yongli Wang Reza Khoshkangini Hamed Jelodar Ruxin Zhao Sajjad Bagheri Baba Ahmadi Seyedvalyallah Ayobi

Network anomaly detection systems (NADSs) play a significant role in every network defense system, as they detect and prevent malicious activities. Therefore, this paper offers an exhaustive overview of different aspects of anomaly-based network intrusion detection systems (NIDSs). Contemporary malicious activities in network systems and the important properties of intrusion detection systems are discussed as well. The present survey explains the important phases of NADSs, such as pre-processing, feature extraction, and malicious-behavior detection and recognition. In addition, with regard to the detection and recognition phase, recent machine learning approaches, including supervised, unsupervised, new deep, and ensemble learning techniques, are comprehensively discussed; moreover, some details about currently available benchmark datasets for training and evaluating machine learning techniques are provided. Finally, potential challenges together with some future directions for machine learning-based NADSs are specified.

]]>Entropy doi: 10.3390/e23050528

Authors: Masanari Kimura Hideitsu Hino

The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter λ, with the other distribution. Such divergence is an approximation of the KL divergence that does not require the target distribution to be absolutely continuous with respect to the source distribution. In this paper, an information geometric generalization of the skew divergence called the α-geodesical skew divergence is proposed, and its properties are studied.
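
As a sketch of the base construction being generalized (our code, under one common convention for the skew divergence; the placement of λ here is our assumption, not necessarily the paper's), for discrete distributions:

```python
import numpy as np

def skew_divergence(p, q, lam=0.99):
    """Skew divergence s_lam(p, q) = KL(p || lam*q + (1-lam)*p) for discrete
    distributions. Mixing a little of p into q keeps the second argument
    absolutely continuous w.r.t. p, so the divergence stays finite even when
    q assigns zero probability where p does not."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = lam * q + (1.0 - lam) * p      # the smoothed second argument
    mask = p > 0                        # 0*log(0/..) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / m[mask])))
```

Note that plain KL(p || q) would be infinite in the test case below, while the skewed version is finite; the paper's α-geodesical generalization replaces this linear mixture with an information-geometric (α-geodesic) interpolation.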

]]>Entropy doi: 10.3390/e23050527

Authors: Huibin Shi Wenlong Fu Bailin Li Kaixuan Shao Duanhao Yang

Rolling bearings are key parts of many items of mechanical equipment, and any abnormality will affect the normal operation of the entire apparatus. To diagnose the faults of rolling bearings effectively, a novel fault identification method is proposed in this paper by merging variational mode decomposition (VMD), average refined composite multiscale dispersion entropy (ARCMDE), and a support vector machine (SVM) optimized by multistrategy-enhanced swarm optimization. Firstly, the vibration signals are decomposed into different series of intrinsic mode functions (IMFs) based on VMD with the center-frequency observation method. Subsequently, the proposed ARCMDE, fusing the superiorities of dispersion entropy (DE) and the average refined composite multiscale procedure, is employed to enhance the multiscale fault-feature extraction from the IMFs. Afterwards, grey wolf optimization (GWO), enhanced by multiple strategies including Lévy flight, a cosine factor, and polynomial mutation (LCPGWO), is proposed to optimize the penalty factor C and kernel parameter g of the SVM. Then, the optimized SVM model is trained to identify the fault type of samples based on features extracted by ARCMDE. Finally, an application experiment and contrastive analysis verify the effectiveness of the proposed VMD-ARCMDE-LCPGWO-SVM method.

]]>Entropy doi: 10.3390/e23050526

Authors: Kajari Gupta Milan Paluš

An information-theoretic approach for detecting causality and information transfer was applied to phases and amplitudes of oscillatory components related to different time scales and obtained using the wavelet transform from a time series generated by the Epileptor model. Three main time scales and their causal interactions were identified in the simulated epileptic seizures, in agreement with the interactions of the model variables. An approach consisting of wavelet transform, conditional mutual information estimation, and surrogate data testing applied to a single time series generated by the model was demonstrated to be successful in the identification of all directional (causal) interactions between the three different time scales described in the model. Thus, the methodology was prepared for the identification of causal cross-frequency phase–phase and phase–amplitude interactions in experimental and clinical neural data.

]]>Entropy doi: 10.3390/e23050525

Authors: Ming-Chung Chou

Anode heel effects are known to cause non-uniform image quality, but no method has been proposed to evaluate the non-uniformity caused by the heel effect. Therefore, the purpose of this study was to evaluate non-uniform image quality in digital radiographs using a novel circular step-wedge (CSW) phantom and normalized mutual information (nMI). All X-ray images were acquired from a digital radiography system equipped with a CsI flat-panel detector. A new acrylic CSW phantom was imaged ten times at various kVp and mAs settings to evaluate overall and non-uniform image quality with nMI metrics. For comparison, a conventional contrast-detail resolution phantom was imaged ten times at identical exposure parameters to evaluate overall image quality with visible ratio (VR) metrics, and the phantom was placed in different orientations to assess non-uniform image quality. In addition, heel effect correction (HEC) was performed to elucidate its impact on image quality. The results showed that both the nMI and VR metrics changed significantly with kVp and mAs and had a significant positive correlation, suggesting that the nMI metrics perform similarly to the VR metrics in assessing the overall image quality of digital radiographs. The nMI metrics changed significantly with orientation and also increased significantly after HEC in the anode direction. However, the VR metrics did not change significantly with orientation or with HEC, indicating that the nMI metrics were more sensitive than the VR metrics to non-uniform image quality caused by the anode heel effect. In conclusion, the proposed nMI metrics with a CSW phantom outperformed the conventional VR metrics in detecting non-uniform image quality caused by the heel effect, and thus are suitable for quantitatively evaluating non-uniform image quality in digital radiographs with and without HEC.
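
As a sketch of the underlying metric (ours; the paper's exact nMI variant may differ, and the function name is hypothetical), a histogram-based normalized mutual information between two images can be computed as:

```python
import numpy as np

def normalized_mi(a, b, bins=32):
    """Normalized mutual information nMI = (H(A) + H(B)) / H(A, B) between two
    images, estimated from a joint intensity histogram (a common
    registration-style formulation). nMI lies in [1, 2]; higher values mean
    more shared intensity structure."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def h(p):                      # Shannon entropy in nats, ignoring zeros
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (h(px) + h(py)) / h(pxy)
```

Comparing nMI between image regions along the anode-cathode axis is one way such a metric can expose heel-effect non-uniformity.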

]]>Entropy doi: 10.3390/e23050524

Authors: Alojz Poredoš

Energy consumption for heating and cooling in buildings and industry accounts for almost half of total energy consumption in all sectors [...]

]]>Entropy doi: 10.3390/e23050523

Authors: Gábor Papp Imre Kondor Fabio Caccioli

Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio r=N/T, where N is the number of different assets in the portfolio, and T is the length of the available time series. The critical ratio depends on the confidence level α, which means we have a line of critical points on the α−r plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an ℓ1 regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric ℓ1 regularizer. Results are presented for the out-of-sample and the in-sample estimator of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high volatility cutoff, in the sense that it sets the weights of the high volatility elements to zero with higher probability than those of the low volatility items. This cutoff renormalizes the aspect ratio r=N/T, thereby extending the range of the feasibility of optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
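
As a sketch of the quantity being optimized (ours, not the paper's replica calculation; the historical estimator below is the textbook one), the empirical ES of a loss sample is simply the average of the losses beyond the empirical α-quantile:

```python
import numpy as np

def expected_shortfall(losses, alpha=0.975):
    """Empirical Expected Shortfall: the average of the losses exceeding the
    empirical alpha-quantile (Value-at-Risk). For a portfolio of N assets
    estimated from T observations, optimizing this quantity becomes unstable
    as the aspect ratio r = N/T approaches the critical line discussed above."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * losses.size))   # index of the VaR cutoff
    tail = losses[k:]
    return tail.mean() if tail.size else losses[-1]
```

Because only the ~(1-α)·T tail observations enter the average, the effective sample size is tiny at high confidence levels, which is the root of the sample fluctuations that the ℓ1 regularizer attenuates.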

]]>Entropy doi: 10.3390/e23050522

Authors: Minhui Hu Kaiwei Zeng Yaohua Wang Yang Guo

Unsupervised domain adaptation is a challenging task in person re-identification (re-ID). Recently, cluster-based methods have achieved good performance; clustering and training are the two important phases in these methods. For clustering, one major issue with existing methods is that they do not fully exploit the information in outliers, either discarding outliers from clusters or simply merging them. For training, existing methods only use source features for pretraining and target features for fine-tuning, and do not make full use of all the valuable information in the source and target datasets. To solve these problems, we propose a Threshold-based Hierarchical clustering method with Contrastive loss (THC). THC has two features: (1) it regards outliers as single-sample clusters that participate in training, which preserves the information in outliers without setting a cluster number and combines the advantages of existing clustering methods; (2) it uses a contrastive loss to make full use of all valuable information, including source-class centroids, target-cluster centroids, and single-sample clusters, thus achieving better performance. We conduct extensive experiments on Market-1501, DukeMTMC-reID, and MSMT17. The results show that our method achieves state-of-the-art performance.

]]>Entropy doi: 10.3390/e23050521

Authors: Zhenyu Zhang Huirong Zhang Lixin Zhou Yanfeng Li

The successful diffusion of mobile applications among user groups can establish a good image for enterprises, earn a good reputation, win market share, and create commercial profits. It is therefore of great significance to study the coevolution of mobile application diffusion and social networks. Firstly, combining the dynamic characteristics of real-life social networks, an evolution mechanism for mobile application users' social networks was designed. Then, a multi-agent model of the coevolution of a social network and mobile application innovation diffusion was constructed. Finally, the impacts of mobile applications' perceived-value revenue, use cost, marketing promotion investment, and number of seed users on the coevolution of the social network and mobile application diffusion were analyzed. The results show that factors such as the network structure, perceived-value revenue, use cost, marketing promotion investment, and number of seed users have an important impact on mobile application diffusion.

]]>Entropy doi: 10.3390/e23050520

Authors: Tao Liang Hao Lu Hexu Sun

The decomposition effect of variational mode decomposition (VMD) mainly depends on the choice of the decomposition number K and the penalty factor α. To select these two parameters, the empirical method or a single-objective optimization method is usually used, but such methods often have limitations and cannot achieve optimal results. Therefore, a multi-objective multi-island genetic algorithm (MIGA) is proposed to optimize the parameters of VMD, and it is applied to feature extraction for bearing faults. First, envelope entropy (Ee) reflects the sparsity of the signal, and Renyi entropy (Re) reflects the energy-aggregation degree of the time-frequency distribution of the signal; therefore, Ee and Re are selected as fitness functions, and the optimal VMD parameters are obtained by the MIGA algorithm. Second, the improved VMD algorithm is used to decompose the bearing fault signal, and the two intrinsic mode functions (IMFs) with the most fault information are selected by the improved kurtosis and Hölder coefficient for reconstruction. Finally, the envelope spectrum of the reconstructed signal is analyzed. Comparative experiments show that this feature extraction method extracts bearing fault features more accurately, and the fault diagnosis model based on it has higher accuracy.
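
As a sketch of the first fitness function (ours, under the common definition of envelope entropy; the analytic signal is built with a NumPy-only Hilbert transform, and the function name is hypothetical):

```python
import numpy as np

def envelope_entropy(x):
    """Envelope entropy Ee of a signal: the Shannon entropy of the normalized
    Hilbert envelope. A sparse envelope (strong periodic fault impulses)
    yields a low Ee, which is why minimizing Ee is used as a fitness
    criterion when tuning VMD's (K, alpha) parameters."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # FFT-based Hilbert transform: zero negative frequencies, double positives.
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    env = np.abs(np.fft.ifft(X * h))   # magnitude of the analytic signal
    p = env / env.sum()                # normalize the envelope to a distribution
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```

A flat envelope attains the maximum value ln(n); an impulsive envelope scores lower, so the MIGA search is driven toward parameter pairs that isolate fault impulses.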

]]>Entropy doi: 10.3390/e23050519

Authors: Karl Svozil

If quantum mechanics is taken for granted, the randomness derived from it may be vacuous or even delusional, yet sufficient for many practical purposes. "Random" quantum events are intimately related to the emergence of space-time, as well as to the identification of physical properties through which so-called objects are aggregated. We also present a brief review of the metaphysics of indeterminism.

]]>Entropy doi: 10.3390/e23050518

Authors: Osamu Komori Shinto Eguchi

Clustering is a major unsupervised learning technique widely applied in data mining and statistical data analysis. Typical examples include k-means, fuzzy c-means, and Gaussian mixture models, which are categorized as hard, soft, and model-based clustering, respectively. We propose a new clustering method, called Pareto clustering, based on the Kolmogorov–Nagumo average, which is defined by a survival function of the Pareto distribution. The proposed algorithm subsumes all the aforementioned clusterings plus maximum-entropy clustering. We introduce a probabilistic framework for the proposed method, in which the underlying distribution that ensures consistency is discussed. We build a minorize-maximization algorithm to estimate the parameters of Pareto clustering. We compare its performance with existing methods in simulation studies and benchmark dataset analyses to demonstrate its high practical utility.
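The Kolmogorov–Nagumo average underlying the method is the quasi-arithmetic mean φ⁻¹((1/n)Σφ(xᵢ)). The paper's specific φ comes from a Pareto survival function; the generic sketch below uses standard textbook choices of φ for illustration, not the paper's:

```python
import math

def kn_average(xs, phi, phi_inv):
    """Kolmogorov-Nagumo (quasi-arithmetic) mean: phi_inv of the
    arithmetic mean of phi(x). phi must be continuous and strictly
    monotone so that phi_inv is well defined."""
    return phi_inv(sum(phi(x) for x in xs) / len(xs))

# phi = identity recovers the arithmetic mean;
# phi = log recovers the geometric mean.
arith = kn_average([1.0, 2.0, 3.0], lambda x: x, lambda x: x)
geom = kn_average([1.0, 4.0], math.log, math.exp)
```

Choosing φ from a Pareto survival function, as the paper does, interpolates between such means and is what lets one algorithm subsume hard, soft, model-based, and maximum-entropy clustering.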

]]>Entropy doi: 10.3390/e23050517

Authors: Leonardo Rydin Gorjão Dirk Witthaut Klaus Lehnertz Pedro G. Lind

With the aim of improving the reconstruction of stochastic evolution equations from empirical time-series data, we derive a full representation of the generator of the Kramers–Moyal operator via a power-series expansion of the exponential operator. This expansion is necessary for deriving the different terms in a stochastic differential equation. With the full representation of this operator, we are able to separate finite-time corrections of the power-series expansion of arbitrary order into terms with and without derivatives of the Kramers–Moyal coefficients. We arrive at a closed-form solution expressed through conditional moments, which can be extracted directly from time-series data with a finite sampling interval. We provide all finite-time correction terms for parametric and non-parametric estimation of the Kramers–Moyal coefficients for discontinuous processes, which can be easily implemented—employing Bell polynomials—in time-series analyses of stochastic processes. With exemplary cases of insufficiently sampled diffusion and jump-diffusion processes, we demonstrate the advantages of our arbitrary-order finite-time corrections and their impact on distinguishing diffusion and jump-diffusion processes strictly from time-series data.
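For context, the lowest-order (uncorrected) conditional-moment estimators that the paper's finite-time corrections refine can be sketched as follows. This is a minimal NumPy illustration on a simulated Ornstein–Uhlenbeck process, not the authors' corrected estimator:

```python
import numpy as np

def km_coefficients(x, dt, bins=20, min_count=50):
    """Lowest-order Kramers-Moyal coefficient estimates from conditional
    moments of the increments: D1 = <dX|X>/dt, D2 = <dX^2|X>/(2 dt).
    These carry the finite-sampling-interval bias that higher-order
    corrections remove."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, bins - 1)
    d1 = np.full(bins, np.nan)
    d2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() >= min_count:          # skip sparsely populated bins
            d1[b] = dx[m].mean() / dt
            d2[b] = (dx[m] ** 2).mean() / (2 * dt)
    return mid, d1, d2

# Ornstein-Uhlenbeck test process: dX = -X dt + 0.5 dW
rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

mid, d1, d2 = km_coefficients(x, dt)
```

For this process the fitted drift slope should be close to −1 and D₂ close to σ²/2 = 0.125; the systematic deviations that appear at coarser sampling intervals are precisely what the paper's finite-time corrections address.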

]]>Entropy doi: 10.3390/e23050516

Authors: Yanqiang Guo Tong Liu Tong Zhao Haojie Zhang Xiaomin Guo

By frequency-band extracting, we experimentally and theoretically investigate time-delay signature (TDS) suppression and entropy growth enhancement of a chaotic optical-feedback semiconductor laser under different injection currents and feedback strengths. The TDS and entropy growth are quantified by the peak value of autocorrelation function and the difference of permutation entropy at the feedback delay time. At the optimal extracting bandwidth, the measured TDS is suppressed up to 96% compared to the original chaos, and the entropy growth is higher than the noise-dominated threshold, indicating that the dynamical process is noisy. The effects of extracting bandwidth and radio frequencies on the TDS and entropy growth are also clarified experimentally and theoretically. The experimental results are in good agreements with the theoretical results. The skewness of the laser intensity distribution is effectively improved to 0.001 with the optimal extracting bandwidth. This technique provides a promising tool to extract randomness and prepare desired entropy sources for chaotic secure communication and random number generation.
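The TDS metric described above, the peak of the autocorrelation function around the feedback delay, can be sketched in a few lines. This is a minimal illustration with hypothetical lag and window parameters, not the authors' exact measurement pipeline:

```python
import numpy as np

def tds(x, delay_lag, window=5):
    """Time-delay signature: largest absolute value of the normalized
    autocorrelation within +/- window samples of the expected delay."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                      # normalize so ac[0] == 1
    lo, hi = delay_lag - window, delay_lag + window + 1
    return float(np.max(np.abs(ac[lo:hi])))

# A signal with an embedded echo at lag 100 shows a clear TDS peak;
# white noise does not.
rng = np.random.default_rng(1)
e = rng.standard_normal(5_100)
echoed = e[100:] + 0.8 * e[:-100]
white = rng.standard_normal(5_000)
```

Suppressing the autocorrelation peak at the feedback delay (here, driving `tds` toward the white-noise floor) is what hides the delay from an eavesdropper and improves the entropy source.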

]]>Entropy doi: 10.3390/e23050515

Authors: Kai-Yuan Lai Yu-Tang Lee Ta-Hua Lai Yao-Hsien Liu

This study examined the characteristics of the trilateral flash cycle (TFC) and the partially evaporating cycle (PEC) using a low-grade heat source at 80 °C. The evaporation temperature and mass flow rate of the working fluids and the quality at the expander inlet were optimized through pinch-point observation, which can help advance methods for determining the best design points and their operating conditions. The results indicated that the partially evaporating cycle could solve the high-volume-ratio problem without sacrificing net power and thermal efficiency. When the system's saturation temperature decreased by 10 °C, the net power, thermal efficiency, and volume ratio of the trilateral flash cycle system decreased by approximately 20%. Under the same operating conditions, the net power and thermal efficiency of the partially evaporating cycle system decreased by only about 3%, while the volume ratio decreased by more than 50%. When the system operating temperature was below 63 °C, each fluid's volume ratio could be reduced to approximately 5. The features of the partially evaporating cycle thus solve the problem of excessive over-expansion while maintaining ideal power-generation efficiency and easing expander manufacturing.

]]>