Entropy doi: 10.3390/e22060594

Authors: George Livadiotis

The concept of duality of probability distributions constitutes a fundamental “brick” in the framework of nonextensive statistical mechanics, the generalization of Boltzmann–Gibbs statistical mechanics built on the q-entropy. The probability duality solves long-standing issues of the theory; for example, it ensures the additivity of the internal energy given the additivity of the energy of microstates. However, it is a rather complex part of the theory and certainly cannot be trivially explained along the Gibbs path of entropy maximization. Recently, it was shown that an alternative picture exists that considers a dual entropy instead of a dual probability. In particular, the framework of nonextensive statistical mechanics can be equivalently developed using the q- and 1/q-entropies. The canonical probability distribution again coincides with the known q-exponential distribution, but without requiring the duality of ordinary and escort probabilities. Furthermore, it is shown that the dual entropies, the q-entropy and the 1/q-entropy, as well as the 1-entropy, are involved in an identity that is useful in theoretical developments and applications.
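For readers unfamiliar with the notation, the q-entropy here is the Tsallis form S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Shannon (1-)entropy as q → 1. The identity linking the q-, 1/q- and 1-entropies is derived in the paper and not reproduced here; the sketch below merely evaluates the three entropies for an arbitrary illustrative distribution:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1); the q -> 1
    limit gives the Shannon (1-)entropy in nats."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]               # illustrative distribution, not from the paper
q = 2.0
S_q = tsallis_entropy(p, q)       # q-entropy
S_inv = tsallis_entropy(p, 1 / q) # dual 1/q-entropy
S_1 = tsallis_entropy(p, 1.0)     # 1-entropy (Shannon)
```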

Entropy doi: 10.3390/e22060593

Authors: Gareth Hughes

The predictive receiver operating characteristic (PROC) curve is a diagrammatic format used in the statistical evaluation of probabilistic disease forecasts. It differs from the better-known receiver operating characteristic (ROC) curve in that it provides a basis for evaluation using metrics conditioned on the outcome of the forecast rather than on the actual disease status. Starting from the binormal ROC curve formulation, an overview of some previously published binormal PROC curves is presented in order to place the PROC curve in the context of other methods used in the statistical evaluation of probabilistic disease forecasts based on the analysis of predictive values, in particular the index of separation (PSEP) and the leaf plot. An information-theoretic perspective on evaluation is also outlined. Five straightforward recommendations are made with a view to aiding understanding and interpretation of the sometimes complex patterns generated by PROC curve analysis. The PROC curve and related analyses augment the perspective provided by traditional ROC curve analysis. Here, the binormal ROC model provides the exemplar for investigation of the PROC curve, but potential applications extend to analyses based on other distributional models, as well as to empirical analysis.
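As a rough illustration of the quantities behind predictive-value analysis, the sketch below computes PPV and NPV from a binormal model (non-diseased scores N(0,1), diseased scores N(μ,σ)); these are the ingredients from which a PROC curve is traced as the threshold varies. The parameter values and the prevalence are arbitrary assumptions, not taken from the paper:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal for the non-diseased group

def confusion_rates(t, mu=1.5, sigma=1.0):
    """Binormal model: non-diseased ~ N(0,1), diseased ~ N(mu, sigma)."""
    fpr = 1.0 - N.cdf(t)                      # false positive rate at threshold t
    tpr = 1.0 - NormalDist(mu, sigma).cdf(t)  # true positive rate at threshold t
    return fpr, tpr

def predictive_values(t, prevalence=0.1, mu=1.5, sigma=1.0):
    """Predictive values (conditioned on the forecast outcome) via Bayes' rule."""
    fpr, tpr = confusion_rates(t, mu, sigma)
    ppv = prevalence * tpr / (prevalence * tpr + (1 - prevalence) * fpr)
    npv = (1 - prevalence) * (1 - fpr) / (
        (1 - prevalence) * (1 - fpr) + prevalence * (1 - tpr))
    return ppv, npv

# sweep the threshold to trace out the predictive-value pairs
curve = [predictive_values(t) for t in [-1.0, 0.0, 1.0, 2.0]]
```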

Entropy doi: 10.3390/e22050592

Authors: Mohamed Mansour Mahdi Rasekhi Mohamed Ibrahim Khaoula Aidi Haitham Yousof Enayat Elrazik

In this paper, we first study a new two-parameter lifetime distribution. This distribution accommodates both monotone and non-monotone hazard rate functions, which are useful in lifetime data analysis and reliability. Some of its mathematical properties, including explicit expressions for the ordinary and incomplete moments, generating function, Rényi entropy, δ-entropy, order statistics, and probability weighted moments, are derived. Non-Bayesian estimation methods such as maximum likelihood, Cramér–von Mises, percentile estimation, and L-moments are used to estimate the model parameters. The importance and flexibility of the new distribution are illustrated by means of two applications to real data sets. Using the approach of the Bagdonavičius–Nikulin goodness-of-fit test for right-censored validation, we then propose and apply a modified chi-square goodness-of-fit test for the Burr X Weibull model. The modified goodness-of-fit statistic is applied to a right-censored real data set. Based on the censored maximum likelihood estimators of the initial data, the modified goodness-of-fit test recovers the loss of information, while the grouped data follow the chi-square distribution. The elements of the modified test criteria are derived. A further real data application illustrates validation under the uncensored scheme.

Entropy doi: 10.3390/e22050591

Authors: Shanyun Liu Rui She Zheqi Zhu Pingyi Fan

This paper focuses on the problem of lossy compression storage when the storage size remains insufficient after conventional lossless data compression, based on the data value that represents the subjective assessment of users. To this end, we transform the problem into an optimization that pursues the least importance-weighted reconstruction error within a limited total storage size, where the importance characterizes the data value from the users' viewpoint. On this basis, the paper puts forward an optimal allocation strategy for the storage of digital data under an exponential distortion measure, which makes rational use of all the storage space. In fact, the theoretical results show that the solution is a kind of restrictive water-filling. It also characterizes the trade-off between the relative weighted reconstruction error and the available storage size. Consequently, if a relatively small part of the total data value is allowed to be lost, this strategy improves the performance of data compression. Furthermore, the paper also shows that both users' preferences and special characteristics of the data distribution can trigger small-probability event scenarios in which only a fraction of the data covers the vast majority of users' interests. In either case, data with highly clustered message importance are beneficial for compression storage. In contrast, from the perspective of optimal storage space allocation based on data value, data with a uniform information distribution are incompressible, which is consistent with information theory.
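The restrictive water-filling structure can be illustrated with a hedged sketch. The paper uses an exponential distortion measure; for concreteness, the code below instead assumes the classical Gaussian quadratic distortion-rate curve D_i = σ_i²·2^(−2R_i), so minimizing Σ w_i D_i under a total-rate budget gives D_i = min(θ/w_i, σ_i²) for a water level θ found by bisection. The structure (a common level, clipped per source) is what carries over; the per-source curve is an assumption here:

```python
import math

def weighted_waterfill(variances, weights, total_rate, tol=1e-10):
    """Weighted reverse water-filling sketch: minimise sum_i w_i * D_i
    subject to sum_i R_i = total_rate with D_i = var_i * 2**(-2 R_i).
    The optimum is D_i = min(theta / w_i, var_i) for a water level theta."""
    def rate(theta):
        # total rate spent when the water level is theta (decreasing in theta)
        return sum(max(0.0, 0.5 * math.log2(v * w / theta))
                   for v, w in zip(variances, weights))
    lo, hi = tol, max(v * w for v, w in zip(variances, weights))
    while hi - lo > tol:          # bisection on the water level
        mid = 0.5 * (lo + hi)
        if rate(mid) > total_rate:
            lo = mid              # level too low: spending too much rate
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return [min(theta / w, v) for v, w in zip(variances, weights)]
```

Sources with higher importance weight w_i receive lower distortion, which mirrors the importance-weighted allocation described above.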

Entropy doi: 10.3390/e22050590

Authors: Alexis Lozano Pedro Cabrera Ana M. Blanco-Marigorta

Technological innovations are not enough by themselves to achieve social and environmental sustainability in companies. Sustainable development aims to determine the environmental impact of a product and the hidden price of products and services through the concept of radical transparency, meaning that companies should disclose the environmental impact of any good or service. In this way, the consumer can choose transparently, and not only on price. The use of an eco-label such as the European eco-label, whose criteria are based on life cycle assessment, could provide an indicator of corporate social responsibility for a given product. However, it does not fully guarantee that the product was obtained in a sustainable manner. The aim of this work is to provide a way of calculating the value of the environmental impacts of an industrial product under different operating conditions, so that each company can provide detailed information on the impacts of its products, information that can form part of its "green product sheet". As a case study, the daily production of a newspaper printed by coldset has been chosen. Each process involved in production was configured with raw material and energy consumption information from production plants, manufacturer data, and existing databases. Four non-linear regression models were trained to estimate the impact of a newspaper's circulation from five input variables (pages, grammage, height, paper type, and print run) with 5508 data samples each. These models were trained using the Levenberg–Marquardt nonlinear least-squares algorithm. The mean absolute percentage errors (MAPE) obtained by all the non-linear regression models tested were less than 5%. Through the proposed correlations, it is possible to obtain a score that reports the impact of the product for different operating conditions and several types of raw materials.
Ecolabelling can be further developed by incorporating a scoring system for the impact caused by the product or process, using a standardised impact methodology.
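The MAPE metric reported above has a one-line definition; a minimal sketch (the numbers in the test case are arbitrary, not the paper's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))
```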

Entropy doi: 10.3390/e22050589

Authors: Cheng-Yi Lin Ja-Ling Wu

In theory, high key and plaintext sensitivities are a must for a cryptosystem to resist chosen/known-plaintext and differential attacks. High plaintext sensitivity can be achieved by ensuring that each encrypted result is plaintext-dependent. In this work, we present a detailed cryptanalysis of a published chaotic map-based image encryption system whose encryption process is plaintext-image-dependent. We show that some design flaws make the published cryptosystem vulnerable to chosen-plaintext attack, and we then propose an enhanced algorithm to overcome those flaws.

Entropy doi: 10.3390/e22050588

Authors: Xin Yang Xian Zhao Xu Gong Xiaoguang Yang Chuangxia Huang

The investigation of systemically important financial institutions (SIFIs) has become a hot topic in financial risk management. Making full use of 5-min high-frequency data, and with the help of the entropy-weight technique for order preference by similarity to ideal solution (TOPSIS), this paper builds a jump volatility spillover network of China's financial institutions to measure the SIFIs. We find that: (i) state-owned depositories and large insurers are identified as SIFIs according to their entropy-weight TOPSIS scores; (ii) the total connectedness of the financial institution network reveals that Industrial Bank, Ping An Bank, and Pacific Securities play an important role when the financial market is under pressure, especially during the subprime crisis, the European sovereign debt crisis, and China's stock market disaster; (iii) interestingly, some small financial institutions are also SIFIs during a financial crisis and cannot be ignored.
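A minimal sketch of the generic entropy-weight TOPSIS scoring step (the decision matrix below is a hypothetical benefit-criteria example; the paper applies the technique to spillover-network connectedness measures):

```python
import math

def entropy_weight_topsis(matrix):
    """Score alternatives (rows) over benefit criteria (columns) by
    entropy-weight TOPSIS; higher closeness score = better alternative."""
    n, m = len(matrix), len(matrix[0])
    # entropy weights: criteria whose values vary more across alternatives
    # carry more discriminating information, hence larger weight
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1.0 - e)
    wsum = sum(weights)
    weights = [w / wsum for w in weights]
    # vector-normalised, weighted decision matrix
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    V = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    best = [max(V[i][j] for i in range(n)) for j in range(m)]
    worst = [min(V[i][j] for i in range(n)) for j in range(m)]
    # closeness to the ideal solution
    scores = []
    for i in range(n):
        d_best = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(m)))
        d_worst = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(m)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```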

Entropy doi: 10.3390/e22050587

Authors: Nestor Caticha

We study the dynamics of information processing in the continuum depth limit of deep feed-forward neural networks (NNs) and find that it can be described in a language similar to the renormalization group (RG). The association of concepts to patterns by an NN is analogous to the identification, by the RG, of the few variables that characterize the thermodynamic state obtained from microstates. To see this, we encode the information about the weights of an NN in a Maxent family of distributions. The location hyperparameters represent the weight estimates. Bayesian learning of a new example determines new constraints on the generators of the family, yielding a new probability distribution that can be seen as an entropic dynamics of learning, in which the hyperparameters change along the gradient of the evidence. For a feed-forward architecture, the evidence can be written recursively from the evidence up to the previous layer, convolved with an aggregation kernel. The continuum limit leads to a diffusion-like PDE analogous to Wilson's RG, but with an aggregation kernel that depends on the weights of the NN, different from those that integrate out ultraviolet degrees of freedom. This can be recast in the language of dynamic programming with an associated Hamilton–Jacobi–Bellman equation for the evidence, where the control is the set of weights of the neural network.

Entropy doi: 10.3390/e22050586

Authors: Julio A. López-Saldívar Margarita A. Man’ko Vladimir I. Man’ko

In the differential approach elaborated, we study the evolution of the parameters of Gaussian, mixed, continuous-variable density matrices whose dynamics are given by Hermitian Hamiltonians expressed as quadratic forms of the position and momentum operators or quadrature components. Specifically, we obtain, in generic form, the differential equations for the covariance matrix, the mean values, and the density matrix parameters of a multipartite Gaussian state unitarily evolving according to a Hamiltonian Ĥ. We also present the corresponding differential equations describing the nonunitary evolution of the subsystems. The resulting nonlinear equations are used to solve for the dynamics of the system instead of the Schrödinger equation. The formalism elaborated allows us to define new specific invariant and quasi-invariant states, as well as states with invariant covariance matrices, i.e., states where only the mean values evolve according to the classical Hamilton equations. By using density matrices in the position and in the tomographic-probability representations, we study examples of these properties. As examples, we present novel invariant states for the two-mode frequency converter and quasi-invariant states for the bipartite parametric amplifier.

Entropy doi: 10.3390/e22050585

Authors: Jiangyi Wang Xiaoqiang Hua Xinwu Zeng

The symmetric positive definite (SPD) matrix has attracted much attention in classification problems because of its remarkable performance. This performance is due to the underlying structure of the Riemannian manifold with non-negative curvature, as well as the use of non-linear geometric metrics, which have a stronger ability to distinguish SPD matrices and reduce information loss compared with the Euclidean metric. In this paper, we propose a spectral-based SPD matrix signal detection method with deep learning that uses time-frequency spectra to construct SPD matrices and then exploits a deep SPD matrix learning network to detect the target signal. With this approach, the signal detection problem is transformed into a binary classification problem on a manifold: judging whether an input sample contains the target signal or not. Two matrix models are applied, namely an SPD matrix based on spectral covariance and an SPD matrix based on spectral transformation. A simulated-signal dataset and a semi-physical simulated-signal dataset are used to demonstrate that the method achieves a gain of 1.7–3.3 dB under appropriate conditions. The results show that our proposed method achieves better detection performance than its state-of-the-art spectral counterparts that use convolutional neural networks.

Entropy doi: 10.3390/e22050584

Authors: Rossi Murari Gaudio

Determining the coupling between systems remains a topic of active research in the field of complex science. Identifying the proper causal influences in time series can already be very challenging in the trivariate case, particularly when the interactions are non-linear. In this paper, the coupling between three Lorenz systems is investigated with the help of specifically designed artificial neural networks called time delay neural networks (TDNNs). TDNNs can learn from their previous inputs and are therefore well suited to extracting the causal relationships between time series. The TDNNs tested performed consistently well, showing an excellent capability to identify the correct causal relationships in the absence of significant noise. The first tests on the time localization of the mutual influences and on the effects of Gaussian noise have also provided very encouraging results. Although further assessment is necessary, networks of the proposed architecture have the potential to be a good complement to the other techniques available for the investigation of mutual influences between time series.

Entropy doi: 10.3390/e22050583

Authors: Nicholas V. Sarlis Efthimios S. Skordas Stavros-Richard G. Christopoulos Panayiotis A. Varotsos

It has been reported that major earthquakes are preceded by Seismic Electric Signals (SES). Observations show that, in the natural time analysis of an earthquake (EQ) catalog, an SES activity starts when the fluctuations of the order parameter of seismicity exhibit a minimum. Fifteen distinct minima, observed simultaneously at two different natural time scales and deeper than a certain threshold, are found on analyzing the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of the M9 Tohoku EQ occurrence) 1 to 3 months before large EQs. Six of these 15 minima preceded all shallow EQs of magnitude 7.6 or larger, while the other nine were followed by smaller EQs. The latter false positives can be excluded by a proper procedure (J. Geophys. Res. Space Physics 2014, 119, 9192–9206) that considers aspects of EQ networks based on similar activity patterns. These results are studied here by means of the receiver operating characteristics (ROC) technique, focusing on the area under the ROC curve (AUC). This area, currently considered an effective way to summarize the overall diagnostic accuracy of a test, equals 1 for a perfectly accurate test. Here, we find an AUC of around 0.95, which is evaluated as outstanding.
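The AUC used here admits a direct probabilistic reading: it is the Mann–Whitney probability that a randomly chosen true precursor scores above a randomly chosen non-precursor. A minimal sketch (the score lists are illustrative placeholders, not the paper's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs ranked correctly, ties counting one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```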

Entropy doi: 10.3390/e22050582

Authors: Francesco Buono Maria Longobardi

Extropy has recently been introduced as the dual concept of entropy. Moreover, in the context of the Dempster–Shafer theory of evidence, Deng studied a new measure of discrimination, named the Deng entropy. In this paper, we define the Deng extropy, study its relation with the Deng entropy, and propose examples to compare them. The behaviour of the Deng extropy is studied under changes of focal elements. A characterization result is given for the maximum Deng extropy and, finally, a numerical example in pattern recognition is discussed to highlight the relevance of the new measure.

Entropy doi: 10.3390/e22050581

Authors: Yunus Can Gültekin Tobias Fehenberger Alex Alvarado Frans M. J. Willems

In this paper, we provide a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. However, as the blocklength decreases, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM) and SpSh are reviewed as energy-efficient shaping techniques. Numerical results show that both have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize energy efficiency, is shown to have the minimum rate loss amongst all the techniques considered, which is particularly apparent for ultra-short blocklengths. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspectives of latency, storage and computations.
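The CCDM rate loss mentioned above can be evaluated for a given composition as H(P) − k/n, where P is the empirical distribution of the composition and 2^k is the number of constant-composition sequences (the multinomial coefficient; k need not be an integer for this illustration). A sketch showing the loss shrinking with blocklength (the compositions are arbitrary examples):

```python
import math

def ccdm_rate_loss(counts):
    """Per-symbol rate loss of constant-composition distribution matching:
    entropy of the empirical distribution minus (log2 multinomial)/n."""
    n = sum(counts)
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c > 0)
    # log2 of the multinomial coefficient n! / prod(counts_i!), via lgamma
    log_multinomial = (math.lgamma(n + 1)
                       - sum(math.lgamma(c + 1) for c in counts)) / math.log(2)
    return entropy - log_multinomial / n

# the rate loss shrinks as the blocklength grows with the same composition
short = ccdm_rate_loss([2, 2])      # n = 4
long_ = ccdm_rate_loss([200, 200])  # n = 400
```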

Entropy doi: 10.3390/e22050580

Authors: Nicholas M. Timme David Linsenbardt Christopher C. Lapish

Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of the information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate whether the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine whether two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine whether the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine whether the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided.
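A minimal sketch of the two ingredients described: a plug-in mutual-information estimate for discrete variables, and a shuffle-based surrogate null. The surrogate scheme here is a plain permutation of one variable, a simplification of (not a substitute for) the paper's methods:

```python
import random
from collections import Counter
from math import log2

def mutual_information(x, y):
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """Compare observed MI to a null built by shuffling y, which breaks
    the pairing while preserving both marginal distributions."""
    rng = random.Random(seed)
    observed = mutual_information(x, y)
    y = list(y)
    null = []
    for _ in range(n_perm):
        rng.shuffle(y)
        null.append(mutual_information(x, y))
    return observed, sum(v >= observed for v in null) / n_perm
```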

Entropy doi: 10.3390/e22050579

Authors: Tahir Jalal Kim

Advancements in wearable sensor technologies have a prominent effect on the daily life activities of humans. These wearable sensors are gaining awareness in healthcare for the elderly, to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors, including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data are processed via multiple filters, such as Savitzky–Golay, median, and Hampel filters, to examine lower/upper cutoff frequency behaviors. Second, a multifused model of statistical, wavelet, and binary features is extracted to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adapt the learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM), whose empirical expectation and highest-entropy criteria measure signal variances, yielding improved accuracy. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark and on the Intelligent Media Sporting Behavior (IMSB) dataset, a new self-annotated sports dataset. Using the "leave-one-out" cross-validation scheme, the results outperformed existing well-known statistical state-of-the-art methods, achieving improved recognition accuracies of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man–machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance.
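Of the filters listed, the Hampel filter is easily sketched: a sample is replaced by the local median when it deviates from that median by more than t scaled median absolute deviations (MADs). The window size and threshold below are arbitrary defaults, not the paper's settings:

```python
from statistics import median

def hampel(x, half_window=3, t=3.0):
    """Hampel outlier filter: replace x[i] by the window median when it lies
    more than t scaled MADs away (1.4826 makes MAD consistent with the
    standard deviation for Gaussian data)."""
    out = list(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        med = median(window)
        mad = median(abs(v - med) for v in window)
        if abs(x[i] - med) > t * 1.4826 * mad:
            out[i] = med
    return out
```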

Entropy doi: 10.3390/e22050578

Authors: Sangyeol Lee Chang Kyeom Kim Sangjo Lee

This study considers the problem of detecting a change in the conditional variance of time series with time-varying volatilities, based on the cumulative sum (CUSUM) of squares test applied to the residuals from support vector regression (SVR)-generalized autoregressive conditional heteroscedastic (GARCH) models. To compute the residuals, we first fit SVR-GARCH models with different tuning parameters on a training set time series. We then select the best SVR-GARCH model, with the optimal tuning parameters, on a validation set time series. Subsequently, based on the selected model, we obtain the residuals as well as the estimates of the conditional volatility and employ these to construct the residual CUSUM of squares test. Monte Carlo simulation experiments illustrate its validity for various linear and nonlinear GARCH models. A real data analysis of the S&P 500 index, the Korea Composite Stock Price Index (KOSPI), and the Korean won/U.S. dollar (KRW/USD) exchange rate datasets exhibits its scope of application.
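A hedged sketch of a residual CUSUM-of-squares statistic of the kind described: track how far the cumulative share of squared residuals drifts from the uniform line k/n, and scale the supremum. The sqrt(n/2) standardization assumed below is the classical Gaussian one; the paper's exact construction may differ. The simulated variance-change series is purely illustrative:

```python
import math
import random

def cusum_of_squares(residuals):
    """Maximal deviation of the cumulative squared-residual share from the
    uniform line, scaled by sqrt(n/2); large values signal a variance change."""
    n = len(residuals)
    total = sum(e * e for e in residuals)
    stat, cum = 0.0, 0.0
    for k, e in enumerate(residuals, start=1):
        cum += e * e
        stat = max(stat, abs(cum / total - k / n))
    return math.sqrt(n / 2.0) * stat

rng = random.Random(1)
calm = [rng.gauss(0, 1) for _ in range(500)]                  # constant variance
burst = calm[:250] + [rng.gauss(0, 3) for _ in range(250)]    # variance change
```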

Entropy doi: 10.3390/e22050577

Authors: Serena Di Santo Vanni De Luca Alessio Isaja Sara Andreetta

Recently, there has been increasing interest in techniques for enhancing working memory (WM), casting new light on the classical picture of a rigid system. One reason is that WM performance has been associated with intelligence and reasoning, while its impairment correlates with cognitive deficits; hence the possibility of training it is highly appealing. However, results on WM changes following training are controversial, leaving it unclear whether WM can really be potentiated. This study assesses changes in WM performance by comparing it with and without training by a professional mnemonist. Two groups, experimental and control, participated in the study, which was organized in two phases. In the morning, both groups were familiarized with the stimuli through an N-back task and then attended a 2-hour lecture. For the experimental group, the lecture, given by the mnemonist, introduced memory encoding techniques; for the control group, it was a standard academic lecture about memory systems. In the afternoon, both groups were administered five tests in which they had to remember the positions of 16 items when probed in random order. The results show much better performance in trained subjects, indicating the need to consider this possibility of enhancement, alongside general information-theoretic constraints, when theorizing about the WM span.

Entropy doi: 10.3390/e22050576

Authors: Ahmed Zeghid A.Imtiaz Sharma Chehri Fortier

In this paper, we present a new algorithm for generating two-dimensional (2D) permutation vector (PV) codes for incoherent optical code division multiple access (OCDMA) systems, to suppress multiple-access interference (MAI) and reduce system complexity. The proposed code design approach is based on the wavelength-hopping time-spreading (WHTS) technique for code generation. All possible combinations of PV code sets were attained by employing all permutations of the vectors, with each vector repeated weight (W) times. Further, the 2D-PV code set was constructed by combining two code sequences of the 1D-PV code. The transmitter-receiver architecture of the 2D-PV code-based WHTS OCDMA system is presented. Results indicate that the 2D-PV code provides increased cardinality by eliminating phase-induced intensity noise (PIIN) effects, and multiple users' data can be transmitted with a minimum likelihood of interference. Simulation results validated the proposed system for an acceptable bit error rate (BER) of 10⁻⁹.

Entropy doi: 10.3390/e22050575

Authors: Nadia Alshahwan Earl T. Barr David Clark George Danezis Héctor D. Menéndez

Malware concealment is the predominant strategy for malware propagation. Black hats create variants of malware based on polymorphism and metamorphism. Malware variants, by definition, share some information. Although the concealment strategy alters this information, there are still patterns in the software. Given a zoo of labelled malware and benign-ware, we ask whether a suspect program is more similar to our malware or to our benign-ware. Normalized Compression Distance (NCD) is a generic metric that measures the shared information content of two strings. This measure opens a new front in the malware arms race, one where the countermeasures promise to be more costly for malware writers, who must now obfuscate patterns as strings qua strings, without reference to execution, in their variants. Our approach classifies disk-resident malware with 97.4% accuracy and a false positive rate of 3%. We demonstrate that its accuracy can be improved by combining NCD with the compressibility rates of the executables using decision forests, paving the way for future improvements. We demonstrate that malware reported within a narrow time frame of a few days is more homogeneous than malware reported over two years, but that our method still classifies the latter with 95.2% accuracy and a 5% false positive rate. Due to its use of compression, the time and computation cost of our method is nontrivial. We show that simple approximation techniques can improve its running time by up to 63%. We compare our results to those of the 59 anti-malware programs used on the VirusTotal website, applied to our malware. Our approach outperforms each of them used alone and matches all of them used collectively.
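NCD is defined as (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length under a real compressor. A sketch with zlib; the byte strings are toy stand-ins for executables, not the paper's corpus:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: values near 0 mean the strings share
    most of their information; values near 1 mean they are unrelated."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"push ebp mov ebp esp " * 50      # toy "program"
b_ = a.replace(b"esp", b"eax")         # a close variant of it
c = bytes(range(256)) * 4              # unrelated byte pattern
```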

Entropy doi: 10.3390/e22050571

Authors: Yuang Wang Shanhua Zou Yun Mao Ying Guo

Underwater quantum key distribution (QKD) is difficult but important for modern underwater communications in an insecure environment. It can guarantee secure underwater communication between submarines and enhance safety for critical network nodes. To enhance the performance of underwater continuous-variable quantum key distribution (CVQKD) in terms of both maximal transmission distance and secret key rate, we adopt measurement-device-independent (MDI) quantum key distribution with zero-photon catalysis (ZPC) performed at the emitter of one side, i.e., ZPC-based MDI-CVQKD. Numerical simulation shows that the ZPC-involved scheme, which is a Gaussian operation in essence, works better than the single-photon subtraction (SPS)-involved scheme in the extreme asymmetric case. We find that the transmission distance of the ZPC-involved scheme is longer than that of the SPS-involved scheme. In addition, we consider the effects of temperature, salinity and solar elevation angle on the system performance in pure seawater. The maximal transmission distance decreases with increasing temperature and decreasing solar elevation angle, while it changes little over a broad range of salinity.

Entropy doi: 10.3390/e22050572

Authors: Todd K. Moon Jacob H. Gunther

Estimating the parameters of an autoregressive (AR) random process is a well-studied problem. In many applications, only noisy measurements of the AR process are available. The effect of the additive noise is that the system can be modeled as an AR model with colored noise, even when the measurement noise is white, where the correlation matrix depends on the AR parameters. Because of this correlation, it is expedient to compute estimates using multiple stacked observations. Performing a weighted least-squares estimation of the AR parameters using an inverse covariance weighting can provide significantly better parameter estimates, with the improvement increasing with the stack depth. The estimation algorithm is essentially a vector RLS adaptive filter with a time-varying covariance matrix. Different ways of estimating the unknown covariance are presented, as well as a method to estimate the variances of the AR and observation noise. The formulation is extended to vector autoregressive (VAR) processes. Simulation results demonstrate performance improvements in coefficient error and in spectrum estimation.
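The degradation that motivates the weighted approach can be seen in a toy AR(1) experiment: plain least squares on the clean process recovers the coefficient, while the same estimator on noise-corrupted measurements is attenuated toward zero. This is only an illustration of the problem, not the paper's weighted RLS algorithm; the parameter values are arbitrary:

```python
import random

def ar1_ls(y):
    """Ordinary least-squares AR(1) estimate:
    a_hat = sum_t y[t] y[t-1] / sum_t y[t-1]^2."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

rng = random.Random(0)
a = 0.9                                    # true AR(1) coefficient
x, xs = 0.0, []
for _ in range(20000):
    x = a * x + rng.gauss(0, 1)            # clean AR(1) process
    xs.append(x)
noisy = [v + rng.gauss(0, 1) for v in xs]  # white measurement noise added

clean_hat = ar1_ls(xs)     # close to 0.9
noisy_hat = ar1_ls(noisy)  # attenuated: the bias the weighted method corrects
```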

Entropy doi: 10.3390/e22050570

Authors: Itay Azizi Yitzhak Rabin

We use Langevin dynamics simulations to study dense 2D systems of particles with both size and energy polydispersity. We compare two types of bidisperse systems that differ in the correlation between particle size and interaction parameters: in one system, big particles have high interaction parameters and small particles have low ones, while in the other the situation is reversed. We study the different phases of the two systems and compare them to those of a system with size but not energy bidispersity. We show that, depending on the strength of the interaction between big and small particles, cooling to low temperatures yields either homogeneous glasses or mosaic crystals. We find that systems with low mixing interaction undergo partial freezing of one of the components at intermediate temperatures, and that while this phenomenon is energy-driven in systems with both size and energy bidispersity, it is controlled by entropic effects in systems with size bidispersity only.

Entropy doi: 10.3390/e22050574

Authors: Constantinos Papadimitriou Georgios Balasis Adamantia Zoe Boutsi Ioannis A. Daglis Omiros Giannakis Anastasios Anastasiadis Paola Michelis Giuseppe Consolini

The continuously expanding toolbox of nonlinear time series analysis techniques has recently highlighted the importance of dynamical complexity for understanding the behavior of the complex solar wind–magnetosphere–ionosphere–thermosphere coupling system and its components. Here, we apply such new approaches, mainly a series of entropy methods, to the time series of the Earth's magnetic field measured by the Swarm constellation. We show successful applications of methods originating in information theory to quantitatively study complexity in the dynamical response of the topside ionosphere, at Swarm altitudes, focusing on the most intense magnetic storm of solar cycle 24, that is, the St. Patrick's Day storm of March 2015. These entropy measures are utilized for the first time to analyze data from a low-Earth orbit (LEO) satellite mission flying in the topside ionosphere. These approaches may hold great potential for improved space weather nowcasts and forecasts.

Entropy doi: 10.3390/e22050573

Authors: Francesca Tria Irene Crimaldi Giacomo Aletti Vito D. P. Servedio

Taylor&rsquo;s law quantifies the scaling properties of the fluctuations of the number of innovations occurring in open systems. Urn-based modeling schemes have already proven to be effective in modeling this complex behaviour. Here, we present analytical estimations of Taylor&rsquo;s law exponents in such models by leveraging their representation in terms of triangular urn models. We also highlight the correspondence of these models with Poisson&ndash;Dirichlet processes and demonstrate how a non-trivial Taylor&rsquo;s law exponent is a kind of universal feature in systems related to human activities. We base this result on the analysis of four collections of data generated by human activity: (i) written language (from a Gutenberg corpus); (ii) an online music website (Last.fm); (iii) Twitter hashtags; (iv) an online collaborative tagging system (Del.icio.us). While Taylor&rsquo;s law observed in the last two datasets agrees with the plain model predictions, we need to introduce a generalization to fully characterize the behaviour of the first two datasets, where temporal correlations are possibly more relevant. We suggest that Taylor&rsquo;s law is a fundamental complement to Zipf&rsquo;s and Heaps&rsquo; laws in unveiling the complex dynamical processes underlying the evolution of systems featuring innovation.
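Taylor&rsquo;s law states that the variance of the fluctuations scales as a power of the mean, var &prop; mean^&beta;. As a generic numerical sketch (not the authors&rsquo; triangular-urn analysis), the exponent can be estimated by a log-log least-squares fit; for plain Poisson fluctuations the fitted exponent should come out close to 1:

```python
import numpy as np

def taylor_exponent(means, variances):
    """Least-squares slope of log(variance) vs log(mean): the Taylor's law exponent."""
    slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
    return slope

rng = np.random.default_rng(1)
rates = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
samples = [rng.poisson(lam, 20000) for lam in rates]
means = np.array([s.mean() for s in samples])
variances = np.array([s.var() for s in samples])
beta = taylor_exponent(means, variances)
assert abs(beta - 1.0) < 0.1   # Poisson: variance ~ mean, i.e. beta ~ 1
```

Non-trivial exponents &beta; &ne; 1, as reported for the human-activity datasets, would show up as a different slope in the same fit.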

Entropy doi: 10.3390/e22050569

Authors: Zufeng Fu Daoyun Xu

Unique k-SAT is the promise version of k-SAT, in which the given formula has either 0 or 1 solution, and it has been proved to be as difficult as general k-SAT. Here, regular (k,s)-CNF is a subclass of CNF in which each clause of the formula has exactly k distinct variables and each variable occurs in exactly s clauses. A d-regular (k,s)-CNF formula is a regular (k,s)-CNF formula in which the absolute value of the difference between the positive and negative occurrences of every variable is at most a nonnegative integer d. The critical function f(k,d) is the maximal value of s such that every d-regular (k,s)-CNF formula is satisfiable, and u(k,d) denotes the minimal value of s such that there exists a uniquely satisfiable d-regular (k,s)-CNF formula. For any k &ge; 3, s &ge; f(k,d) and (s + d)/2 &gt; k &minus; 1, a parsimonious reduction from k-CNF to d-regular (k,s)-CNF is given. We prove that for all k &ge; 3, f(k,d) &le; u(k,d) + 1 and f(k,d + 1) &le; u(k,d). We further show that for s &ge; f(k,d) + 1 and (s + d)/2 &gt; k &minus; 1, there exists a uniquely satisfiable d-regular (k,s + 1)-CNF formula. Moreover, for k &ge; 7, we have that u(k,d) &le; f(k,d) + 1.
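The definitions above are mechanical enough to check by code. A small sketch (not from the paper) that tests whether a formula, given as lists of signed integer literals, is a d-regular (k,s)-CNF formula:

```python
from collections import Counter

def is_d_regular_ks_cnf(formula, k, s, d):
    """Check: every clause has exactly k distinct variables, every variable occurs
    in exactly s clauses, and |#positive - #negative| <= d for every variable.
    A literal is an int: +v or -v for variable v."""
    pos, neg = Counter(), Counter()
    for clause in formula:
        if len(clause) != k or len({abs(lit) for lit in clause}) != k:
            return False
        for lit in clause:
            (pos if lit > 0 else neg)[abs(lit)] += 1
    variables = set(pos) | set(neg)
    return all(pos[v] + neg[v] == s and abs(pos[v] - neg[v]) <= d
               for v in variables)

# each of 3 variables occurs in exactly 2 clauses, once positively and once
# negatively, so the positive/negative imbalance is 0 for every variable
f = [[1, 2, 3], [-1, -2, -3]]
assert is_d_regular_ks_cnf(f, k=3, s=2, d=0)
assert not is_d_regular_ks_cnf(f, k=3, s=3, d=0)   # wrong occurrence count s
```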

Entropy doi: 10.3390/e22050568

Authors: Ian Durham

In this article, I develop a formal model of free will for complex systems based on emergent properties and adaptive selection. The model is based on a process ontology in which a free choice is a singular process that takes a system from one macrostate to another. I quantify the model by introducing a formal measure of the &lsquo;freedom&rsquo; of a singular choice. The &lsquo;free will&rsquo; of a system, then, is emergent from the aggregate freedom of the choice processes carried out by the system. The focus in this model is on the actual choices themselves, viewed in the context of processes. That is, the nature of the system making the choices is not considered. Nevertheless, my model does not necessarily conflict with models that are based on internal properties of the system. Rather, it takes a behavioral approach by focusing on the externalities of the choice process.

Entropy doi: 10.3390/e22050567

Authors: Aqib Ali Salman Qadri Wali Khan Mashwani Wiyada Kumam Poom Kumam Samreen Naeem Atila Goktas Farrukh Jamal Christophe Chesneau Sania Anam Muhammad Sulaiman

The object of this study was to demonstrate the ability of machine learning (ML) methods for the segmentation and classification of diabetic retinopathy (DR). Two-dimensional (2D) retinal fundus (RF) images were used. The DR datasets&mdash;that is, the mild, moderate, non-proliferative, proliferative, and normal human eye ones&mdash;were acquired from 500 patients at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan. Five hundred RF datasets (sized 256 &times; 256) were acquired for each DR stage, for a total of 2500 (500 &times; 5) datasets across the five DR stages. This research introduces a novel clustering-based automated region growing framework. For texture analysis, four types of features&mdash;histogram (H), wavelet (W), co-occurrence matrix (COM), and run-length matrix (RLM)&mdash;were extracted, and various ML classifiers were employed, achieving 77.67%, 80%, 89.87%, and 96.33% classification accuracies, respectively. To improve classification accuracy, a fused hybrid-feature dataset was generated by applying a data fusion approach. From each image, 245 hybrid features (H, W, COM, and RLM) were extracted, and 13 optimized features were selected after applying four different feature selection techniques, namely Fisher, correlation-based feature selection, mutual information, and probability of error plus average correlation. Five ML classifiers, named sequential minimal optimization (SMO), logistic (Lg), multi-layer perceptron (MLP), logistic model tree (LMT), and simple logistic (SLg), were deployed on the selected optimized features (using 10-fold cross-validation), and they showed considerably high classification accuracies of 98.53%, 99%, 99.66%, 99.73%, and 99.73%, respectively.

Entropy doi: 10.3390/e22050566

Authors: Mariusz Matusiak

In this article, some practical software optimization methods are presented for implementations of the fractional-order backward difference, sum, and differintegral operator based on the Gr&uuml;nwald&ndash;Letnikov definition. These numerical algorithms are of great interest for the evaluation of fractional-order differential equations in embedded systems, because the Gr&uuml;nwald&ndash;Letnikov definition, based on the discrete convolution operation, takes a more convenient form than the Caputo and Riemann&ndash;Liouville definitions or Laplace transforms. A well-known difficulty relates to the non-locality of the operator, implying continually increasing numbers of processed samples, which may reach the limits of available memory or lead to exceeding the desired computation time. In the study presented here, several promising software optimization techniques were analyzed and tested in the evaluation of the variable fractional-order backward difference and derivative on two different Arm&reg; Cortex&reg;-M architectures. Reductions in computation times of up to 75% and 87% were achieved compared to the initial implementation, depending on the type of Arm&reg; core.
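The Gr&uuml;nwald&ndash;Letnikov backward difference itself is standard; the paper&rsquo;s embedded-systems optimizations are not reproduced here. A minimal reference implementation using the usual recursive coefficient formula c_0 = 1, c_j = c_{j-1}(1 &minus; (&alpha; + 1)/j):

```python
def gl_coefficients(alpha, n):
    """Recursive Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(samples, alpha, h):
    """Fractional derivative of order alpha at the last sample, with step h,
    as the discrete convolution of the coefficients with past samples."""
    c = gl_coefficients(alpha, len(samples) - 1)
    acc = sum(cj * samples[-1 - j] for j, cj in enumerate(c))
    return acc / h ** alpha

# sanity check: for alpha = 1 this reduces to the backward difference (f[n]-f[n-1])/h
f = [0.0, 1.0, 4.0, 9.0]   # f(t) = t^2 on a unit grid
assert abs(gl_derivative(f, alpha=1.0, h=1.0) - 5.0) < 1e-12
```

The non-locality the abstract mentions is visible here: every new output touches the entire sample history, which is exactly what motivates the memory- and time-saving optimizations studied in the paper.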

Entropy doi: 10.3390/e22050565

Authors: Vyacheslav I. Yukalov

The review is devoted to two important quantities characterizing many-body systems: order indices and the measure of entanglement production. Order indices describe the type of order distinguishing statistical systems. Contrary to order parameters, which characterize systems in the thermodynamic limit and describe long-range order, order indices are applicable to finite systems and classify all types of order, including long-range, mid-range, and short-range order. The measure of entanglement production quantifies the amount of entanglement produced in a many-partite system by a quantum operation. Although the notions of order indices and entanglement production seem quite different, there is an intimate relation between them, which is emphasized in the review.

Entropy doi: 10.3390/e22050564

Authors: Takazumi Matsumoto Jun Tani

It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories.

Entropy doi: 10.3390/e22050563

Authors: Tomohiro Nishiyama Igal Sason

This paper studies integral relations between the relative entropy and the chi-squared divergence, two fundamental divergence measures in information theory and statistics, together with the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of f-divergences. Applications that are studied in this paper refer to lossless compression, the method of types and large deviations, strong data-processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a type of discrete-time Markov chains.
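One well-known member of this family of relations is the bound D(P||Q) &le; log(1 + &chi;&sup2;(P||Q)), which follows from Jensen&rsquo;s inequality; whether it appears in this exact form in the paper is not stated in the abstract. A small numerical check:

```python
import math

def kl(p, q):
    """Relative entropy D(P||Q) in nats, for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chi2(p, q):
    """Chi-squared divergence: sum over (p_i - q_i)^2 / q_i."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.4, 0.4]
# Jensen: E_P[log(p/q)] <= log E_P[p/q] = log(1 + chi^2)
assert kl(p, q) <= math.log(1.0 + chi2(p, q))
assert kl(p, p) == 0.0 and chi2(q, q) == 0.0   # both vanish iff P = Q
```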

Entropy doi: 10.3390/e22050562

Authors: Adeeb Noor Ahmed Barnawi Redhwan Nour Abdullah Assiri Mohamed El-Beltagy

Population models allow for a better understanding of dynamical interactions with the environment and hence can provide a way to understand population changes. They are helpful in studying biological invasions, environmental conservation, and many other applications. These models become more complicated when accounting for stochastic and/or random variations due to different sources. In the current work, a spectral technique is suggested to analyze a stochastic population model with random parameters. The model contains mixed sources of uncertainty: noise and uncertain parameters. The suggested algorithm uses spectral decompositions for both types of randomness. Spectral techniques have the advantage of high rates of convergence. A deterministic system is derived using the statistical properties of the random bases. Classical analytical and/or numerical techniques can be used to analyze the deterministic system and obtain the solution statistics. The technique presented in the current work is applicable to many complex systems with both stochastic and random parameters. It has the advantage of separating the contributions due to different sources of uncertainty. Hence, the sensitivity index of any uncertain parameter can be evaluated. This is a clear advantage compared with other techniques used in the literature.

Entropy doi: 10.3390/e22050561

Authors: Atanu Chatterjee Nicholas Mears Yash Yadati Germano S. Iannacchione

Soft-matter systems, when driven out of equilibrium, often give rise to structures that usually lie in between the macroscopic scale of the material and the microscopic scale of its constituents. In this paper, we review three such systems, the two-dimensional square-lattice Ising model, the Kuramoto model, and the Rayleigh&ndash;B&eacute;nard convection system, which, when driven out of equilibrium, give rise to emergent spatio-temporal order through self-organization. A common feature of these systems is that the entities that self-organize are coupled to one another in some way, either through local interactions or through a continuous medium. Therefore, the general nature of the non-equilibrium fluctuations of the intrinsic variables in these systems is found to follow similar trends as order emerges. Through this paper, we attempt to find connections between these systems, and between systems in general which give rise to emergent order when driven out of equilibrium. This study thus acts as a foundation for modeling a complex system as a two-state system, where the states, order and disorder, can coexist as the system is driven away from equilibrium.
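Of the three systems reviewed, the two-dimensional square-lattice Ising model is the simplest to sketch. The following equilibrium Metropolis simulation (the paper&rsquo;s driven, out-of-equilibrium protocol is not reproduced) illustrates the order/disorder contrast on either side of the critical temperature:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D square-lattice Ising model (J = 1,
    periodic boundaries): n*n random single-spin flip attempts."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nn = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(2)
hot = np.ones((16, 16), dtype=int)
cold = np.ones((16, 16), dtype=int)
for _ in range(100):
    metropolis_sweep(hot, beta=0.1, rng=rng)   # T far above T_c ~ 2.27: disorders
    metropolis_sweep(cold, beta=1.0, rng=rng)  # T far below T_c: order survives
assert abs(hot.mean()) < 0.5 < abs(cold.mean())
```

The magnetization plays the role of the intrinsic variable whose fluctuation statistics change as order emerges.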

Entropy doi: 10.3390/e22050560

Authors: Shrihari Vasudevan

This paper demonstrates a novel approach to training deep neural networks using a Mutual Information (MI)-driven, decaying Learning Rate (LR), Stochastic Gradient Descent (SGD) algorithm. The MI between the output of the neural network and the true outcomes is used to adaptively set the LR for the network in every epoch of the training cycle. This idea is extended to layer-wise setting of the LR, as MI naturally provides a layer-wise performance metric. An LR range test determining the operating LR range is also proposed. Experiments compared this approach with popular alternatives such as gradient-based adaptive LR algorithms like Adam, RMSprop, and LARS. Competitive-to-better accuracy outcomes, obtained in competitive-to-better time, demonstrate the feasibility of the metric and the approach.
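The abstract does not give the exact MI estimator or LR schedule, so the following is a hypothetical sketch only: MI is estimated from the joint histogram of predicted and true labels, and an assumed schedule (`mi_scaled_lr`, not from the paper) decays the LR as MI approaches its ceiling log(number of classes):

```python
import numpy as np

def mutual_information(pred_labels, true_labels, n_classes):
    """Plug-in MI estimate (nats) from the joint histogram of predicted vs true labels."""
    joint = np.zeros((n_classes, n_classes))
    for a, b in zip(pred_labels, true_labels):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])))

def mi_scaled_lr(base_lr, mi, n_classes):
    """Hypothetical schedule: shrink the LR as MI nears its ceiling log(n_classes)."""
    return base_lr * (1.0 - mi / np.log(n_classes))

true = np.array([0, 1, 2, 0, 1, 2] * 100)
perfect = true.copy()                 # predictor matching every label
constant = np.zeros_like(true)        # constant predictor carries no information
hi = mutual_information(perfect, true, 3)
lo = mutual_information(constant, true, 3)
assert lo < 1e-9 < hi                 # MI separates informative from uninformative output
assert mi_scaled_lr(0.1, hi, 3) < mi_scaled_lr(0.1, lo, 3)   # LR decays as MI grows
```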

Entropy doi: 10.3390/e22050559

Authors: Andrei Khrennikov

During recent years, our society has often been exposed to coherent information waves of high amplitude. These are waves of huge social energy. Often they are of a destructive character, a kind of information tsunami. However, they can also carry positive improvements in human society, as waves of decision-making matching rational recommendations of societal institutes. The main distinguishing features of these waves are their high amplitude, coherence (the homogeneous character of the social actions generated by them), and the short time needed for their generation and relaxation. Such waves can be treated as large-scale exhibitions of the bandwagon effect. We show that this socio-psychic phenomenon can be modeled based on the recently developed social laser theory. This theory can be used to model stimulated amplification of coherent social actions. &ldquo;Actions&rdquo; are treated very generally, from mass protests to votes and other collective decisions, such as, e.g., acceptance (often unconscious) of some societal recommendations. In this paper, we concentrate on the theory of laser resonators, physical vs. social. For the latter, we analyze in detail the functioning of Internet-based echo chambers. Their main purpose is to increase the power of the quantum information field as well as its coherence. Of course, the bandwagon effect is well known and well studied in social psychology. However, social laser theory makes it possible to model it using the general formalism of quantum field theory. The paper contains a minimum of mathematics and can be read by researchers working in the psychological, cognitive, social, and political sciences; it might also be interesting for experts in information theory and artificial intelligence.

Entropy doi: 10.3390/e22050558

Authors: Alexander S. Abyzov Jürn W. P. Schmelzer Vladimir M. Fokin Edgar D. Zanotto

Crystal nucleation can be described by a set of kinetic equations that appropriately account for both the thermodynamic and kinetic factors governing this process. The mathematical analysis of this set of equations allows one to formulate analytical expressions for the basic characteristics of nucleation, i.e., the steady-state nucleation rate and the steady-state cluster-size distribution. These two quantities depend on the work of formation, &Delta;G(n) = &minus;n&Delta;&mu; + &gamma;n^(2/3), of crystal clusters of size n and, in particular, on the work of critical cluster formation, &Delta;G(n_c). The first term in the expression for &Delta;G(n) describes changes in the bulk contributions (expressed by the chemical potential difference, &Delta;&mu;) to the Gibbs free energy caused by cluster formation, whereas the second one reflects surface contributions (expressed by the surface tension, &sigma;: &gamma; = &Omega;d_0^2&sigma;, &Omega; = 4&pi;(3/4&pi;)^(2/3), where d_0 is a parameter describing the size of the particles in the liquid undergoing crystallization), n is the number of particles (atoms or molecules) in a crystallite, and n = n_c defines the size of the critical crystallite, corresponding to the maximum (in general, a saddle point) of the Gibbs free energy, G. The work of cluster formation is commonly identified with the difference between the Gibbs free energy of a system containing a cluster with n particles and the homogeneous initial state. For the formation of a &ldquo;cluster&rdquo; of size n = 1, no work is required. However, the commonly used relation for &Delta;G(n) given above leads to a finite value for n = 1. For this reason, a correct determination of the work of cluster formation requires a self-consistency correction, employing instead of &Delta;G(n) an expression of the form &Delta;G&tilde;(n) = &Delta;G(n) &minus; &Delta;G(1).
Such a self-consistency correction is usually omitted, assuming that the inequality &Delta;G(n) ≫ &Delta;G(1) holds. In the present paper, we show that: (i) this inequality is frequently not fulfilled in crystal nucleation processes; (ii) the form and the results of the numerical solution of the set of kinetic equations are not affected by self-consistency corrections; however, (iii) the predictions of the analytical relations for the steady-state nucleation rate and the steady-state cluster-size distribution differ considerably depending on whether such a correction is introduced or not. In particular, neglecting the self-consistency correction overestimates the work of critical cluster formation and leads, consequently, to far too low theoretical values for the steady-state nucleation rates. For the system studied here as a typical example (lithium disilicate, Li2O&middot;2SiO2), the resulting deviations from the correct values may reach 20 orders of magnitude. Consequently, neglecting self-consistency corrections may result in severe errors in the interpretation of experimental data if, as is usually done, the analytical relations for the steady-state nucleation rate or the steady-state cluster-size distribution are employed for their determination.
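The self-consistency correction is simple to state in code. A sketch with illustrative parameter values (not the lithium disilicate data of the paper):

```python
def delta_g(n, dmu, gamma):
    """Work of forming an n-particle cluster: -n*dmu + gamma*n**(2/3)."""
    return -n * dmu + gamma * n ** (2.0 / 3.0)

def delta_g_corrected(n, dmu, gamma):
    """Self-consistent form, which vanishes for a 'cluster' of one particle."""
    return delta_g(n, dmu, gamma) - delta_g(1, dmu, gamma)

dmu, gamma = 1.0, 5.0   # illustrative values in units of kT, chosen arbitrarily
# the corrected work vanishes at n = 1, the uncorrected one stays finite there
assert delta_g_corrected(1, dmu, gamma) == 0.0
assert delta_g(1, dmu, gamma) == 4.0
# setting dG/dn = 0 gives the critical size n_c = (2*gamma/(3*dmu))**3; the two
# forms differ there by exactly delta_g(1), i.e. the barrier is overestimated
nc = (2 * gamma / (3 * dmu)) ** 3
gap = delta_g(nc, dmu, gamma) - delta_g_corrected(nc, dmu, gamma)
assert abs(gap - delta_g(1, dmu, gamma)) < 1e-9
```

Since the steady-state nucleation rate depends exponentially on the barrier, a constant offset of &Delta;G(1) in the barrier is what produces the orders-of-magnitude discrepancies quoted above.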

Entropy doi: 10.3390/e22050557

Authors: Yirui Zhang Konrad Giżyński Anna Maciołek Robert Hołyst

We study a quantity T defined as the excess energy, &Delta;U = U &minus; U_0, stored in non-equilibrium steady states (NESS) over its equilibrium value U_0, divided by the heat flow J_U going out of the system. A recent study suggests that T is minimized in steady states (Phys. Rev. E 99, 042118 (2019)). We evaluate this hypothesis using an ideal gas system with three methods of energy delivery: from a uniformly distributed energy source, from an external heat flow through the surface, and from an external matter flow. By introducing internal constraints into the system, we determine T with and without constraints and find that T is smallest for unconstrained NESS. We find that the internal energy in the studied NESS follows the form U = U_0&middot;f(J_U). In this context, we discuss natural variables for NESS, define the embedded energy (an analog of the Helmholtz free energy for NESS), and provide its interpretation.

Entropy doi: 10.3390/e22050556

Authors: Sergei Koltcov Vera Ignatenko

In practice, to build a machine learning model of big data, one needs to tune model parameters. The process of parameter tuning involves an extremely time-consuming and computationally expensive grid search. However, the theory of statistical physics provides techniques allowing us to optimize this process. The paper shows that a function of the output of topic modeling demonstrates self-similar behavior under variation of the number of clusters. Such behavior allows the use of a renormalization technique. A combination of the renormalization procedure with the R&eacute;nyi entropy approach allows for a quick search for the optimal number of topics. In this paper, the renormalization procedure is developed for probabilistic Latent Semantic Analysis (pLSA), the Latent Dirichlet Allocation model with a variational Expectation&ndash;Maximization algorithm (VLDA), and the Latent Dirichlet Allocation model with a granulated Gibbs sampling procedure (GLDA). The experiments were conducted on two test datasets with a known number of topics in two different languages and on one unlabeled test dataset with an unknown number of topics. The paper shows that the renormalization procedure allows for finding an approximation of the optimal number of topics at least 30 times faster than the grid search, without significant loss of quality.

Entropy doi: 10.3390/e22050555

Authors: Rafał Brociek Agata Chmielowska Damian Słota

This paper presents algorithms for solving inverse problems on models with fractional derivatives. The presented algorithm is based on the Real Ant Colony Optimization algorithm. Examples of the algorithm&rsquo;s application to the inverse heat conduction problem on a model with a fractional derivative of the Caputo type are also presented. Based on those examples, the authors compare the proposed algorithm with the iteration method presented in the paper: Zhang, Z. An undetermined coefficient problem for a fractional diffusion equation. Inverse Probl. 2016, 32.

Entropy doi: 10.3390/e22050554

Authors: Junbeom Kim Seok-Hwan Park

Despite the potential benefits of reducing system costs and improving spectral efficiency, it is challenging to implement cloud radio access network (C-RAN) systems due to the performance degradation caused by finite-capacity fronthaul links and inter-cluster interference signals. This work studies inter-cluster cooperative reception for the uplink of a two-cluster C-RAN system, where two nearby clusters interfere with each other on the uplink access channel. The radio units (RUs) of the two clusters forward quantized and compressed versions of the uplink received signals to the serving baseband processing units (BBUs) via finite-capacity fronthaul links. The BBUs of the clusters exchange the received fronthaul signals via finite-capacity backhaul links with the purpose of mitigating inter-cluster interference signals. Optimization of the conventional cooperation scheme, in which each RU produces a single quantized signal, requires an exhaustive discrete search whose size grows exponentially with the number of RUs. To resolve this issue, we propose an improved inter-BBU, or inter-cluster, cooperation strategy based on layered compression, in which each RU produces two descriptions, of which only one is forwarded to the neighboring BBU over the backhaul links. We discuss the optimization of the proposed inter-cluster cooperation scheme and validate its performance gains via numerical results.

Entropy doi: 10.3390/e22050553

Authors: Roumen Borisov Zlatinka I. Dimitrova Nikolay K. Vitanov

We study the flow of a substance in a channel of a network consisting of nodes connected by edges that form pathways for the motion of the substance. The channel can have an arbitrary number of arms, and each arm can contain an arbitrary number of nodes. The flow of the substance is modeled by a system of ordinary differential equations. We first discuss a model for a channel whose arms each contain an infinite number of nodes. For the stationary regime of motion of the substance in such a channel, we obtain probability distributions connected to the distribution of the substance in any of the channel&rsquo;s arms and in the entire channel. The obtained distributions have not been discussed by other authors and can be connected to the Waring distribution. Next, we discuss a model for the flow of a substance in a channel whose arms each contain a finite number of nodes. We obtain probability distributions connected to the distribution of the substance in the nodes of the channel for the stationary regime of the flow. These distributions are also new, and we calculate the corresponding information measure and the Shannon information measure for the studied kind of flow of substance.

Entropy doi: 10.3390/e22050552

Authors: Parr Sajid Friston

The segregation of neural processing into distinct streams has been interpreted by some as evidence in favour of a modular view of brain function. This implies a set of specialised &lsquo;modules&rsquo;, each of which performs a specific kind of computation in isolation of other brain systems, before sharing the result of this operation with other modules. In light of a modern understanding of stochastic non-equilibrium systems, like the brain, a simpler and more parsimonious explanation presents itself. Formulating the evolution of a non-equilibrium steady state system in terms of its density dynamics reveals that such systems appear on average to perform a gradient ascent on their steady state density. If this steady state implies a sufficiently sparse conditional independence structure, this endorses a mean-field dynamical formulation. This decomposes the density over all states in a system into the product of marginal probabilities for those states. This factorisation lends the system a modular appearance, in the sense that we can interpret the dynamics of each factor independently. However, the argument here is that it is factorisation, as opposed to modularisation, that gives rise to the functional anatomy of the brain or, indeed, any sentient system. In the following, we briefly overview mean-field theory and its applications to stochastic dynamical systems. We then unpack the consequences of this factorisation through simple numerical simulations and highlight the implications for neuronal message passing and the computational architecture of sentience.

Entropy doi: 10.3390/e22050551

Authors: Michael Silberstein William Stuckey

Herein we are not interested in merely using dynamical systems theory, graph theory, information theory, etc., to model the relationship between brain dynamics and networks, and various states and degrees of conscious processes. We are interested in the question of how phenomenal conscious experience and fundamental physics are most deeply related. Any attempt to mathematically and formally model conscious experience and its relationship to physics must begin with some metaphysical assumption in mind about the nature of conscious experience, the nature of matter and the nature of the relationship between them. These days the most prominent metaphysical fixed points are strong emergence or some variant of panpsychism. In this paper we will detail another distinct metaphysical starting point known as neutral monism. In particular, we will focus on a variant of the neutral monism of William James and Bertrand Russell. Rather than starting with physics as fundamental, as both strong emergence and panpsychism do in their own way, our goal is to suggest how one might derive fundamental physics from neutral monism. Thus, starting with two axioms grounded in our characterization of neutral monism, we will sketch out a derivation of and explanation for some key features of relativity and quantum mechanics that suggest a unity between those two theories that is generally unappreciated. Our mode of explanation throughout will be of the principle as opposed to constructive variety in something like Einstein&rsquo;s sense of those terms. We will argue throughout that a bias towards property dualism and a bias toward reductive dynamical and constructive explanation lead to the hard problem and the explanatory gap in consciousness studies, and lead to serious unresolved problems in fundamental physics, such as the measurement problem and the mystery of entanglement in quantum mechanics and lack of progress in producing an empirically well-grounded theory of quantum gravity. 
We hope to show that, given our take on neutral monism and all that follows from it, the aforementioned problems can be satisfactorily resolved, leaving us with a far more intuitive and commonsense model of the relationship between conscious experience and physics.

Entropy doi: 10.3390/e22050550

Authors: Jinn-Liang Liu Bob Eisenberg

We have developed a molecular mean-field theory&mdash;fourth-order Poisson&ndash;Nernst&ndash;Planck&ndash;Bikerman theory&mdash;for modeling ionic and water flows in biological ion channels by treating ions and water molecules of any volume and shape with interstitial voids, polarization of water, and ion-ion and ion-water correlations. The theory can also be used to study thermodynamic and electrokinetic properties of electrolyte solutions in batteries, fuel cells, nanopores, porous media including cement, geothermal brines, the oceanic system, etc. The theory can compute electric and steric energies from all atoms in a protein and all ions and water molecules in a channel pore while keeping electrolyte solutions in the extra- and intracellular baths as a continuum dielectric medium with complex properties that mimic experimental data. The theory has been verified with experiments and molecular dynamics data from the gramicidin A channel, L-type calcium channel, potassium channel, and sodium/calcium exchanger with real structures from the Protein Data Bank. It was also verified with the experimental or Monte Carlo data of electric double-layer differential capacitance and ion activities in aqueous electrolyte solutions. We give an in-depth review of the literature about the most novel properties of the theory, namely Fermi distributions of water and ions as classical particles with excluded volumes and dynamic correlations that depend on salt concentration, composition, temperature, pressure, far-field boundary conditions etc. in a complex and complicated way as reported in a wide range of experiments. The dynamic correlations are self-consistent output functions from a fourth-order differential operator that describes ion-ion and ion-water correlations, the dielectric response (permittivity) of ionic solutions, and the polarization of water molecules with a single correlation length parameter.

Entropy doi: 10.3390/e22050549

Authors: Faisal Ahmed Khan Tariq Masood Ali Khan Ali Najah Ahmed Haitham Abdulmohsin Afan Mohsen Sherif Ahmed Sefelnasr Ahmed El-Shafie

In this study, an analysis of extreme sea levels was carried out using 10 years (2007&ndash;2016) of hourly tide gauge data from the Karachi port station along the Pakistan coast. Observations revealed that the magnitudes of the tides usually exceeded the storm surges at this station. The main observation for this duration, and the subsequent analysis, showed that in June 2007 the tropical cyclone &ldquo;Yemyin&rdquo; hit the Pakistan coast. The joint probability method (JPM) and the annual maximum method (AMM) were used for statistical analysis to determine the return periods of different extreme sea levels. According to the achieved results, the AMM and JPM methods were compatible with each other for the Karachi coast and remained well within the range of 95% confidence. For the JPM method, the highest astronomical tide (HAT) of the Karachi coast was considered as the threshold, and sea levels above it were considered extreme sea levels. The 10 annual observed sea level maxima, in the recent past, showed an increasing trend for extreme sea levels. In the study period, increment rates of 3.6 mm/year and 2.1 mm/year were observed for the mean sea level and the extreme sea level, respectively, along the Karachi coast. Tidal analysis of the Karachi tide gauge data showed little dependency of the extreme sea levels on the non-tidal residuals. By applying the Merrifield criteria of the mean annual maximum water level ratio, it was found that the Karachi coast was tidally dominated and the non-tidal residual contribution was just 10%. The examination of the highest water level event (13 June 2014) during the study period further favored tidal dominance as compared to the non-tidal component along the Karachi coast.
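A sketch of the kind of tidal-dominance diagnostic described above, on synthetic hourly data (the exact form of the Merrifield criterion is not reproduced here; this simply evaluates the non-tidal residual's share of the observed annual maximum water level, the quantity the 10% figure refers to):

```python
import numpy as np

def nontidal_fraction(sea_level, tide):
    """Share of the annual maximum water level contributed by the non-tidal
    residual, evaluated at the hour of the observed annual maximum."""
    residual = sea_level - tide
    i = int(np.argmax(sea_level))
    return residual[i] / sea_level[i]

t = np.arange(24 * 365)                        # one year of hourly samples
tide = 2.0 * np.cos(2 * np.pi * t / 12.42)     # dominant semidiurnal constituent
surge = np.zeros_like(tide)
surge[4000:4030] = 0.2                         # a small 30-hour surge event
level = 3.0 + tide + surge                     # datum offset keeps levels positive
frac = nontidal_fraction(level, 3.0 + tide)
assert frac < 0.1   # tidally dominated: the residual contributes under 10%
```

All constituent amplitudes and the surge magnitude here are invented for illustration; real tide-gauge analysis would use harmonically fitted tidal predictions in place of the cosine.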

]]>Entropy doi: 10.3390/e22050547

Authors: Adnan Chen Yuan Kisi El-Shafie Kuriqi Ikram

The study investigates the potential of two new machine learning methods, least-square support vector regression with a gravitational search algorithm (LSSVR-GSA) and the dynamic evolving neural-fuzzy inference system (DENFIS), for modeling reference evapotranspiration (ETo) using limited data. The results of the new methods are compared with the M5 model tree (M5RT) approach. Previous values of temperature data and extraterrestrial radiation information obtained from three stations in China are used as inputs to the models. The estimation accuracy of the models is measured by three statistics: root mean square error, mean absolute error, and determination coefficient. According to the results, the temperature- or extraterrestrial radiation-based LSSVR-GSA models outperform the DENFIS and M5RT models in estimating monthly ETo. However, in some cases, a slight difference was found between the LSSVR-GSA and DENFIS methods. The results indicate that better prediction accuracy may be obtained using only extraterrestrial radiation information for all three methods. The prediction accuracy of the models is not generally improved by including periodicity information in the inputs. Using optimum air temperature and extraterrestrial radiation inputs together generally does not increase the accuracy of the applied methods in the estimation of monthly ETo.

]]>Entropy doi: 10.3390/e22050546

Authors: Hu Hu Jiang Peng

The timing of an initial public offering (IPO) is a complex dynamic game in the stock market. Based on a dynamic game model with the real option, this paper investigates the relationship between pricing constraint and the complexity of IPO timing in the stock market, and further discusses its mechanism. The model shows that the IPO pricing constraint reduces the exercise value of the real option of IPO timing, thus restricting the enterprise's independent timing and promoting an earlier listing. The IPO price limit has a stronger effect on high-trait enterprises, such as technology enterprises. Lowering the upper limit of the pricing constraint increases the probability that enterprises are bound by this restriction during IPO. A high discount cost and stock-market volatility are also reasons for early listing. This paper suggests a theoretical explanation for the mechanism of the pricing constraint on IPO timing in the complex market environment, which extends IPO timing theory and offers an interpretation of the IPO behavior of Chinese enterprises. These findings provide new insights into the complexity of IPOs in relation to the Chinese stock market.

]]>Entropy doi: 10.3390/e22050548

Authors: Thang Manh Hoang Safwan El Assad

Most chaos-based cryptosystems utilize the stationary dynamics of chaos for permutation and diffusion, and many of those have been successfully attacked. In this paper, novel models of image permutation and diffusion are proposed, in which the chaotic map is perturbed at bit level on its state variables, on its control parameters, or on both. The amounts of perturbation are initially the coordinates of pixels in the permutation and the value of the ciphered word in the diffusion, and subsequently a value extracted from the state variables in every iteration. Under the persistent perturbation, the dynamics of the chaotic map are nonstationary and dependent on the image content. The simulation results and analyses demonstrate the effectiveness of the proposed models by means of the good statistical properties of the transformed image obtained after only a single round.
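The perturbation idea can be illustrated with a minimal sketch: a logistic map whose state is quantized, XORed at bit level with content-dependent values, and folded back, so the keystream depends on the processed image. The map choice, 32-bit word size, and mixing below are illustrative, not the paper's exact models.

```python
def perturbed_logistic_stream(seed_x, length, perturbations, r=3.99):
    """Sketch of bit-level perturbation of a chaotic map: each iteration,
    the logistic-map state is quantized to 32 bits, XORed with a
    content-dependent value (e.g., pixel coordinates or a ciphered word),
    and folded back into the state, so the dynamics become nonstationary
    and depend on the image content."""
    x = seed_x
    stream = []
    for k in range(length):
        x = r * x * (1.0 - x)                       # chaotic iteration
        bits = int(x * (1 << 32)) & 0xFFFFFFFF      # quantize the state
        bits ^= perturbations[k % len(perturbations)] & 0xFFFFFFFF  # bit-level perturbation
        x = bits / float(1 << 32)                   # fold back into the state
        stream.append(bits & 0xFF)                  # emit one keystream byte
    return stream

# Two slightly different perturbation sequences (two "images") diverge.
s1 = perturbed_logistic_stream(0.37, 16, [12, 345, 67])
s2 = perturbed_logistic_stream(0.37, 16, [12, 345, 68])
print(s1 != s2)   # True: the keystream is content-dependent
```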

]]>Entropy doi: 10.3390/e22050545

Authors: Gostkowski Gajowniczek

Due to various regulations (e.g., the Basel III Accord), banks need to hold a specified amount of capital to reduce the impact of their insolvency. This equity can be calculated using, e.g., the Internal Rating Approach, enabling institutions to develop their own statistical models. In this regard, one of the most important parameters is the loss given default, whose correct estimation may lead to a healthier and less risky allocation of the capital. Unfortunately, since the loss given default distribution is bimodal, applying modeling methods (e.g., ordinary least squares or regression trees) that aim at predicting the mean value is not enough. Bimodality means that a distribution has two modes and a large proportion of observations far from the middle of the distribution; therefore, more advanced methods are required. To this end, to model the entire loss given default distribution, in this article we present the weighted quantile Regression Forest algorithm, which is an ensemble technique. We evaluate our methodology on a dataset collected by one of the biggest Polish banks. Through our research, we show that weighted quantile Regression Forests outperform &ldquo;single&rdquo; state-of-the-art models in terms of accuracy and stability.
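The quantile idea at the core of the method can be illustrated with a weighted quantile over an ensemble's per-tree predictions: instead of averaging, which hides the bimodality of loss given default (LGD), any quantile of the predictive distribution can be read off. The per-tree predictions and weights below are invented for illustration; the full algorithm grows and weights the trees as described in the paper.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Weighted q-quantile: the smallest value whose cumulative weight
    share reaches q. This is how a forest's per-tree predictions can be
    turned into an entire predictive distribution, not just a mean."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cw = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cw, q)]

# Illustrative per-tree LGD predictions for one exposure: a bimodal ensemble
# (some trees near full recovery, some near total loss), equal tree weights.
tree_preds = np.array([0.02, 0.05, 0.04, 0.95, 0.90, 0.97, 0.03, 0.92])
weights = np.ones_like(tree_preds)

print(round(float(tree_preds.mean()), 3))            # the mean hides bimodality
print(weighted_quantile(tree_preds, weights, 0.25))  # lower mode
print(weighted_quantile(tree_preds, weights, 0.75))  # upper mode
```

The mean sits between the modes, in a region where almost no tree predicts; the quantiles recover both modes.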

]]>Entropy doi: 10.3390/e22050544

Authors: Emre Ozfatura Sennur Ulukus Deniz Gündüz

When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations. Imposing such a limitation results in two drawbacks: over-computation due to inaccurate prediction of the straggling behavior, and under-utilization due to discarding partial computations carried out by stragglers. To overcome these drawbacks, we consider multi-message communication (MMC), allowing multiple computations to be conveyed from each worker per iteration, and propose novel straggler avoidance techniques for both coded computation and coded communication with MMC. We analyze how the proposed designs can be employed efficiently to seek a balance between computation and communication latency. Furthermore, we identify the advantages and disadvantages of these designs in different settings through extensive simulations, both model-based and via real implementation on Amazon EC2 servers, and demonstrate that the proposed schemes with MMC can improve upon existing straggler avoidance schemes.
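The under-utilization drawback can be made concrete with a toy model (all names and numbers here are illustrative, not the paper's coded schemes): each worker computes a list of partial gradients sequentially, and the PS aggregates either every partial result that finishes within a time budget (MMC) or, in the one-message scheme, only the contributions of workers that finish everything.

```python
import numpy as np

def aggregate(grads, speeds, budget, multi_message):
    """Toy model of multi-message communication (MMC): speed = partial
    computations per unit time. With MMC the PS uses every partial result
    finishing within the budget; with one message per worker, a straggler's
    partial work is discarded entirely."""
    total = np.zeros_like(grads[0][0])
    used = 0
    for worker_grads, s in zip(grads, speeds):
        done = min(len(worker_grads), int(budget * s))  # computations finished in time
        if not multi_message and done < len(worker_grads):
            done = 0                                    # single message: all-or-nothing
        for g in worker_grads[:done]:
            total += g
            used += 1
    return total, used

grads = [[np.ones(3)] * 4 for _ in range(3)]   # 3 workers x 4 partial gradients each
speeds = [4.0, 4.0, 2.5]                       # the third worker straggles
_, used_mmc = aggregate(grads, speeds, budget=1.0, multi_message=True)
_, used_single = aggregate(grads, speeds, budget=1.0, multi_message=False)
print(used_mmc, used_single)   # MMC salvages the straggler's partial work
```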

]]>Entropy doi: 10.3390/e22050542

Authors: Camargo

Novel measures of symbol dominance (dC1 and dC2), symbol diversity (DC1 = N (1 &minus; dC1) and DC2 = N (1 &minus; dC2)), and information entropy (HC1 = log2 DC1 and HC2 = log2 DC2) are derived from Lorenz-consistent statistics that I had previously proposed to quantify dominance and diversity in ecology. Here, dC1 refers to the average absolute difference between the relative abundances of dominant and subordinate symbols, with its value being equivalent to the maximum vertical distance from the Lorenz curve to the 45-degree line of equiprobability; dC2 refers to the average absolute difference between all pairs of relative symbol abundances, with its value being equivalent to twice the area between the Lorenz curve and the 45-degree line of equiprobability; N is the number of different symbols or maximum expected diversity. These Lorenz-consistent statistics are compared with statistics based on Shannon&rsquo;s entropy and R&eacute;nyi&rsquo;s second-order entropy to show that the former have better mathematical behavior than the latter. The use of dC1, DC1, and HC1 is particularly recommended, as only changes in the allocation of relative abundance between dominant (pd &gt; 1/N) and subordinate (ps &lt; 1/N) symbols are of real relevance for probability distributions to achieve the reference distribution (pi = 1/N) or to deviate from it.
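The geometric characterizations in the abstract translate directly into code: build the Lorenz curve from the sorted shares, take the maximum gap to the 45-degree line for dC1 and twice the enclosed area for dC2, then DC = N(1 − d) and HC = log2(DC). This sketch follows the abstract's descriptions, not the original paper's implementation.

```python
import numpy as np

def lorenz_indices(p):
    """Lorenz-consistent indices as described in the abstract: dC1 is the
    maximum vertical distance between the Lorenz curve and the 45-degree
    equiprobability line; dC2 is twice the area between them; diversity
    DC = N(1 - d); entropy HC = log2(DC)."""
    p = np.sort(np.asarray(p, dtype=float))           # shares, ascending
    N = p.size
    cum = np.concatenate(([0.0], np.cumsum(p)))       # Lorenz curve
    eq = np.linspace(0.0, 1.0, N + 1)                 # equiprobability line
    gap = eq - cum
    dC1 = float(gap.max())
    dC2 = 2.0 * float(np.sum((gap[1:] + gap[:-1]) / 2.0) / N)  # trapezoid rule
    DC1, DC2 = N * (1.0 - dC1), N * (1.0 - dC2)
    return dC1, dC2, float(np.log2(DC1)), float(np.log2(DC2))

print(lorenz_indices([0.25, 0.25, 0.25, 0.25]))  # equiprobable: d = 0, HC = log2 N
print(lorenz_indices([0.7, 0.1, 0.1, 0.1]))      # one dominant symbol
```

For the equiprobable reference distribution both d indices vanish and both entropies reach log2 N, as the abstract's definitions require.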

]]>Entropy doi: 10.3390/e22050543

Authors: Konrad Furmańczyk Wojciech Rejchel

In this paper, we consider prediction and variable selection in misspecified binary classification models under the high-dimensional scenario. We focus on two approaches to classification, which are computationally efficient but lead to model misspecification. The first is to apply penalized logistic regression to classification data that possibly do not follow the logistic model. The second method is even more radical: we simply treat the class labels of objects as if they were numbers and apply penalized linear regression. In this paper, we investigate these two approaches thoroughly and provide conditions that guarantee that they are successful in prediction and variable selection. Our results hold even if the number of predictors is much larger than the sample size. The paper concludes with experimental results.

]]>Entropy doi: 10.3390/e22050541

Authors: Georgios Nicolaou George Livadiotis

The velocities of space plasma particles often follow kappa distribution functions, which have characteristic high-energy tails. The tails of these distributions are associated with low particle flux and, therefore, it is challenging to resolve them precisely in plasma measurements. On the other hand, the accurate determination of kappa distribution functions within a broad range of energies is crucial for the understanding of physical mechanisms. Standard analyses of plasma observations determine the plasma bulk parameters from the statistical moments of the underlying distribution. It is important, however, to also quantify the uncertainties of the derived plasma bulk parameters, which determine the confidence level of scientific conclusions. We investigate the determination of the plasma bulk parameters from observations by an ideal electrostatic analyzer. We derive simple formulas to estimate the statistical uncertainties of the calculated bulk parameters. We then use the forward modelling method to simulate plasma observations by a typical top-hat electrostatic analyzer. We analyze the simulated observations in order to derive the plasma bulk parameters and their uncertainties. Our simulations validate our simplified formulas. We further examine the statistical errors of the plasma bulk parameters for several shapes of the plasma velocity distribution function.

]]>Entropy doi: 10.3390/e22050540

Authors: Qiaohong Hao Lijing Ma Mateu Sbert Miquel Feixas Jiawan Zhang

This paper uses quantitative eye tracking indicators to analyze the relationship between images of paintings and human viewing. First, we build an information channel, the gaze channel, from eye tracking fixation sequences over areas of interest (AOIs). Although this channel can be interpreted as a generalization of a first-order Markov chain, we show that the gaze channel is fully independent of this interpretation, and stands even when first-order Markov chain modeling would no longer fit. The entropy of the equilibrium distribution and the conditional entropy of a Markov chain are extended with additional information-theoretic measures, such as joint entropy, mutual information, and the conditional entropy of each area of interest. Then, the gaze information channel is applied to analyze a subset of Van Gogh paintings. Van Gogh artworks, classified by art critics into several periods, have been studied under computational aesthetics measures, which include the use of Kolmogorov complexity and permutation entropy. The gaze information channel paradigm allows the information-theoretic measures to analyze both individual gaze behavior and clustered behavior from observers and paintings. Finally, we show that there is a clear correlation between the gaze information channel quantities that come from direct human observation, and the computational aesthetics measures that do not rely on any human observation at all.
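The channel quantities named above are standard and easy to compute from a matrix of fixation transitions between AOIs: normalize to a joint distribution, take the marginals, and combine entropies. The transition counts below are hypothetical, not data from the Van Gogh study.

```python
import numpy as np

def gaze_channel_measures(counts):
    """From a matrix of fixation-transition counts between AOIs, compute
    the marginal entropies, the joint entropy, and the mutual information
    I(X;Y) = H(X) + H(Y) - H(X,Y) of the gaze channel (all in bits)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                         # joint distribution p(i, j)
    px, py = p.sum(axis=1), p.sum(axis=0)   # marginals
    H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
    Hx, Hy, Hxy = H(px), H(py), H(p)
    return Hx, Hy, Hxy, Hx + Hy - Hxy       # last term is I(X;Y)

# Hypothetical transition counts between three AOIs of a painting.
counts = [[10, 2, 1],
          [3, 8, 2],
          [1, 3, 9]]
Hx, Hy, Hxy, I = gaze_channel_measures(counts)
print(round(I, 3))
```

As a sanity check, a transition matrix that factorizes into its marginals (statistically independent fixations) gives zero mutual information.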

]]>Entropy doi: 10.3390/e22050539

Authors: Jin Yi Shiqiang Zhang Yueqi Cao Erchuan Zhang Huafei Sun

Shape registration, finding the correct alignment of two sets of data, plays a significant role in computer vision tasks such as object recognition and image analysis. The iterative closest point (ICP) algorithm is one of the best-known and most widely used algorithms in this area. The main purpose of this paper is to incorporate ICP with the fast-convergent extended Hamiltonian learning (EHL), the so-called EHL-ICP algorithm, to perform planar and spatial rigid shape registration. By treating the registration error as the potential for the extended Hamiltonian system, rigid shape registration is modelled as an optimization problem on the special Euclidean group S E ( n ) ( n = 2 , 3 ) . Our method is robust to initial values and parameters. Compared with some state-of-the-art methods, our approach shows better efficiency and accuracy in simulation experiments.
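For context, the core step that all ICP-style methods share (the paper's EHL-ICP minimizes the registration error differently, via extended Hamiltonian learning on SE(n)) is: given matched point sets, find the rotation R and translation t minimizing the squared registration error. The classical closed-form solution uses the SVD (Kabsch method):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Classical SVD (Kabsch) solution: given matched point sets P and Q
    (rows are points), find R, t on SE(n) minimizing ||R p_i + t - q_i||^2.
    This is the standard ICP alignment step, shown here for reference only."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known planar rotation and translation from matched points.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
Q = P @ R_true.T + np.array([0.5, -0.2])
R, t = best_rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q))   # True: exact recovery on noiseless data
```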

]]>Entropy doi: 10.3390/e22050537

Authors: Ian D. Jordan Il Memming Park

Brain dynamics can exhibit narrow-band nonlinear oscillations and multistability. For a subset of disorders of consciousness and motor control, we hypothesized that some symptoms originate from the inability to spontaneously transition from one attractor to another. Using external perturbations, such as electrical pulses delivered by deep brain stimulation devices, it may be possible to induce a transition out of the pathological attractors. However, the induction of such a transition may be non-trivial, rendering current open-loop stimulation strategies insufficient. In order to develop next-generation neural stimulators that can intelligently learn to induce attractor transitions, we require a platform to test the efficacy of such systems. To this end, we designed an analog circuit as a model for the multistable brain dynamics. The circuit oscillates stably on either of two limit cycles, as an instantiation of a 3-dimensional continuous-time gated recurrent neural network. To discourage simple perturbation strategies, such as constant or random stimulation patterns, from easily inducing transitions between the stable limit cycles, we designed a state-dependent nonlinear circuit interface for external perturbation. We demonstrate the existence of nontrivial solutions to the transition problem in our circuit implementation.

]]>Entropy doi: 10.3390/e22050538

Authors: Sangchul Oh Abdelkader Baggag Hyunchul Nha

A restricted Boltzmann machine is a generative probabilistic graphical network. The probability of finding the network in a certain configuration is given by the Boltzmann distribution. Given training data, learning is done by optimizing the parameters of the energy function of the network. In this paper, we analyze the training process of the restricted Boltzmann machine in the context of statistical physics. As an illustration, for small bars-and-stripes patterns, we calculate thermodynamic quantities such as the entropy, free energy, and internal energy as a function of the training epoch. We demonstrate the growth of the correlation between the visible and hidden layers via the subadditivity of entropies as the training proceeds. Using Monte Carlo simulation of the trajectories of the visible and hidden vectors in the configuration space, we also calculate the distribution of the work done on the restricted Boltzmann machine by switching the parameters of the energy function. We discuss the Jarzynski equality, which connects the path average of the exponential function of the work and the difference in free energies before and after training.
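For a machine small enough to enumerate, the thermodynamic quantities mentioned above can be computed exactly from the standard RBM energy E(v, h) = −aᵀv − bᵀh − vᵀWh. The sizes and random weights below are illustrative, not the paper's bars-and-stripes setup.

```python
import numpy as np
from itertools import product

def rbm_thermodynamics(W, a, b):
    """Exact thermodynamics of a tiny RBM by enumerating all binary (v, h)
    configurations: Boltzmann probabilities p = exp(-E)/Z, free energy
    F = -ln Z, internal energy U = <E>, entropy S = U - F (k_B = T = 1)."""
    nv, nh = W.shape
    E = []
    for s in product([0, 1], repeat=nv + nh):
        v, h = np.array(s[:nv]), np.array(s[nv:])
        E.append(-(a @ v) - (b @ h) - v @ W @ h)
    E = np.array(E, dtype=float)
    Z = np.sum(np.exp(-E))
    p = np.exp(-E) / Z
    F = -np.log(Z)
    U = float(p @ E)
    return F, U, U - F      # S = U - F

# Illustrative 2x2 RBM with small random weights: nearly uniform over 16 states.
rng = np.random.default_rng(0)
F, U, S = rbm_thermodynamics(0.1 * rng.standard_normal((2, 2)), np.zeros(2), np.zeros(2))
print(round(S, 3))   # close to ln(16), the maximum entropy of 16 states
```

Tracking these quantities over training epochs, as the paper does, amounts to re-running this computation with the current parameters.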

]]>Entropy doi: 10.3390/e22050536

Authors: Thomas Parr

In recent years, the &ldquo;planning as inference&rdquo; paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about &ldquo;how I am going to act&rdquo;. This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine the influence of perturbations of the nervous system&mdash;which include pathological insults or optogenetic manipulations&mdash;to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to).

]]>Entropy doi: 10.3390/e22050535

Authors: Bowei Shan Yong Fang

This paper proposes a deep convolutional neural network model with an encoder-decoder architecture to extract road networks from satellite images. We employ ResNet-18 and the Atrous Spatial Pyramid Pooling technique to trade off between extraction precision and running time. A modified cross entropy loss function is proposed to train our deep model. A PointRend algorithm is used to recover smooth, clear and sharp road boundaries. The augmented DeepGlobe dataset is used to train our deep model, and the asynchronous training method is applied to accelerate the training process. Five satellite images covering Xiaomu village are taken as input to evaluate our model. The proposed E-Road model has fewer parameters and a shorter training time. The experiments show that E-Road outperforms other state-of-the-art deep models with 5.84% to 59.09% improvement, and can give accurate predictions for images with complex environments.

]]>Entropy doi: 10.3390/e22050534

Authors: Manuel De la Sen Raul Nistal Asier Ibeas Aitor J. Garrido

This paper studies the representation of a general epidemic model by means of a first-order differential equation with a time-varying log-normal type coefficient. Then the generalization of the first-order differential system to epidemic models with more subpopulations is focused on by introducing the inter-subpopulation dynamics couplings and the control intervention information through the mentioned time-varying coefficient which drives the basic differential equation model. Control intervention along the transient of the infection is considered a relevant tool to fight more efficiently against a potentially exploding initial transmission. The study is based on the fact that the disease-free and endemic equilibrium points and their stability properties depend on the concrete parameterization, while they admit a certain design monitoring through the choice of the control and treatment gains and the use of feedback information in the corresponding control interventions. Therefore, special attention is paid to the evolution transients of the infection curve, rather than to the equilibrium points, in terms of the time instant of its first relative maximum and its preceding inflection time instant. Such relevant time instants are evaluated via the calculation of an &ldquo;ad hoc&rdquo; Shannon&rsquo;s entropy. Analytical and numerical examples are included in order to illustrate the analysis and its conclusions.

]]>Entropy doi: 10.3390/e22050533

Authors: Qin Zhao Chenguang Hou Changjian Liu Peng Zhang Ruifeng Xu

Quantum-inspired language models have been introduced to Information Retrieval due to their transparency and interpretability. While exciting progress has been made, current studies mainly investigate the relationship between density matrices of different sentence subspaces of a semantic Hilbert space. The Hilbert space as a whole, which has a unique density matrix, remains largely unexplored. In this paper, we propose a novel Quantum Expectation Value based Language Model (QEV-LM). A unique shared density matrix is constructed for the Semantic Hilbert Space. Words and sentences are viewed as different observables in this quantum model. Under this background, a matching score describing the similarity between a question-answer pair is naturally explained as the quantum expectation value of a joint question-answer observable. In addition to its theoretical soundness, experimental results on the TREC-QA and WIKIQA datasets demonstrate the computational efficiency of our proposed model, with excellent performance and low time consumption.
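The central quantity named in the abstract is the quantum expectation value ⟨A⟩ = Tr(ρA) of an observable A in state ρ. A toy numeric check, with a made-up 2-dimensional "semantic space" standing in for the shared density matrix and a joint question-answer observable:

```python
import numpy as np

def expectation_value(rho, A):
    """Quantum expectation value <A> = Tr(rho A). In the QEV-LM picture,
    rho would be the shared semantic-space density matrix and A a joint
    question-answer observable; both matrices here are toy stand-ins."""
    return float(np.real(np.trace(rho @ A)))

rho = np.array([[0.7, 0.2],    # a valid density matrix: Hermitian,
                [0.2, 0.3]])   # unit trace, positive semidefinite
A = np.array([[1.0, 0.5],      # a Hermitian observable
              [0.5, -1.0]])
print(expectation_value(rho, A))   # matching-score analogue
```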

]]>Entropy doi: 10.3390/e22050532

Authors: Vitonofrio Crismale Francesco Fidaleo Maria Elena Griseta

In order to manage spreadability for quantum stochastic processes, we study in detail the structure of the involved monoids acting on the index-set of all integers Z : the monoid generated by the left and right hand-side partial shifts, the monoid of all strictly increasing maps whose range has finite complement, and finally the collection of all strictly increasing maps of Z . We show that these three monoids are strictly ordered, and that the second-named one is the semidirect product between the first and the action of Z generated by the one-step shift. Even if the definition of a spreadable stochastic process is provided in terms of the invariance of the finite joint distributions under the natural action of the last monoid on the indices, we see that spreadability can be directly stated in terms of invariance with respect to the action of the first monoid. Concerning the stochastic processes involving the concrete boolean C &lowast; -algebra generated by the annihilators acting on the boolean Fock space (i.e., the concrete C &lowast; -algebra satisfying the boolean commutation relations), we study their spreadability directly in terms of the invariance under the monoid generated by all strictly increasing maps whose range has finite complement because, for this case, such an investigation appears more direct and manageable. Finally, we present the version of the Ryll&ndash;Nardzewski theorem for the boolean case, establishing that spreadable, exchangeable and stationary stochastic processes coincide, and describing their common structure.

]]>Entropy doi: 10.3390/e22050531

Authors: Lee Guo Ravikumar Tolkacheva

Paroxysmal atrial fibrillation (Paro. AF) is challenging to identify at the right moment. This disease is often undiagnosed using currently existing methods. Nonlinear analysis is gaining importance due to its capability to provide more insight into complex heart dynamics. The aim of this study is to use several recently developed nonlinear techniques to discriminate persistent AF (Pers. AF) from normal sinus rhythm (NSR), and more importantly, Paro. AF from NSR, using short-term single-lead electrocardiogram (ECG) signals. Specifically, we adapted and modified the time-delayed embedding method to minimize incorrect embedding parameter selection and to support proper reconstruction of the phase plots of NSR and AF heart dynamics, from MIT-BIH databases. We also examine information-based methods, such as multiscale entropy (MSE) and kurtosis (Kt), for the same purposes. Our results demonstrate that the embedding time delay (&tau;), as well as the MSE and Kt values, can be successfully used to discriminate between Pers. AF and NSR. Moreover, we demonstrate that &tau; and Kt can successfully discriminate Paro. AF from NSR. Our results suggest that the nonlinear time-delayed embedding method and information-based methods provide robust discriminating features to distinguish both Pers. AF and Paro. AF from NSR, thus potentially enabling effective treatment before the onset of chaotic Pers. AF.
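The building block of multiscale entropy is sample entropy applied to coarse-grained versions of the signal. A compact sketch of both pieces (this is the textbook construction, not the paper's implementation; the test signals are synthetic, not ECG data):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: -ln of the conditional probability that sequences
    matching for m points (within tolerance r, Chebyshev distance) also
    match for m + 1 points. Higher values indicate less regularity."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)      # exclude self-matches
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    """MSE coarse-graining: non-overlapping window averages of length `scale`."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(1)
noise = rng.standard_normal(400)            # irregular signal
regular = np.sin(0.5 * np.arange(400))      # highly regular signal
# White noise should score higher (less regular) than the sine.
print(sample_entropy(noise), sample_entropy(regular))
```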

]]>Entropy doi: 10.3390/e22050530

Authors: Thomas B. Moeslund Sergio Escalera Gholamreza Anbarjafari Kamal Nasrollahi Jun Wan

Human behaviour analysis has introduced several challenges in various fields, such as applied information theory, affective computing, robotics, biometrics and pattern recognition [...]

]]>Entropy doi: 10.3390/e22050529

Authors: Susanna Rampichini Taian Martins Vieira Paolo Castiglioni Giampiero Merati

Surface electromyography (sEMG) records the electrical activity of muscle fibers during contraction: one of its uses is to assess changes taking place within muscles in the course of a fatiguing contraction, providing insights into our understanding of muscle fatigue in training protocols and rehabilitation medicine. Until recently, these myoelectric manifestations of muscle fatigue (MMF) have been assessed essentially by linear sEMG analyses. However, sEMG shows complex behavior, due to many concurrent factors. Therefore, in recent years, complexity-based methods have been tentatively applied to the sEMG signal to better identify the MMF onset during sustained contractions. In this review, after concisely describing the traditional linear methods employed to assess MMF, we present the complexity methods used for sEMG analysis based on an extensive literature search. We show that some of these indices, like those derived from recurrence plots, entropy or fractal analysis, can detect MMF efficiently. However, we also show that more work remains to be done to compare the complexity indices in terms of reliability and sensitivity; to optimize the choice of embedding dimension, time delay and threshold distance in reconstructing the phase space; and to elucidate the relationship between complexity estimators and the physiological phenomena underlying the onset of MMF in exercising muscles.

]]>Entropy doi: 10.3390/e22050528

Authors: Gilberto Corso Gabriel M. F. Ferreira Thomas M. Lewinsohn

Entropy-based indices are long-established measures of biological diversity, nowadays used to gauge the partitioning of diversity at different spatial scales. Here, we tackle the measurement of the diversity of interactions between two sets of organisms, such as plants and their pollinators. Actual interactions in ecological communities are depicted as bipartite networks or interaction matrices. Recent studies concentrate on distinctive structural patterns, such as nestedness or modularity, found in different modes of interaction. By contrast, we investigate mutual information as a general measure of structure in interactive networks. Mutual information (MI) measures the degree of reciprocal matching or specialization between interacting organisms. To ascertain its usefulness as a general measure, we explore (a) analytical solutions for different models; (b) the response of MI to network parameters, especially size and occupancy; (c) MI in nested, modular, and compound topologies. MI varies with the fundamental matrix parameters, dimension and occupancy, for which it can be adjusted or normalized. Apparent differences among topologies are contingent on dimensions and occupancy, rather than on the topological patterns themselves. As a general measure of interaction structure, MI is applicable to conceptually and empirically fruitful analyses, such as comparing similar ecological networks along geographical gradients or among interaction modalities in mutualistic or antagonistic networks.

]]>Entropy doi: 10.3390/e22050527

Authors: Binglin Wang Xiaojun Duan Liang Yan Juan Deng Jiangtao Chen

The leader&ndash;follower structure is widely used in unmanned aerial vehicle formation. This paper adopts proportional-integral-derivative (PID) and linear quadratic regulator controllers to construct the leader&ndash;follower formation. Tuning the PID controllers is generally empirical; hence, various surrogate models have been introduced to identify more refined parameters at relatively lower cost. However, the construction of surrogate models faces the problem that singular points may affect the accuracy, such that global surrogate models may be invalid. Thus, to tune controllers quickly and accurately, the regional surrogate model technique (RSMT), based on analyzing the regional information entropy, is proposed. The proposed RSMT works only with the successful samples to mitigate the effect of singular points, together with a classifier that screens out failed samples. Implementing the RSMT with various kinds of surrogate models, this study evaluates the Pareto fronts of the original simulation model and the RSMT to compare their effectiveness. The results show that the RSMT can accurately reconstruct the simulation model. Compared with the global surrogate models, the RSMT reduces the run time of tuning PID controllers by one order of magnitude, and it improves the accuracy of the surrogate models by dozens of orders of magnitude.

]]>Entropy doi: 10.3390/e22050526

Authors: Gautam Aishwarya Mokshay Madiman

The analogues of Arimoto&rsquo;s definition of conditional R&eacute;nyi entropy and R&eacute;nyi mutual information are explored for abstract alphabets. These quantities, although dependent on the reference measure, have some useful properties similar to those known in the discrete setting. In addition to laying out some such basic properties and the relations to R&eacute;nyi divergences, the relationships between the families of mutual informations defined by Sibson, Augustin-Csisz&aacute;r, and Lapidoth-Pfister, as well as the corresponding capacities, are explored.
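For reference, the discrete-alphabet definitions being generalized can be written compactly (order $\alpha > 0$, $\alpha \neq 1$); this is the standard form of Arimoto's proposal, with the abstract-alphabet versions of the paper replacing the sums by integrals against a reference measure:

```latex
% Arimoto's conditional Renyi entropy of order \alpha (discrete case),
% and the induced Renyi mutual information.
H_\alpha^{\mathrm{A}}(X \mid Y)
  = \frac{\alpha}{1-\alpha}
    \log \sum_{y} \Bigl( \sum_{x} P_{XY}(x,y)^{\alpha} \Bigr)^{1/\alpha},
\qquad
I_\alpha^{\mathrm{A}}(X ; Y) = H_\alpha(X) - H_\alpha^{\mathrm{A}}(X \mid Y).
```

As $\alpha \to 1$, both quantities recover the ordinary conditional Shannon entropy and mutual information.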

]]>Entropy doi: 10.3390/e22050524

Authors: Maria Masoliver Cristina Masoller

We study how sensory neurons detect and transmit a weak external stimulus. We use the FitzHugh&ndash;Nagumo model to simulate the neuronal activity. We consider a sub-threshold stimulus, i.e., the stimulus is below the threshold needed for triggering action potentials (spikes). However, in the presence of noise, the neuron that perceives the stimulus fires a sequence of action potentials (a spike train) that carries the stimulus&rsquo; information. To shed light on how the stimulus&rsquo; information can be encoded and transmitted, we consider the simplest case of two coupled neurons, such that one neuron (referred to as neuron 1) perceives a subthreshold periodic signal but the second neuron (neuron 2) does not perceive the signal. We show that, for appropriate coupling and noise strengths, both neurons fire spike trains that have symbolic patterns (defined by the temporal structure of the inter-spike intervals), whose frequencies of occurrence depend on the signal&rsquo;s amplitude and period, and are similar for both neurons. In this way, the signal information encoded in the spike train of neuron 1 propagates to the spike train of neuron 2. Our results suggest that sensory neurons can exploit the presence of neural noise to fire spike trains where the information of a subthreshold stimulus is encoded in over-expressed and/or less-expressed symbolic patterns.
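The setup can be sketched with an Euler&ndash;Maruyama integration of two coupled FitzHugh&ndash;Nagumo neurons, where only neuron 1 receives the subthreshold periodic signal. All parameter values below are illustrative choices for the sketch, not those used in the paper.

```python
import numpy as np

def coupled_fhn(T=1000.0, dt=0.05, D=0.015, c=0.05, A=0.05, period=500.0, seed=0):
    """Euler-Maruyama sketch of two coupled FitzHugh-Nagumo neurons:
    neuron 1 receives a subthreshold periodic signal, neuron 2 does not,
    and both receive independent noise of intensity D."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    u = np.zeros((n, 2))                 # fast (voltage-like) variables
    v = np.zeros((n, 2))                 # slow recovery variables
    eps = 0.01
    for k in range(n - 1):
        t = k * dt
        signal = np.array([A * np.sin(2.0 * np.pi * t / period), 0.0])
        coupling = c * (u[k, ::-1] - u[k])          # mutual linear coupling
        du = u[k] - u[k] ** 3 / 3.0 - v[k] + coupling + signal
        dv = eps * (u[k] + 1.05)                    # excitable regime (a = 1.05)
        u[k + 1] = u[k] + dt * du + np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
        v[k + 1] = v[k] + dt * dv
    return u

u = coupled_fhn()
print(u.shape)   # both neurons' voltage traces
```

From traces like these, the paper's symbolic patterns would be built by detecting spikes (threshold crossings of u) and classifying the temporal structure of the inter-spike intervals.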

]]>Entropy doi: 10.3390/e22050525

Authors: Gernot Schaller Julian Ablaßmayer

We study the coarse-graining approach to derive a generator for the evolution of an open quantum system over a finite time interval. The approach does not require a secular approximation but nevertheless generally leads to a Lindblad&ndash;Gorini&ndash;Kossakowski&ndash;Sudarshan generator. By combining the formalism with full counting statistics, we can demonstrate a consistent thermodynamic framework, once the switching work required for the coupling and decoupling with the reservoir is included. Particularly, we can write the second law in standard form, with the only difference that heat currents must be defined with respect to the reservoir. We exemplify our findings with simple but pedagogical examples.

]]>Entropy doi: 10.3390/e22050523

Authors: Stefano Marmani Valerio Ficcadenti Parmjit Kaur Gurjeet Dhesi

In Italy, elections occur often; indeed, almost every year citizens are involved in a democratic choice to decide the leaders of different administrative entities. Sometimes citizens are called to vote to fill more than one office in more than one administrative body. This phenomenon has occurred 35 times since 1948; it creates the peculiar condition of the same sample of people expressing political decisions at the same time. Therefore, the Italian contemporaneous ballots constitute an occasion to measure coherence and chaos in the way political opinion is expressed. In this paper, we address all the Italian elections that occurred between 1948 and 2018. We collect the number of votes per party at each administrative level and treat each election as a manifestation of a complex system. Then, we use the Shannon entropy and the Gini index to study the degree of disorder manifested during different types of elections at the municipality level. A particular focus is devoted to the contemporaneous elections. Such cases exhibit different disorder dynamics in the contemporaneous ballots when different administrative levels are involved. Furthermore, some features that characterize different entropic regimes have emerged.
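The two per-municipality disorder measures are straightforward to reproduce; a sketch with made-up vote counts (Shannon entropy in bits; Gini index as the mean absolute difference between all pairs of vote shares, normalized by twice the mean share):

```python
import numpy as np

def vote_disorder(votes):
    """Shannon entropy (bits) and Gini index of one municipality's vote
    distribution across parties. The vote counts used below are invented
    for illustration, not actual Italian election data."""
    p = np.asarray(votes, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    H = -np.sum(nz * np.log2(nz))                      # Shannon entropy
    # Gini = sum |p_i - p_j| / (2 n^2 mu); for shares, mu = 1/n, so divide by 2n.
    G = np.sum(np.abs(p[:, None] - p[None, :])) / (2.0 * p.size)
    return float(H), float(G)

print(vote_disorder([400, 300, 200, 100]))   # moderately fragmented vote
print(vote_disorder([250, 250, 250, 250]))   # maximal entropy, zero Gini
```

A perfectly even split maximizes entropy and zeroes the Gini index; a single-party sweep does the opposite.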

]]>Entropy doi: 10.3390/e22050522

Authors: Roni Bhowmik Shouyang Wang

In the field of business research methods, the literature review is more relevant than ever, even though traditional literature reviews have lacked rigor and flexibility, raising questions about their quality and trustworthiness. This research provides a literature review based on a systematic database search and cross-reference snowballing. We review previous studies featuring generalized autoregressive conditional heteroskedastic (GARCH) family models of stock market return and volatility. The stock market plays a pivotal role in today’s world economic activity, serving as a “barometer” and “alarm” for the economic and financial activities of a country or region. In order to manage uncertainty and risk in the stock market, it is particularly important to measure the volatility of stock index returns effectively. The main purpose of this review is thus to identify GARCH models recommended as effective for analyzing market returns and volatilities. A secondary purpose is to conduct a content analysis of the return and volatility literature over a period of 12 years (2008–2019) across 50 different papers. The study found that research in this area has changed significantly within the past 10 years and that most researchers have focused on developing stock markets.

]]>Entropy doi: 10.3390/e22050521

Authors: Elena R. Loubenets Christian Käding

Optimal realizations of quantum technology tasks lead to the necessity of a detailed analytical study of the behavior of a d-level quantum system (qudit) under a time-dependent Hamiltonian. In the present article, we introduce a new general formalism describing the unitary evolution of a qudit ( d &ge; 2 ) in terms of the Bloch-like vector space and specify how, in a general case, this formalism is related to finding time-dependent parameters in the exponential representation of the evolution operator under an arbitrary time-dependent Hamiltonian. Applying this new general formalism to a qubit case ( d = 2 ) , we specify the unitary evolution of a qubit via the evolution of a unit vector in R 4 , and this allows us to derive the precise analytical expression of the qubit unitary evolution operator for a wide class of nonstationary Hamiltonians. This new analytical expression includes the qubit solutions known in the literature only as particular cases.

]]>Entropy doi: 10.3390/e22050520

Authors: Jinle Xiong Xueyu Liang Lina Zhao Benny Lo Jianqing Li Chengyu Liu

Due to the wide inter- and intra-individual variability, short-term heart rate variability (HRV) analysis (usually 5 min) might lead to inaccuracy in detecting heart failure. Therefore, RR-interval segmentation, which can reflect the individual heart condition, has been a key research challenge for accurate detection of heart failure. Previous studies mainly focused on analyzing the entire 24-h ECG recordings from all individuals in the database, which often led to poor detection rates. In this study, we propose a set of data refinement procedures that automatically extract heart failure segments and yield better detection of heart failure. The procedures comprise three steps: (1) select fast heart rate sequences, (2) apply the dynamic time warping (DTW) measure to filter out dissimilar segments, and (3) pick out individuals with large numbers of preserved segments. A physical threshold-based sample entropy (SampEn) was applied to distinguish congestive heart failure (CHF) subjects from normal sinus rhythm (NSR) subjects, and results using the traditional threshold are also discussed. Experiments on the PhysioNet/MIT RR Interval Databases showed that, in SampEn analysis (embedding dimension m = 1, tolerance threshold r = 12 ms and time series length N = 300), the accuracy after data refinement increased from 75.07% to 90.46%. Meanwhile, the area under the receiver operating characteristic curve (AUC) for the proposed procedures reached 95.73%, which outperforms the original method (i.e., without the proposed data refinement procedures) with an AUC of 76.83%. The results show that our proposed data refinement procedures can significantly improve the accuracy of heart failure detection.
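The physical-threshold sample entropy used above (m = 1, r = 12 ms, with r in milliseconds rather than as a fraction of the standard deviation) can be sketched as a plain O(N²) template count. The RR series in the usage example is synthetic, and this is an illustrative implementation, not the authors' code:

```python
from math import log

def sampen(rr, m=1, r=12.0):
    """Sample entropy of an RR-interval series, with tolerance r given in
    the same physical units as rr (here: milliseconds)."""
    n = len(rr)
    B = A = 0  # counts of template matches of length m and m + 1
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # Chebyshev distance between the two length-m templates
            if max(abs(rr[i + k] - rr[j + k]) for k in range(m)) <= r:
                B += 1
                # do the templates still match when extended by one sample?
                if abs(rr[i + m] - rr[j + m]) <= r:
                    A += 1
    return -log(A / B) if A > 0 and B > 0 else float("inf")
```

A perfectly regular series yields SampEn = 0, while an irregular series (as found in pathological or noisy recordings) yields a positive value, which is the contrast the CHF/NSR discrimination exploits.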

]]>Entropy doi: 10.3390/e22050519

Authors: Saad Alqithami Rahmat Budiarto Musaad Alzahrani Henry Hexmoor

Due to the complexity of an open multi-agent system, agents&rsquo; interactions are instantiated spontaneously, resulting in beneficent collaborations with one another for mutual actions that are beyond one&rsquo;s current capabilities. Repeated patterns of interactions shape a feature of their organizational structure when those agents self-organize themselves for a long-term objective. This paper, therefore, aims to provide an understanding of social capital in organizations that are open-membership multi-agent systems, with an emphasis in our formulation on the dynamic network of social interactions that, in part, elucidates evolving structures and impromptu topologies of networks. We model an open source project as an organizational network and provide definitions and formulations to correlate the proposed mechanism of social capital with the achievement of an organizational charter, for example, optimized productivity. To empirically evaluate our model, we conducted a case study of an open source software project to demonstrate how social capital can be created and measured within this type of organization. The results indicate that the values of social capital are positively correlated with the optimization of agents&rsquo; productivity toward successful completion of the project.

]]>Entropy doi: 10.3390/e22050518

Authors: Carlos Dafonte Alejandra Rodríguez Minia Manteiga Ángel Gómez Bernardino Arcay

This paper analyzes and compares the sensitivity and suitability of several artificial intelligence techniques applied to the Morgan&ndash;Keenan (MK) system for the classification of stars. The MK system is based on a sequence of spectral prototypes that allows classifying stars according to their effective temperature and luminosity through the study of their optical stellar spectra. Here, we include the method description and the results achieved by the different intelligent models developed thus far in our ongoing stellar classification project: fuzzy knowledge-based systems, backpropagation, radial basis function (RBF) and Kohonen artificial neural networks. Since one of today&rsquo;s major challenges in this area of astrophysics is the exploitation of large terrestrial and space databases, we propose a final hybrid system that integrates the best intelligent techniques, automatically collects the most important spectral features, and determines the spectral type and luminosity level of the stars according to the MK standard system. This hybrid approach truly emulates the behavior of human experts in this area, resulting in higher success rates than any of the individual implemented techniques. In the final classification system, the most suitable methods are selected for each individual spectrum, which implies a remarkable contribution to the automatic classification process.

]]>Entropy doi: 10.3390/e22050517

Authors: Ali M. Hasan Mohammed M. AL-Jawad Hamid A. Jalab Hadil Shaiba Rabha W. Ibrahim Ala’a R. AL-Shamasneh

Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick and accurate method to mitigate the overloading of radiologists&rsquo; efforts to diagnose the suspected cases. This study presents the combination of deep-learning-extracted features with Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. Pre-processing is used to reduce the effect of intensity variations between CT slices, and histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan then undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset comprising 321 patients is 99.68%.

]]>Entropy doi: 10.3390/e22050516

Authors: Karl J. Friston Wanja Wiese J. Allan Hobson

This essay addresses Cartesian duality and how its implicit dialectic might be repaired using physics and information theory. Our agenda is to describe a key distinction in the physical sciences that may provide a foundation for the distinction between mind and matter, and between sentient and intentional systems. From this perspective, it becomes tenable to talk about the physics of sentience and &lsquo;forces&rsquo; that underwrite our beliefs (in the sense of probability distributions represented by our internal states), which may ground our mental states and consciousness. We will refer to this view as Markovian monism, which entails two claims: (1) fundamentally, there is only one type of thing and only one type of irreducible property (hence monism). (2) All systems possessing a Markov blanket have properties that are relevant for understanding the mind and consciousness: if such systems have mental properties, then they have them partly by virtue of possessing a Markov blanket (hence Markovian). Markovian monism rests upon the information geometry of random dynamic systems. In brief, the information geometry induced in any system&mdash;whose internal states can be distinguished from external states&mdash;must acquire a dual aspect. This dual aspect concerns the (intrinsic) information geometry of the probabilistic evolution of internal states and a separate (extrinsic) information geometry of probabilistic beliefs about external states that are parameterised by internal states. We call these intrinsic (i.e., mechanical, or state-based) and extrinsic (i.e., Markovian, or belief-based) information geometries, respectively. Although these mathematical notions may sound complicated, they are fairly straightforward to handle, and may offer a means through which to frame the origins of consciousness.

]]>Entropy doi: 10.3390/e22050515

Authors: Mathieu Beau Adolfo del Campo

We consider the nonadiabatic energy fluctuations of a many-body system in a time-dependent harmonic trap. In the presence of scale invariance, the dynamics becomes self-similar and the nonadiabatic energy fluctuations can be found in terms of the initial expectation values of the second moments of the Hamiltonian, square position, and squeezing operators. Nonadiabatic features are expressed in terms of the scaling factor governing the size of the atomic cloud, which can be extracted from time-of-flight images. We apply this exact relation to a number of examples: the single-particle harmonic oscillator; the one-dimensional Calogero-Sutherland model, describing bosons with inverse-square interactions, which includes the non-interacting Bose gas and the Tonks-Girardeau gas as limiting cases; and the unitary Fermi gas. We illustrate these results for various expansion protocols involving sudden quenches of the trap frequency, linear ramps and shortcuts to adiabaticity. Our results pave the way to the experimental study of nonadiabatic energy fluctuations in driven quantum fluids.

]]>Entropy doi: 10.3390/e22050514

Authors: Chetan Prakash Chris Fields Donald D. Hoffman Robert Prentner Manish Singh

A theory of consciousness, whatever else it may do, must address the structure of experience. Our perceptual experiences are richly structured. Simply seeing a red apple, swaying between green leaves on a stout tree, involves symmetries, geometries, orders, topologies, and algebras of events. Are these structures also present in the world, fully independent of their observation? Perceptual theorists of many persuasions&mdash;from computational to radical embodied&mdash;say yes: perception veridically presents to observers structures that exist in an observer-independent world; and it does so because natural selection shapes perceptual systems to be increasingly veridical. Here we study four structures: total orders, permutation groups, cyclic groups, and measurable spaces. We ask whether the payoff functions that drive evolution by natural selection are homomorphisms of these structures. We prove, in each case, that generically the answer is no: as the number of world states and payoff values go to infinity, the probability that a payoff function is a homomorphism goes to zero. We conclude that natural selection almost surely shapes perceptions of these structures to be non-veridical. This is consistent with the interface theory of perception, which claims that natural selection shapes perceptual systems not to provide veridical perceptions, but to serve as species-specific interfaces that guide adaptive behavior. Our results present a constraint for any theory of consciousness which assumes that structure in perceptual experience is shaped by natural selection.

]]>Entropy doi: 10.3390/e22050513

Authors: Ang Li Luis Pericchi Kun Wang

There is not much literature on objective Bayesian analysis for binary classification problems, especially for intrinsic prior related methods. On the other hand, variational inference methods have been employed to solve classification problems using probit regression and logistic regression with normal priors. In this article, we propose to apply the variational approximation on probit regression models with intrinsic prior. We review the mean-field variational method and the procedure of developing intrinsic prior for the probit regression model. We then present our work on implementing the variational Bayesian probit regression model using intrinsic prior. Publicly available data from the world&rsquo;s largest peer-to-peer lending platform, LendingClub, will be used to illustrate how model output uncertainties are addressed through the framework we proposed. With LendingClub data, the target variable is the final status of a loan, either charged-off or fully paid. Investors may very well be interested in how predictive features like FICO, amount financed, income, etc. may affect the final loan status.

]]>Entropy doi: 10.3390/e22050512

Authors: Barouch Matzliach Irad Ben-Gal Evgeny Kagan

The paper considers the detection of multiple targets by a group of mobile robots that perform under uncertainty. The agents are equipped with sensors with positive and non-negligible probabilities of detecting the targets at different distances. The goal is to define the trajectories of the agents that can lead to the detection of the targets in minimal time. The suggested solution follows the classical Koopman&rsquo;s approach applied to an occupancy grid, while the decision-making and control schemes are conducted based on information-theoretic criteria. Sensor fusion in each agent and over the agents is implemented using a general Bayesian scheme. The presented procedures follow the expected information gain approach utilizing the &ldquo;center of view&rdquo; and the &ldquo;center of gravity&rdquo; algorithms. These methods are compared with a simulated learning method. The activity of the procedures is analyzed using numerical simulations.
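The Bayesian sensor-fusion step over an occupancy grid can be sketched as a single-cell-observation belief update. The grid cells, detection probabilities, and the zero-false-alarm assumption below are illustrative simplifications, not the paper's exact sensor model:

```python
def bayes_update(prior, p_detect, observed, detection):
    """One Bayesian update of the target-location belief over grid cells,
    assuming the sensor inspects one cell and never raises false alarms.
    prior, p_detect: dicts mapping cell -> probability."""
    post = {}
    for cell, p in prior.items():
        if detection:                       # detection implies the target is there
            like = p_detect[cell] if cell == observed else 0.0
        else:                               # a miss is possible even if it is there
            like = 1.0 - p_detect[cell] if cell == observed else 1.0
        post[cell] = like * p
    z = sum(post.values())                  # normalize the posterior
    return {cell: v / z for cell, v in post.items()}
```

A non-detection shifts belief away from the inspected cell without zeroing it out, which is what lets an agent's expected information gain guide where to look next.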

]]>Entropy doi: 10.3390/e22050511

Authors: Lizheng Pan Zeming Yin Shigang She Aiguo Song

Emotion recognition, which captures human inner perception, has very promising applications in human-computer interaction. In order to improve the accuracy of emotion recognition, a novel method combining fused nonlinear features with a team-collaboration identification strategy is proposed for emotion recognition using physiological signals. Four nonlinear features, namely approximate entropy (ApEn), sample entropy (SaEn), fuzzy entropy (FuEn) and wavelet packet entropy (WpEn), are employed to capture the emotional states conveyed by each type of physiological signal. The features of the different physiological signals are then fused to represent the emotional states from multiple perspectives. Since each classifier has its own advantages and disadvantages, a team-collaboration model is built, and its decision-making mechanism is designed according to the proposed team-collaboration identification strategy, which fuses a support vector machine (SVM), a decision tree (DT) and an extreme learning machine (ELM). SVM is selected as the main classifier, with DT and ELM as auxiliary classifiers. Under the designed decision-making mechanism, samples that are easy for SVM to identify are decided by SVM alone, whereas the remaining samples are decided collaboratively by SVM, DT and ELM; this exploits the characteristics of each classifier and improves the classification accuracy. The effectiveness and universality of the proposed method are verified on the Augsburg database and the Database for Emotion Analysis using Physiological Signals (DEAP). The experimental results uniformly indicate that the proposed method combining fused nonlinear features and the team-collaboration identification strategy outperforms existing methods.
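The team-collaboration decision mechanism can be sketched as a confidence-gated vote. The margin threshold and the stub classifiers below are hypothetical, and the paper's actual gating criterion for "easy" samples may differ:

```python
def team_decide(svm, dt, elm, x, margin=0.5):
    """Confidence-gated team decision: the main classifier (SVM) decides
    alone when its decision score is confident; otherwise the three
    classifiers vote by majority."""
    label, score = svm(x)       # svm returns (predicted label, decision score)
    if abs(score) >= margin:
        return label            # easy sample: SVM decides directly
    votes = [label, dt(x), elm(x)]
    return max(set(votes), key=votes.count)
```

The gate keeps the fast path for confident predictions while letting the auxiliary classifiers override the main one only on ambiguous samples.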

]]>Entropy doi: 10.3390/e22050510

Authors: Longlong Liu Di Ma Ahmad Taher Azar Quanmin Zhu

In this paper, a gradient descent algorithm is proposed for the parameter estimation of multi-input and multi-output (MIMO) total non-linear dynamic models. Firstly, the MIMO total non-linear model is mapped to a non-completely connected feedforward neural network; that is, the parameters of the total non-linear model are mapped to the connection weights of the network. Then, based on minimization of the network error, a weight-updating algorithm, i.e., an estimation algorithm for the model parameters, is proposed together with convergence conditions for the non-completely connected feedforward network. To determine the variables of the model set, a method of model structure detection is proposed for selecting a group of important terms from the whole candidate set. In order to verify the usefulness of the parameter identification process, we provide a virtual bench test example for the numerical analysis and user-friendly instructions for potential applications.
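A toy illustration of the parameter-estimation idea: a model that is nonlinear in its inputs but linear in its parameters (standing in for the "total non-linear model"), whose parameters play the role of connection weights and are estimated by gradient descent on the output error. The candidate terms and true parameter values are invented for the example:

```python
def regressors(u1, u2):
    # hypothetical candidate terms of a model nonlinear in the inputs
    return [u1, u1 * u2, u2 ** 2]

def fit_gd(data, lr=0.05, epochs=3000):
    """Estimate the parameters (the 'connection weights') by gradient
    descent on the squared output error, one sample at a time."""
    theta = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (u1, u2), y in data:
            phi = regressors(u1, u2)
            err = sum(t * p for t, p in zip(theta, phi)) - y
            theta = [t - lr * err * p for t, p in zip(theta, phi)]
    return theta

# synthetic input-output data generated from known true parameters
true = [2.0, -1.0, 0.5]
data = [((u1, u2), sum(t * p for t, p in zip(true, regressors(u1, u2))))
        for u1 in (0.2, 0.5, 1.0) for u2 in (-0.5, 0.3, 0.8)]
theta = fit_gd(data)  # converges toward [2.0, -1.0, 0.5]
```

Because the data are noiseless and the three regressor columns are linearly independent over the nine input combinations, the per-sample updates contract to the true parameters for any sufficiently small learning rate.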

]]>Entropy doi: 10.3390/e22050509

Authors: Rafał Rak Ewa Rak

Many networks generated by nature have two generic properties: they are formed in the process of preferential attachment, and they are scale-free. Considering these features, by interfering with the mechanism of preferential attachment, we propose a generalisation of the Barab&aacute;si&ndash;Albert model&mdash;the &lsquo;Fractional Preferential Attachment&rsquo; (FPA) scale-free network model&mdash;that generates networks with time-independent degree distributions p ( k ) &sim; k &minus; &gamma; with degree exponent 2 &lt; &gamma; &le; 3 (where &gamma; = 3 corresponds to the typical value of the BA model). In the FPA model, the element controlling the network properties is the parameter f, where f &isin; ( 0 , 1 &rang; . We study the statistical properties of the numerically generated networks for different values of the parameter f. We investigate topological properties of FPA networks such as the degree distribution, degree correlation (network assortativity), clustering coefficient, average node degree, network diameter, average shortest path length and features of fractality, and compare the obtained values with results for various synthetic and real-world networks. It is found that, depending on f, the FPA model generates networks with parameters similar to those of real-world networks. Furthermore, it is shown that the parameter f has a significant impact on, among others, the degree distribution and degree correlation of the generated networks. The FPA scale-free network model can therefore be an interesting alternative to existing network models. In addition, it turns out that, regardless of the value of f, FPA networks are not fractal.
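One natural reading of "interfering with the mechanism of preferential attachment" is to attach new nodes with probability proportional to degree**f; the sketch below uses that rule purely for illustration, and the paper's exact FPA rule may differ:

```python
import random

def fpa_network(n, m=2, f=0.5, seed=1):
    """Grow a network in which each new node attaches to m distinct existing
    nodes chosen with probability proportional to degree**f (f = 1 would
    recover linear preferential attachment as in the BA model)."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}            # seed graph: a single edge 0-1
    edges = [(0, 1)]
    for new in range(2, n):
        weights = [(v, d ** f) for v, d in degree.items()]
        total = sum(w for _, w in weights)
        targets = set()
        while len(targets) < min(m, len(degree)):
            r = rng.uniform(0, total)  # roulette-wheel selection
            acc = 0.0
            for v, w in weights:
                acc += w
                if r <= acc:
                    targets.add(v)
                    break
        degree[new] = 0
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return degree, edges
```

Lowering f below 1 damps the rich-get-richer effect, which is the kind of interference that shifts the degree exponent away from the BA value of 3.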

]]>Entropy doi: 10.3390/e22050508

Authors: Gao Wu Zhang

With increasing complexity of electronic warfare environments, smart jammers are beginning to play an important role. This study investigates a method of power minimization-based jamming waveform design in the presence of multiple targets, in which the performance of a radar system can be degraded according to the jammers&rsquo; different tasks. By establishing an optimization model, the power consumption of the designed jamming spectrum is minimized. The jamming spectrum with power control is constrained by a specified signal-to-interference-plus-noise ratio (SINR) or mutual information (MI) requirement. Considering that precise characterizations of the radar-transmitted spectrum are rare in practice, a single-robust jamming waveform design method is proposed. Furthermore, recognizing that the ground jammer is not integrated with the target, a double-robust jamming waveform design method is studied. Simulation results show that power minimization-based single-robust jamming spectra can maximize the power-saving performance of smart jammers in the local worst-case scenario. Moreover, double-robust jamming spectra can minimize the power consumption in the global worst-case scenario and provide useful guidance for the waveform design of ground jammers.

]]>Entropy doi: 10.3390/e22050507

Authors: Róbert Kovács Antonio M. Scarfone Sumiyoshi Abe

The present Special Issue, &lsquo;Entropy and Non-Equilibrium Statistical Mechanics&rsquo;, consists of seven original research papers [...]

]]>Entropy doi: 10.3390/e22050506

Authors: Andrzej Biś Agnieszka Namiecińska

The purpose of this paper is to elucidate the interrelations between three essentially different concepts: solenoids, topological entropy, and Hausdorff dimension. For this purpose, we describe the dynamics of a solenoid by topological entropy-like quantities and investigate the relations between them. For L-Lipschitz solenoids and locally &lambda;-expanding solenoids, we show that the topological entropy and fractal dimensions are closely related. For a locally &lambda;-expanding solenoid, we prove that its topological entropy is bounded from below by the Hausdorff dimension of X multiplied by the logarithm of &lambda; .

]]>Entropy doi: 10.3390/e22050505

Authors: Víctor Martínez-Cagigal Eduardo Santamaría-Vázquez Roberto Hornero

Figure 5 of the original paper contains errors [...]

]]>Entropy doi: 10.3390/e22050504

Authors: Ce Wang Caishi Wang

As a discrete-time quantum walk model on the one-dimensional integer lattice Z , the quantum walk recently constructed by Wang and Ye [Caishi Wang and Xiaojuan Ye, Quantum walk in terms of quantum Bernoulli noises, Quantum Information Processing 15 (2016), 1897&ndash;1908] exhibits quite different features. In this paper, we extend this walk to a higher-dimensional case. More precisely, for a general positive integer d &ge; 2 , by using quantum Bernoulli noises, we introduce a model of discrete-time quantum walk on the d-dimensional integer lattice Z d , which we call the d-dimensional QBN walk. The d-dimensional QBN walk shares the same coin space with the quantum walk constructed by Wang and Ye, although it is a higher-dimensional extension of the latter. Moreover, we prove that, for a range of choices of its initial state, the d-dimensional QBN walk has a limit probability distribution of d-dimensional standard Gauss type, which is in sharp contrast with the case of usual higher-dimensional quantum walks. Some other results are also obtained.

]]>Entropy doi: 10.3390/e22050503

Authors: Ahmad Faraj Hassan Jaber Khaled Chahine Jalal Faraj Mohamad Ramadan Hicham El Hage Mahmoud Khaled

In this manuscript, an innovative concept of producing power from a thermoelectric generator (TEG) is evaluated. This concept takes advantage of the exhaust airflow of all-air heating, ventilating, and air-conditioning (HVAC) systems together with solar irradiation. As a first step, a parametric analysis of power generation from TEGs for different practical configurations is performed, and recommendations for practical applications are drawn from its results. To this end, a one-dimensional steady-state solution of the heat diffusion equation is considered with various boundary conditions representing the applied configurations. It is revealed that the most promising configuration corresponds to the TEG module exposed to a hot fluid at one face and a cold fluid at the other face. Based on the parametric analysis, the innovative concept is then formulated and analyzed using appropriate thermal modeling. It is shown that for solar radiation of 2000 W/m2 and a space cooling load of 20 kW, a 40 &times; 40 cm2 flat plate is capable of generating 3.8 W of electrical power. Finally, an economic study shows that this system saves about $6 monthly, with a 3-year payback period at 2000 W/m2 solar radiation. Environmentally, the system is also capable of reducing about 1 ton of CO2 emissions yearly.

]]>Entropy doi: 10.3390/e22050502

Authors: Víctor Romero-Rochín

Based on the foundations of thermodynamics and the equilibrium conditions for the coexistence of two phases in a magnetic Ising-like system, we show, first, that there is a critical point where the isothermal susceptibility diverges and the specific heat may remain finite, and second, that near the critical point the entropy of the system, and therefore all free energies, do obey scaling. Although we limit ourselves to such a system, we elaborate about the possibilities of finding universality, as well as the precise values of the critical exponents using thermodynamics only.

]]>Entropy doi: 10.3390/e22050501

Authors: Bozhidar Stoyanov Borislav Stoyanov

In this study, we present a medical image steganographic hiding scheme based on a nuclear spin generator system. A detailed theoretical and experimental analysis of the proposed algorithm is provided using histogram analysis, peak signal-to-noise ratio, key space calculation, and statistical package analysis. The results show the good performance of the new medical image steganographic scheme.
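Of the quality metrics listed, the peak signal-to-noise ratio is straightforward to sketch; the 8-pixel cover/stego pair below is a made-up example of LSB-scale distortions:

```python
from math import log10

def psnr(original, stego, max_val=255):
    """Peak signal-to-noise ratio (dB) between a cover image and its stego
    version, both given as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * log10(max_val ** 2 / mse)

# a made-up 8-pixel cover image and its stego version with LSB-scale changes
cover = [120, 121, 119, 200, 40, 41, 42, 255]
stego = [120, 120, 119, 201, 40, 41, 43, 254]
```

Values well above 40 dB, as in this toy pair, are typically taken to mean the embedding is visually imperceptible.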

]]>Entropy doi: 10.3390/e22050500

Authors: Haiyan Ye Huilin Lai Demei Li Yanbiao Gan Chuandong Lin Lu Chen Aiguo Xu

Based on the framework of our previous work [H.L. Lai et al., Phys. Rev. E, 94, 023106 (2016)], we continue to study the effects of Knudsen number on two-dimensional Rayleigh&ndash;Taylor (RT) instability in compressible fluid via the discrete Boltzmann method. It is found that the Knudsen number effects strongly inhibit the RT instability but always enormously strengthen both the global hydrodynamic non-equilibrium (HNE) and thermodynamic non-equilibrium (TNE) effects. Moreover, when Knudsen number increases, the Kelvin&ndash;Helmholtz instability induced by the development of the RT instability is difficult to sufficiently develop in the later stage. Different from the traditional computational fluid dynamics, the discrete Boltzmann method further presents a wealth of non-equilibrium information. Specifically, the two-dimensional TNE quantities demonstrate that, far from the disturbance interface, the value of TNE strength is basically zero; the TNE effects are mainly concentrated on both sides of the interface, which is closely related to the gradient of macroscopic quantities. The global TNE first decreases then increases with evolution. The relevant physical mechanisms are analyzed and discussed.

]]>Entropy doi: 10.3390/e22050499

Authors: Martin Hilbert David Darmon

The machine-learning paradigm promises traders reduced uncertainty through better predictions made by ever more complex algorithms. We ask about detectable results of both uncertainty and complexity at the aggregated market level. We analyzed almost one billion trades of eight currency pairs (2007&ndash;2017) and show that increased algorithmic trading is associated with more complex subsequences and more predictable structures in bid-ask spreads. However, algorithmic involvement is also associated with more future uncertainty, which seems contradictory at first sight. On the micro-level, traders employ algorithms to reduce their local uncertainty by creating more complex algorithmic patterns. This entails more predictable structure and more complexity. On the macro-level, the increased overall complexity implies more combinatorial possibilities, and therefore more uncertainty about the future. The chain rule of entropy reveals that uncertainty has been reduced when trading on the level of the fourth digit behind the dollar, while new uncertainty started to arise at the fifth digit behind the dollar (aka &lsquo;pip-trading&rsquo;). In short, our information-theoretic analysis clarifies that the seeming contradiction between decreased uncertainty on the micro-level and increased uncertainty on the macro-level results from the inherent relationship between complexity and uncertainty.
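The chain-rule decomposition invoked above, H(d4, d5) = H(d4) + H(d5 | d4), can be sketched on pairs of quote digits; the (fourth digit, fifth digit) pairs passed in are hypothetical placeholders for digits extracted from quoted prices:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits) of an observed symbol sequence."""
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

def chain_rule_terms(pairs):
    """Split H(d4, d5) into H(d4) + H(d5 | d4) for (4th digit, 5th digit)
    pairs taken from quoted prices, using the chain rule of entropy."""
    joint = entropy(pairs)
    first = entropy([a for a, _ in pairs])
    return first, joint - first      # H(d4), H(d5 | d4)
```

Comparing the two terms over time is what separates uncertainty that resides at the fourth digit from new uncertainty arising at the fifth, pip-trading, digit.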

]]>Entropy doi: 10.3390/e22050498

Authors: Frédéric Barbaresco François Gay-Balmaz

In this paper, we describe and exploit a geometric framework for Gibbs probability densities and the associated concepts in statistical mechanics, which unifies several earlier works on the subject, including Souriau&rsquo;s symplectic model of statistical mechanics, its polysymplectic extension, Koszul model, and approaches developed in quantum information geometry. We emphasize the role of equivariance with respect to Lie group actions and the role of several concepts from geometric mechanics, such as momentum maps, Casimir functions, coadjoint orbits, and Lie-Poisson brackets with cocycles, as unifying structures appearing in various applications of this framework to information geometry and machine learning. For instance, we discuss the expression of the Fisher metric in presence of equivariance and we exploit the property of the entropy of the Souriau model as a Casimir function to apply a geometric model for energy preserving entropy production. We illustrate this framework with several examples including multivariate Gaussian probability densities, and the Bogoliubov-Kubo-Mori metric as a quantum version of the Fisher metric for quantum information on coadjoint orbits. We exploit this geometric setting and Lie group equivariance to present symplectic and multisymplectic variational Lie group integration schemes for some of the equations associated with Souriau symplectic and polysymplectic models, such as the Lie-Poisson equation with cocycle.

]]>Entropy doi: 10.3390/e22050497

Authors: Lamya A. Baharith Kholod M. AL-Beladi Hadeel S. Klakattawi

This article introduces the odds exponential-Pareto IV distribution, which belongs to the odds family of distributions, and studies its statistical properties. The odds exponential-Pareto IV distribution provides decreasing, increasing, and upside-down hazard functions. We employ the maximum likelihood method to estimate the distribution parameters, and assess the estimators' performance through simulation studies. A new log location-scale regression model based on the odds exponential-Pareto IV distribution is also introduced. Parameter estimates of the proposed model are obtained using both maximum likelihood and jackknife methods for right-censored data. Real data sets are analyzed under the odds exponential-Pareto IV distribution and the log odds exponential-Pareto IV regression model to show their flexibility and potentiality.

Entropy doi: 10.3390/e22050496

Authors: Hyunjae Lee Eun Young Seo Hyosang Ju Sang-Hyo Kim

Neural network decoders (NNDs) for rate-compatible polar codes are studied in this paper. We consider a family of rate-compatible polar codes constructed from a single polar coding sequence, as defined for 5G New Radio. We propose a transfer learning technique for training the multiple NNDs of the rate-compatible polar codes that exploits their inclusion property: the trained NND for a low-rate code is taken as the initial state when training the NND for the code with the next-higher rate. Numerical results show that the proposed method trains more quickly than learning the NNDs separately. We additionally show that the underfitting problem of NND training caused by low model complexity can also be mitigated by the transfer learning technique.
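The inclusion property and the resulting training order can be sketched as follows. The reliability sequence below is a toy ordering for block length 8, not the actual 5G NR sequence, and `train` is a placeholder for real NND training; both are assumptions for illustration only.

```python
# Toy reliability sequence for N = 8 (most reliable index first); 5G NR
# defines one such sequence for all supported block lengths.
reliability = [7, 6, 5, 3, 4, 2, 1, 0]

def info_set(k):
    """Information set of the rate-k/8 polar code: the k most reliable indices."""
    return set(reliability[:k])

# Codes built from a single sequence are nested: each lower-rate
# information set is contained in every higher-rate one.
rates = [2, 4, 6]
info_sets = [info_set(k) for k in rates]
assert info_sets[0] <= info_sets[1] <= info_sets[2]

def train(initial_model, k):
    """Placeholder for NND training on the rate-k/8 code,
    starting from the given initial model (None = random init)."""
    return {"rate_k": k, "init_from": initial_model}

# Transfer-learning schedule: train the lowest-rate NND first, then
# initialize each subsequent NND from the previously trained one.
model = None
for k in rates:
    model = train(model, k)
```

The loop makes the ordering explicit: the final model for the rate-6/8 code was initialized from the rate-4/8 model, which in turn came from the rate-2/8 model.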

Entropy doi: 10.3390/e22050495

Authors: Nargis Khan Iram Riaz Muhammad Sadiq Hashmi Saed A. Musmar Sami Ullah Khan Zahra Abdelmalek Iskander Tlili

The appropriate management of entropy generation can reduce losses in the available energy of a nanofluid flow. The effects of chemical entropy generation in the axisymmetric flow of a Casson nanofluid between radiative stretching disks, in the presence of thermal radiation, chemical reaction, and heat absorption/generation, have been mathematically modeled and simulated under slip boundary conditions. The shooting method was employed to numerically solve the dimensionless form of the governing equations, including the expressions for entropy generation. The impacts of the physical parameters on the fluid velocity components, the temperature and concentration profiles, and the entropy generation number are presented. The simulation results reveal that the axial velocity component decreases with increasing Casson fluid parameter, and the Bejan number likewise declines as the Casson fluid parameter grows. In contrast, the Bejan number increases with the Prandtl number and the stretching ratio constant.
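The shooting method mentioned above converts a boundary value problem into repeated initial value problems: guess the unknown initial slope, integrate, and root-find on the far boundary condition. A minimal self-contained sketch on a simple linear BVP (not the paper's Casson nanofluid equations) illustrates the idea:

```python
import math

# Solve the BVP y'' = -y, y(0) = 0, y(pi/2) = 1 by shooting on the
# unknown initial slope s = y'(0). The exact solution is y = sin(x),
# so the sought slope is s = 1.

def integrate(s, x_end=math.pi / 2, n=2000):
    """RK4-integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = s;
    return y(x_end)."""
    h = x_end / n
    y, v = 0.0, s
    f = lambda y, v: (v, -y)  # first-order system: y' = v, v' = -y
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return y

def shoot(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    """Bisect on the initial slope until y(pi/2) matches the target
    boundary value (valid here because y(pi/2) is monotone in s)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = shoot()  # converges to the exact slope s = 1
```

For the coupled nonlinear momentum, energy, and concentration equations of the paper, the same pattern applies with a system of ODEs and several unknown initial conditions adjusted simultaneously.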
