Open Access Article
The Power Law Characteristics of Stock Price Jump Intervals: An Empirical and Computational Experimental Study
Entropy 2018, 20(4), 304; doi:10.3390/e20040304
Abstract
For the first time, power law characteristics of stock price jump intervals are found empirically to hold generally across stock markets. The classical jump-diffusion model is accordingly generalized to a jump-diffusion model with power law (JDMPL). An artificial stock market (ASM) is designed in which agents’ investment strategies, risk appetites, learning abilities, adaptability, and dynamic changes are considered to create a dynamically changing environment. An analysis of the data packets from the ASM simulation indicates that, with the learning mechanism, the ASM reflects the kurtosis and fat-tailed distribution characteristics commonly observed in real markets. Data packets obtained from simulating the ASM for 5010 periods are incorporated into a regression analysis. The results indicate that the JDMPL effectively characterizes the stock price jumps in the market. They also support the hypothesis that the time intervals between stock price jumps follow a power law, and indicate that the diversity and dynamic changes of agents’ investment strategies are the reasons for the discontinuity in stock price changes. Full article
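To make the contrast concrete between the classical jump-diffusion assumption of exponentially distributed jump intervals and the power-law intervals posited by the JDMPL, the following minimal sketch samples both kinds of waiting times and compares their tails; the intensity, tail exponent, and sample size are illustrative assumptions, not values taken from the paper.

# Illustrative sketch only: compare exponentially distributed jump intervals
# (classical Poisson jump-diffusion) with power-law distributed intervals, as
# hypothesized for the JDMPL. Exponent, intensity, and sample size are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
lam = 1.0          # assumed jump intensity for the Poisson case
alpha = 1.5        # assumed power-law tail exponent, P(t) ~ t^(-1-alpha)

poisson_intervals = rng.exponential(1.0 / lam, n)
powerlaw_intervals = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # Pareto with x_min = 1

for name, x in [("exponential", poisson_intervals), ("power law", powerlaw_intervals)]:
    print(f"{name:>11}: mean = {x.mean():.2f}, P(interval > 20) = {np.mean(x > 20):.4f}")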
Open Access Article
The Conservation of Average Entropy Production Rate in a Model of Signal Transduction: Information Thermodynamics Based on the Fluctuation Theorem
Entropy 2018, 20(4), 303; doi:10.3390/e20040303
Abstract
Cell signal transduction is a non-equilibrium process characterized by the reaction cascade. This study aims to quantify and compare signal transduction cascades using a model of signal transduction. The signal duration was found to be linked to step-by-step transition probability, which was determined using information theory. By applying the fluctuation theorem for reversible signal steps, the transition probability was described using the average entropy production rate. Specifically, when the signal event number during the cascade was maximized, the average entropy production rate was found to be conserved during the entire cascade. This approach provides a quantitative means of analyzing signal transduction and identifies an effective cascade for a signaling network. Full article
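As background for how a fluctuation theorem links a step's transition probability to entropy production, the generic detailed fluctuation theorem (a textbook form, not the paper's specific signal-transduction model) reads P(+ΔS)/P(−ΔS) = exp(ΔS/k_B), which implies ⟨ΔS⟩ ≥ 0. Under the additional assumption that a signal step goes forward with probability p and backward with probability 1 − p, the same relation gives the entropy produced by that step as ΔS = k_B ln[p/(1 − p)], illustrating how a step-by-step transition probability can be expressed through an entropy production.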
Open Access Article
Location-Aware Incentive Mechanism for Traffic Offloading in Heterogeneous Networks: A Stackelberg Game Approach
Entropy 2018, 20(4), 302; doi:10.3390/e20040302
Abstract
This article investigates the traffic offloading problem in heterogeneous networks. The location of small cells is considered an important factor in two respects: the amount of resources they share with offloaded macrocell users and the performance enhancement they bring after offloading. A location-aware incentive mechanism is therefore designed to incentivize small cells to serve macrocell users. The reward division is based not on the amount of resources shared but on the performance improvement brought to the macro network. Meanwhile, in order to ensure the priority of small cell users, they are weighted more heavily than macrocell users rather than being treated equally. The offloading problem is formulated as a Stackelberg game in which the macrocell base station is the leader and the small cells are followers. The Stackelberg equilibrium of the game is proved to exist, to be unique, and to be the optimum of the proposed problem. Simulation and numerical results verify the effectiveness of the proposed method. Full article
Open Access Feature Paper Article
Extended Thermodynamics of Rarefied Polyatomic Gases: 15-Field Theory Incorporating Relaxation Processes of Molecular Rotation and Vibration
Entropy 2018, 20(4), 301; doi:10.3390/e20040301
Abstract
After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The theory is justified, at mesoscopic level, by a generalized Boltzmann equation in which the distribution function depends on two internal variables that take into account the energy exchange among the different molecular modes of a gas, that is, translational, rotational, and vibrational modes. By adopting the generalized Bhatnagar, Gross and Krook (BGK)-type collision term, we derive explicitly the closed system of field equations with the use of the Maximum Entropy Principle (MEP). The NSF theory is derived from the RET theory as a limiting case of small relaxation times via the Maxwellian iteration. The relaxation times introduced in the theory are shown to be related to the shear and bulk viscosities and heat conductivity. Full article
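For orientation, the standard single-relaxation-time BGK model replaces the Boltzmann collision integral with a relaxation toward the local equilibrium distribution, ∂f/∂t + v_i ∂f/∂x_i = −(1/τ)(f − f^eq), where τ is the relaxation time and f^eq the local (Maxwellian) equilibrium distribution. The generalized BGK-type term used in the paper extends this idea with separate relaxation processes for the rotational and vibrational modes; the single-τ form above is quoted here only as the familiar reference point.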
Open Access Article
Information-Length Scaling in a Generalized One-Dimensional Lloyd’s Model
Entropy 2018, 20(4), 300; doi:10.3390/e20040300
Abstract
We perform a detailed numerical study of the localization properties of the eigenfunctions of one-dimensional (1D) tight-binding wires with on-site disorder characterized by long-tailed distributions: for large ϵ, P(ϵ) ∝ 1/ϵ^(1+α) with α ∈ (0,2], where ϵ are the on-site random energies. Our model serves as a generalization of 1D Lloyd’s model, which corresponds to α = 1. In particular, we demonstrate that the information length β of the eigenfunctions follows the scaling law β = γx/(1+γx), with x = ξ/L and γ ≡ γ(α). Here, ξ is the eigenfunction localization length (that we extract from the scaling of Landauer’s conductance) and L is the wire length. We also report that for α = 2 the properties of the 1D Anderson model are effectively reproduced. Full article
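The reported scaling law is easy to visualize numerically. The sketch below generates synthetic (x, β) pairs from β = γx/(1+γx) and recovers γ with a least-squares fit; the chosen γ, noise level, and x range are illustrative assumptions, and the tight-binding diagonalization itself is not reproduced.

# Illustrative sketch: recover gamma in beta = gamma*x / (1 + gamma*x) from
# synthetic data. gamma, noise level, and x range are assumed, not the paper's.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(x, gamma):
    return gamma * x / (1.0 + gamma * x)

rng = np.random.default_rng(1)
x = np.logspace(-2, 2, 60)          # x = xi / L (localization length over wire length)
gamma_true = 2.5                    # assumed value for the demonstration
beta = scaling_law(x, gamma_true) + 0.01 * rng.standard_normal(x.size)

gamma_fit, _ = curve_fit(scaling_law, x, beta, p0=[1.0])
print(f"fitted gamma = {gamma_fit[0]:.3f} (true value {gamma_true})")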
Open Access Article
Quantum Nonlocality and Quantum Correlations in the Stern–Gerlach Experiment
Entropy 2018, 20(4), 299; doi:10.3390/e20040299
Abstract
The Stern–Gerlach experiment (SGE) is one of the foundational experiments in quantum physics. It has been used in both the teaching and the development of quantum mechanics. However, for various reasons, some of its quantum features and implications are not fully addressed or comprehended in the current literature. Hence, the main aim of this paper is to demonstrate that the SGE possesses a quantum nonlocal character that has not previously been visualized or presented. Accordingly, to exhibit the nonlocality in the SGE, we calculate the quantum correlation C(z,θ) by redefining the Banaszek–Wódkiewicz correlation in terms of the Wigner operator, that is, C(z,θ) = ⟨Ψ|Ŵ(z,p_z) ⊗ σ̂(θ)|Ψ⟩, where Ŵ(z,p_z) is the Wigner operator, σ̂(θ) is the Pauli spin operator in an arbitrary direction θ, and |Ψ⟩ is the quantum state, given by an entangled state of the external degree of freedom and the eigenstates of the spin. We show that this correlation function for the SGE violates the Clauser–Horne–Shimony–Holt Bell inequality. Thus, this feature of the SGE might be interesting both for the teaching of quantum mechanics and for investigating the phenomenon of quantum nonlocality. Full article
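For reference, the Clauser–Horne–Shimony–Holt (CHSH) inequality that the correlation is reported to violate combines four correlators with two settings per side, S = |C(a,b) + C(a,b′) + C(a′,b) − C(a′,b′)| ≤ 2, which any local hidden-variable model must satisfy, while quantum mechanics allows values up to 2√2 (the Tsirelson bound). In the SGE analysis the settings correspond to pairs (z, θ); this inequality is quoted here only as the standard benchmark, not as the paper's derivation.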
Open Access Feature Paper Article
Calculation of Configurational Entropy in Complex Landscapes
Entropy 2018, 20(4), 298; doi:10.3390/e20040298
Abstract
Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal to the logarithm of the number of ways a lattice with a given dimensionality and number of classes can be arranged to produce the same total amount of edge between cells of different classes. However, that work seemed to also suggest that the feasibility of applying this method to real landscapes was limited due to intractably large numbers of possible arrangements of raster cells in large landscapes. Here I extend that work by showing that: (1) the proportion of arrangements rather than the number with a given amount of edge length provides a means to calculate unbiased relative configurational entropy, obviating the need to compute all possible configurations of a landscape lattice; (2) the edge lengths of randomized landscape mosaics are normally distributed, following the central limit theorem; and (3) given this normal distribution it is possible to fit parametric probability density functions to estimate the expected proportion of randomized configurations that have any given edge length, enabling the calculation of configurational entropy on any landscape regardless of size or number of classes. I evaluate the boundary limits (4) for this normal approximation for small landscapes with a small proportion of a minority class and show it holds under all realistic landscape conditions. I further (5) demonstrate that this relationship holds for a sample of real landscapes that vary in size, patch richness, and evenness of area in each cover type, and (6) I show that the mean and standard deviation of the normally distributed edge lengths can be predicted nearly perfectly as a function of the size, patch richness and diversity of a landscape. Finally, (7) I show that the configurational entropy of a landscape is highly related to the dimensionality of the landscape, the number of cover classes, the evenness of landscape composition across classes, and landscape heterogeneity. These advances provide a means for researchers to directly estimate the frequency distribution of all possible macrostates of any observed landscape, and then directly calculate the relative configurational entropy of the observed macrostate, and to understand the ecological meaning of different amounts of configurational entropy. These advances enable scientists to take configurational entropy from a concept to an applied tool to measure and compare the disorder of real landscapes with an objective and unbiased measure based on entropy and the second law. Full article
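Steps (1)–(3) above can be illustrated with a short Monte Carlo sketch: shuffle the cells of a raster while keeping its composition fixed, record the total amount of edge between unlike classes, fit a normal distribution to the randomized edge lengths, and read off the relative likelihood of the observed configuration. The lattice size, number of classes, and number of permutations below are illustrative assumptions, not the settings used in the paper.

# Illustrative sketch of steps (1)-(3): relative configurational entropy of a
# small raster from a normal approximation to the randomized edge-length
# distribution. Sizes, proportions, and permutation count are assumed.
import numpy as np
from scipy.stats import norm

def edge_length(lattice):
    """Count 4-neighbour adjacencies between cells of different classes."""
    horiz = np.sum(lattice[:, 1:] != lattice[:, :-1])
    vert = np.sum(lattice[1:, :] != lattice[:-1, :])
    return horiz + vert

rng = np.random.default_rng(42)
size, n_classes = 32, 3
observed = rng.integers(0, n_classes, size=(size, size))   # stand-in "observed" landscape
obs_edge = edge_length(observed)

# Randomize the same cell composition many times and record the edge lengths.
cells = observed.ravel()
rand_edges = np.array([edge_length(rng.permutation(cells).reshape(size, size))
                       for _ in range(2000)])

mu, sigma = rand_edges.mean(), rand_edges.std(ddof=1)
p_obs = norm.pdf(obs_edge, mu, sigma)   # fitted density at the observed edge length
rel_entropy = np.log(p_obs)             # relative (unnormalized) configurational entropy
print(f"observed edge = {obs_edge}, randomized mean = {mu:.1f} +/- {sigma:.1f}, ln p = {rel_entropy:.2f}")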
Open Access Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; doi:10.3390/e20040297
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
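The split referred to above can be stated compactly: although pointwise mutual information can be negative, it is always the difference of two non-negative surprisal terms, i(s;t) = log[p(s|t)/p(s)] = h(s) − h(s|t), with h(s) = −log p(s) and h(s|t) = −log p(s|t) both non-negative while their difference may take either sign. The paper applies the lattice construction separately to these two unsigned components, which it terms the specificity and the ambiguity; the identity above is quoted only as orientation, not as the paper's full construction.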
Open Access Article
Image Clustering with Optimization Algorithms and Color Space
Entropy 2018, 20(4), 296; doi:10.3390/e20040296
Abstract
In image clustering, pixels assigned to the same class should be identical or similar; in other words, the homogeneity of a cluster must be high. In grayscale image segmentation, this goal is achieved by increasing the number of thresholds. However, determining multiple thresholds is a non-trivial problem, and conventional thresholding algorithms cannot be applied directly to color image segmentation. In this study, a new color image clustering algorithm with multilevel thresholding is presented, and it is shown how multilevel thresholding techniques can be used for color image clustering. Initially, threshold selection techniques such as the Otsu and Kapur methods are employed for each color channel separately. The objective functions of both approaches are integrated with the forest optimization algorithm (FOA) and the particle swarm optimization (PSO) algorithm. In the next stage, the thresholds determined by the optimization algorithms are used to divide the color space into small cubes or prisms, and each sub-cube or prism created in the color space is treated as a cluster. As the volume of the prisms affects the homogeneity of the resulting clusters, multiple thresholds are employed to reduce the sizes of the sub-cubes. The performance of the proposed method was tested on different images, and the results obtained were more efficient than those of conventional methods. Full article
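A minimal sketch of the per-channel thresholding idea is given below using the classical Otsu method from scikit-image: one threshold per RGB channel splits the color space into 2³ = 8 sub-cubes, each treated as a cluster. The FOA/PSO multilevel optimization and the Kapur criterion described in the paper are not reproduced, and the input image here is synthetic.

# Minimal sketch (assumptions noted): a single Otsu threshold per RGB channel,
# then each pixel is assigned to the colour-space "cube" (cluster) defined by
# its channel-wise threshold pattern. The paper's FOA/PSO multilevel
# optimization and the Kapur criterion are not reproduced here.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)   # synthetic RGB image

labels = np.zeros(img.shape[:2], dtype=int)
for c in range(3):
    t = threshold_otsu(img[..., c])                  # one threshold per channel
    labels += (img[..., c] > t).astype(int) << c     # 2^3 = 8 colour-space cubes

print("cluster sizes:", np.bincount(labels.ravel(), minlength=8))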
Open Access Article
A Novel Algorithm to Improve Digital Chaotic Sequence Complexity through CCEMD and PE
Entropy 2018, 20(4), 295; doi:10.3390/e20040295
Abstract
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross section, and the equilibria and initial value sensitivity are analyzed by the method of numerical simulation. Further, we designed a new algorithm based on complementary ensemble empirical mode decomposition (CEEMD) and permutation entropy (PE) that can effectively enhance digital chaotic sequence complexity. In addition, an image encryption experiment was performed with post-processing of the chaotic binary sequences by the new algorithm. The experimental results show good performance of the chaotic binary sequence. Full article
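Since permutation entropy (PE) is the complexity measure referred to above, a compact reference implementation of standard Bandt–Pompe PE is sketched below; the embedding order, delay, and test signals are illustrative choices, and the CEEMD stage of the proposed algorithm is not included.

# Sketch of standard (Bandt-Pompe) permutation entropy; order, delay, and the
# test signals are illustrative. The CEEMD post-processing stage is not shown.
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # ordinal pattern of each embedded vector
    patterns = np.array([tuple(np.argsort(x[i:i + order * delay:delay]))
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h

rng = np.random.default_rng(0)
print("white noise PE:", round(permutation_entropy(rng.standard_normal(5000)), 3))
print("sine wave PE  :", round(permutation_entropy(np.sin(np.linspace(0, 60, 5000))), 3))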
Open Access Feature Paper Article
A Lenient Causal Arrow of Time?
Entropy 2018, 20(4), 294; doi:10.3390/e20040294
Abstract
One of the basic assumptions underlying Bell’s theorem is the causal arrow of time, having to do with temporal order rather than spatial separation. Nonetheless, the physical assumptions regarding causality are seldom studied in this context, and often even go unmentioned, in stark contrast with the many different possible locality conditions which have been studied and elaborated upon. In the present work, some retrocausal toy-models which reproduce the predictions of quantum mechanics for Bell-type correlations are reviewed. It is pointed out that a certain toy-model which is ostensibly superdeterministic—based on denying the free-variable status of some of quantum mechanics’ input parameters—actually contains within it a complete retrocausal toy-model. Occam’s razor thus indicates that the superdeterministic point of view is superfluous. A challenge is to generalize the retrocausal toy-models to a full theory—a reformulation of quantum mechanics—in which the standard causal arrow of time would be replaced by a more lenient one: an arrow of time applicable only to macroscopically-available information. In discussing such a reformulation, one finds that many of the perplexing features of quantum mechanics could arise naturally, especially in the context of stochastic theories. Full article
Open Access Article
Entropy Production on the Gravity-Driven Flow with Free Surface Down an Inclined Plane Subjected to Constant Temperature
Entropy 2018, 20(4), 293; doi:10.3390/e20040293
Abstract
The long-wave approximation of a falling film down an inclined plane held at constant temperature is used to investigate the volumetric averaged entropy production. The velocity and temperature fields are computed numerically from the evolution equation for the deformable free interface. The dynamics of the falling film play an important role in the entropy production: when the layer evolves unstably, the entropy production by fluid friction is much larger than that of a film with a stable flat interface. As heat is transferred actively from the free surface to the ambient air, the temperature gradient inside the flowing film becomes large and the entropy generation by heat transfer increases. The contribution of fluid friction to the volumetric averaged entropy production is larger than that of heat transfer at moderate and high viscous dissipation parameters. Full article
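The two competing contributions discussed above correspond, in the standard local form for an incompressible Newtonian fluid, to a heat-transfer term and a fluid-friction term, S‴_gen = (k/T²)|∇T|² + (μ/T)Φ, where k is the thermal conductivity, μ the dynamic viscosity, and Φ the viscous dissipation function. The volume-averaged, long-wave version analyzed in the paper is more involved, so this expression is given only for orientation.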
Open Access Article
Generalized Weyl–Heisenberg Algebra, Qudit Systems and Entanglement Measure of Symmetric States via Spin Coherent States
Entropy 2018, 20(4), 292; doi:10.3390/e20040292
Abstract
A relation is established in the present paper between Dicke states in a d-dimensional space and vectors in the representation space of a generalized Weyl–Heisenberg algebra of finite dimension d. This provides a natural way to deal with the separable and entangled states of a system of N = d − 1 symmetric qubit states. Using the decomposition property of Dicke states, it is shown that the separable states coincide with the Perelomov coherent states associated with the generalized Weyl–Heisenberg algebra considered in this paper. In the so-called Majorana scheme, the qudit (d-level) states are represented by N points on the Bloch sphere; roughly speaking, it can be said that a qudit (in a d-dimensional space) is describable by an N-qubit vector (in an N-dimensional space). In such a scheme, the permanent of the matrix describing the overlap between the N qubits makes it possible to measure the entanglement between the N qubits forming the qudit. This is confirmed by a Fubini–Study metric analysis. A new parameter, proportional to the permanent and called perma-concurrence, is introduced for characterizing the entanglement of a symmetric qudit arising from N qubits. For d = 3 (N = 2), this parameter constitutes an alternative to the concurrence for two qubits. Other examples are given for d = 4 and 5. A connection between Majorana stars and zeros of a Bargmann function for qudits closes this article. Full article
Open Access Article
Measurement-Device Independency Analysis of Continuous-Variable Quantum Digital Signature
Entropy 2018, 20(4), 291; doi:10.3390/e20040291
Abstract
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are receiving increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while different protocols face different security problems. Considering the importance of the quantum digital signature in quantum cryptography, in this paper we attempt to analyze the measurement-device independency of the continuous-variable quantum digital signature, in particular the continuous-variable quantum homomorphic signature. Firstly, we calculate the upper bound of the error rate of a protocol; if this bound is negligible even when all measurement devices are untrusted, the protocol is deemed measurement-device-independent. Then, we simplify the calculation by using the characteristics of continuous variables and prove the measurement-device independency of the protocol according to the result. In addition, the proposed analysis method can be extended to other quantum cryptographic protocols beyond the continuous-variable quantum homomorphic signature. Full article
Open Access Article
Optimization of CNN through Novel Training Strategy for Visual Classification Problems
Entropy 2018, 20(4), 290; doi:10.3390/e20040290
Abstract
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, and detection. However, the global optimization of CNN training remains a problem, and fast classification and training play a key role in the development of CNNs. We hypothesize that the smoother and more optimized the training of a CNN is, the more efficient the end result becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining, and it is combined with the global-best concept in the weight-updating criteria to allow the CNN training algorithm to optimize its weights more swiftly and precisely. For comparison, we present and analyze four different CNN training algorithms alongside MRPROP, i.e., resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset. Full article
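For readers unfamiliar with resilient backpropagation, the sketch below shows a standard sign-based RPROP parameter update (the iRPROP− variant); the paper's MRPROP additions, namely the tolerant band and the global-best coupling, are not reproduced here, and the step-size constants are the usual textbook choices.

# Sketch of a standard sign-based RPROP (iRPROP-) parameter update. The paper's
# MRPROP additions (tolerant band, global-best coupling) are NOT reproduced;
# eta values and step limits are the usual textbook choices.
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    sign_change = np.sign(grad) * np.sign(prev_grad)
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # suppress the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# toy usage: minimize f(w) = sum(w^2)
w = np.array([3.0, -2.0])
prev_grad = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(50):
    grad = 2 * w
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
print("w after 50 RPROP steps:", np.round(w, 4))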
Open Access Article
Statistical Reasoning: Choosing and Checking the Ingredients, Inferences Based on a Measure of Statistical Evidence with Some Applications
Entropy 2018, 20(4), 289; doi:10.3390/e20040289
Abstract
The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. This is seen to involve selection of a model, model checking, elicitation of a prior, checking the prior for bias, checking for prior-data conflict and estimation and hypothesis assessment inferences based on a measure of evidence. A long-standing anomalous example is resolved by this approach to inference and an application is made to a practical problem of considerable importance, which, among other novel aspects of the analysis, involves the development of a relevant elicitation algorithm. Full article
Open Access Editorial
Transfer Entropy
Entropy 2018, 20(4), 288; doi:10.3390/e20040288
Abstract
Statistical relationships among the variables of a complex system reveal a lot about its physical behavior [...] Full article
Open Access Article
Centered and Averaged Fuzzy Entropy to Improve Fuzzy Entropy Precision
Entropy 2018, 20(4), 287; doi:10.3390/e20040287
Abstract
Several entropy measures are now widely used to analyze real-world time series. Among them, we can cite approximate entropy, sample entropy and fuzzy entropy (FuzzyEn), the latter one being probably the most efficient among the three. However, FuzzyEn precision depends on the number of samples in the data under study. The longer the signal, the better it is. Nevertheless, long signals are often difficult to obtain in real applications. This is why we herein propose a new FuzzyEn that presents better precision than the standard FuzzyEn. This is performed by increasing the number of samples used in the computation of the entropy measure, without changing the length of the time series. Thus, for the comparisons of the patterns, the mean value is no longer a constraint. Moreover, translated patterns are not the only ones considered: reflected, inversed, and glide-reflected patterns are also taken into account. The new measure (so-called centered and averaged FuzzyEn) is applied to synthetic and biomedical signals. The results show that the centered and averaged FuzzyEn leads to more precise results than the standard FuzzyEn: the relative percentile range is reduced compared to the standard sample entropy and fuzzy entropy measures. The centered and averaged FuzzyEn could now be used in other applications to compare its performances to those of other already-existing entropy measures. Full article
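For comparison with the proposed variant, a compact sketch of one standard FuzzyEn computation is given below (exponential membership of Chebyshev distances between mean-centered templates); the reflected, inversed, and glide-reflected patterns that define the centered and averaged FuzzyEn are not included, and m, r, and the fuzzy exponent are the usual illustrative choices.

# Sketch of a standard fuzzy entropy (FuzzyEn) computation; the centered-and-
# averaged variant described in the paper is not reproduced. m, r, and the
# fuzzy exponent n are illustrative choices.
import numpy as np

def fuzzyen(x, m=2, r=0.2, n=2):
    x = np.asarray(x, dtype=float)
    r *= np.std(x)                                   # tolerance as a fraction of the signal SD
    def phi(dim):
        # embedded vectors with their own mean (baseline) removed
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs -= vecs.mean(axis=1, keepdims=True)
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)   # Chebyshev distances
        sim = np.exp(-(d ** n) / r)                  # fuzzy membership of each pair
        np.fill_diagonal(sim, 0.0)                   # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
print("FuzzyEn of white noise:", round(fuzzyen(rng.standard_normal(800)), 3))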
Open Access Article
A Co-Opetitive Automated Negotiation Model for Vertical Allied Enterprises Teams and Stakeholders
Entropy 2018, 20(4), 286; doi:10.3390/e20040286
Abstract
Upstream and downstream supply chain enterprises often form a tactical vertical alliance to enhance their operational efficiency and maintain their competitive edge in the market. Hence, it is critical for an alliance to collaborate over its internal resources and resolve the profit conflicts among members, so that the functionality required by stakeholders can be fulfilled. As an effective solution, automated negotiation between the vertically allied enterprise team and the stakeholder makes full use of emerging team advantages and significantly reduces profit conflicts within the team through group decisions rather than unilateral decisions by some leader. In this paper, an automated negotiation model is designed to describe both the collaborative game process among the team members and the competitive negotiation process between the allied team and the stakeholder. Considering the co-opetitive nature of the vertical alliance, the designed model helps the team members make decisions in their own interest, and the team counter-offers for the ongoing negotiation are generated through a non-cooperative game process, where the profit derived from the negotiation result is distributed with the Shapley value method according to the contribution or importance of each team member. Finally, a case study is given to verify the effectiveness of the designed model. Full article
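To make the profit-distribution step concrete, the sketch below computes Shapley values for a small, invented three-member vertical alliance (supplier, manufacturer, retailer); the characteristic-function values are hypothetical and are not taken from the paper.

# Sketch of the Shapley value profit split for a small illustrative coalition
# game; the coalition profits below are invented for the example.
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """value: dict mapping frozenset of players -> coalition profit."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            shapley[p] += value[coalition | {p}] - value[coalition]   # marginal contribution
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in shapley.items()}

# hypothetical 3-member vertical alliance: supplier (S), manufacturer (M), retailer (R)
v = {frozenset(): 0, frozenset({"S"}): 10, frozenset({"M"}): 20, frozenset({"R"}): 15,
     frozenset({"S", "M"}): 45, frozenset({"S", "R"}): 30, frozenset({"M", "R"}): 50,
     frozenset({"S", "M", "R"}): 90}
print(shapley_values(["S", "M", "R"], v))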
Open Access Comment
Maximum Entropy and Theory Construction: A Reply to Favretti
Entropy 2018, 20(4), 285; doi:10.3390/e20040285
Abstract
In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory. Full article