Special Issue "Permutation Entropy & Its Interdisciplinary Applications"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: 31 July 2018

Special Issue Editor

Guest Editor
Dr. Osvaldo Anibal Rosso

1. Departamento de Bioingeniería, Instituto Tecnológico de Buenos Aires (ITBA), C1106ACD Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires, Argentina
2. Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970 Maceió, Alagoas, Brazil
3. Facultad de Ingeniería y Ciencias Aplicadas, Universidad de Los Andes, Santiago, Chile
Interests: time-series analysis; information theory; time–frequency transform; wavelet transform; entropy and complexity; non-linear dynamics and chaos; complex networks; medical and biological applications

Special Issue Information

Dear Colleagues,

Physics, as well as other scientific disciplines such as biology or finance, can be considered observational sciences; that is, they try to infer properties of an unfamiliar system from the analysis of a measured time record of its behavior (a time series). Dynamical systems are systems that evolve in time. In practice, one may only be able to measure a scalar time series X(t), which may be a function of the variables V = {v1, v2, …, vk} describing the underlying dynamics (i.e., dV/dt = f(V)). The natural question is then how much we can learn from X(t) about the dynamics of the system. More formally, given a system, be it natural or man-made, and given an observable of that system whose evolution can be tracked through time, how much information is this observable encoding about the dynamics of the underlying system? The information content of a system is typically evaluated via a probability distribution function (PDF) P describing the apportionment of some measurable or observable quantity, generally a time series X(t) = {xt, t = 1, …, M}. Quantifying the information content of a given observable is therefore largely tantamount to characterizing its probability distribution. This is often done with a wide family of measures called Information Theory quantifiers (i.e., Shannon entropy and generalized entropy forms, relative entropies, Fisher information, statistical complexity, etc.). We can define Information Theory quantifiers as measures able to characterize relevant properties of the PDF associated with these time series, and in this way judiciously extract information about the dynamical system under study.
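
As a minimal illustration of how such a quantifier is evaluated in practice, the following Python sketch (the choice of NumPy, the bin count, and the Gaussian surrogate series are our own illustrative assumptions) extracts a histogram-based PDF from a scalar time series and computes its normalized Shannon entropy:

```python
# A minimal sketch: a histogram-based PDF P from a time series X(t),
# followed by the normalized Shannon entropy S[P] / S_max.
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete PDF; zero-probability bins are skipped."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def histogram_pdf(x, n_bins=32):
    """Non-causal coarse-grained PDF: the amplitude histogram of the series."""
    counts, _ = np.histogram(x, bins=n_bins)
    return counts / counts.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)              # surrogate time series X(t)
p = histogram_pdf(x)
h = shannon_entropy(p) / np.log(p.size)  # normalized to [0, 1]
print(f"normalized Shannon entropy: {h:.3f}")
```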

The evaluation of Information Theory quantifiers supposes some prior knowledge about the system; specifically, a probability distribution associated with the time series under analysis should be provided beforehand. The determination of the most adequate PDF is a fundamental problem, because the PDF P and the sample space Ω are inextricably linked. Usual methodologies assign a symbol from a finite alphabet A to each time point of the series X(t), thus creating a symbolic sequence that can be regarded as a non-causal, coarse-grained description of the time series under consideration. As a consequence, order relations and the time scales of the dynamics are lost. The usual histogram technique corresponds to this kind of assignment. Causal information may be duly incorporated if information about the past dynamics of the system is included in the symbolic sequence, i.e., if symbols of alphabet A are assigned to a portion of the phase space or trajectory.
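
A small sketch (our own illustration, under the same assumptions as above) makes the loss of causal information concrete: shuffling a series destroys its temporal structure entirely, yet leaves the histogram PDF, and hence any quantifier computed from it, unchanged.

```python
# The histogram assignment is non-causal: a shuffled copy of the series
# has the same amplitude histogram, and therefore the same entropy.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(5_000)
x = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)  # ordered dynamics
x_shuffled = rng.permutation(x)                       # same amplitudes, order lost

for label, series in [("original", x), ("shuffled", x_shuffled)]:
    counts, _ = np.histogram(series, bins=32)
    p = counts[counts > 0] / counts.sum()
    print(f"{label}: histogram entropy = {-np.sum(p * np.log(p)):.4f}")
# Both entropies coincide: order relations and time scales are lost.
```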

Many methods have been proposed for a proper selection of the probability space (Ω, P). Among the non-causal, coarse-grained approaches, we can mention frequency counting, procedures based on amplitude statistics, binary symbolic dynamics, Fourier analysis, and wavelet transform. The suitability of each methodology depends on the peculiarities of the data, such as stationarity, the length of the series, the variation of the parameters, the level of noise contamination, etc. In all these cases, global aspects of the dynamics can be somehow captured, but the different approaches are not equivalent in their ability to discern all relevant physical details.

In a seminal paper, Bandt and Pompe (BP) [Permutation Entropy: A Natural Complexity Measure for Time Series. Phys. Rev. Lett. 2002, 88, 174102] introduced a simple and robust symbolic methodology that takes into account the time causality of the time series (a causal, coarse-grained methodology) by comparing neighboring values. The symbolic data are (i) created by ranking the values of the series; and (ii) defined by reordering the embedded data in ascending order, which is tantamount to a phase-space reconstruction with embedding dimension (pattern length) D ≥ 2, D ∈ ℕ, and time lag τ ∈ ℕ. In this way, it is possible to quantify the diversity of the ordering symbols (patterns) derived from a scalar time series. Note that the appropriate symbol sequence arises naturally from the time series, and no model-based assumptions are needed. In fact, the necessary “partitions” are devised by comparing the order of neighboring relative values rather than by apportioning amplitudes according to different levels. This technique, as opposed to most of those in current practice, takes into account the temporal structure of the time series generated by the physical process under study. As such, it allows us to uncover important details concerning the ordinal structure of the time series and can also yield information about temporal correlation.
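
A minimal sketch of the BP procedure, assuming the standard convention that each window of D values (taken with lag τ) is mapped to the permutation that sorts it in ascending order, could look as follows; the function and parameter names are our own:

```python
# Bandt-Pompe ordinal-pattern PDF and (normalized) permutation entropy.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, D=3, tau=1, normalize=True):
    """Permutation entropy of a scalar series x, pattern length D, lag tau."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (D - 1) * tau
    patterns = Counter()
    for i in range(n_windows):
        window = x[i : i + (D - 1) * tau + 1 : tau]    # D values, lag tau
        patterns[tuple(np.argsort(window, kind="stable"))] += 1
    p = np.array(list(patterns.values())) / n_windows  # ordinal-pattern PDF
    h = -np.sum(p * np.log(p))
    return h / math.log(math.factorial(D)) if normalize else h

rng = np.random.default_rng(2)
print(permutation_entropy(rng.normal(size=10_000)))          # white noise: near 1
print(permutation_entropy(np.sin(0.1 * np.arange(10_000))))  # regular signal: low
```

White noise visits all D! patterns with roughly equal frequency, while a regular signal concentrates on a few patterns; this contrast is exactly the diversity of ordering symbols referred to above.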

It is clear that this type of analysis entails losing some of the original series' amplitude information. Nevertheless, by referring only to the series' intrinsic structure, BP achieved a meaningful reduction in the difficulty of describing complex systems. The symbolic representation of time series by recourse to a comparison of consecutive (τ = 1) or nonconsecutive (τ > 1) values allows for an accurate empirical reconstruction of the underlying phase space, even in the presence of weak (observational and dynamic) noise. Furthermore, the ordinal patterns associated with the PDF are invariant with respect to nonlinear monotonic transformations. Accordingly, nonlinear drifts or scalings artificially introduced by a measurement device will not modify the estimation of the quantifiers, a nice property when dealing with experimental data. These advantages make the BP methodology more convenient than conventional methods based on range partitioning, i.e., a PDF based on histograms.
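
The invariance property can be checked directly in a short sketch (again our own illustration): applying a strictly increasing nonlinear transformation, here the exponential, changes the amplitudes but not the ranks within each window, so the ordinal patterns are identical.

```python
# Ordinal patterns depend only on ranks, so they are invariant under
# strictly increasing (monotonic) nonlinear transformations such as exp.
import numpy as np

def ordinal_patterns(x, D, tau):
    n = len(x) - (D - 1) * tau
    return [tuple(np.argsort(x[i : i + (D - 1) * tau + 1 : tau])) for i in range(n)]

rng = np.random.default_rng(3)
x = rng.normal(size=1_000)
assert ordinal_patterns(x, D=4, tau=1) == ordinal_patterns(np.exp(x), D=4, tau=1)
print("ordinal patterns unchanged under a monotonic nonlinear transform")
```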

Additional advantages of the method reside in (i) its simplicity (it requires only two parameters: the pattern length/embedding dimension D and the time lag τ) and (ii) the extremely fast nature of the calculation process. The BP methodology can be applied not only to time series representative of low-dimensional dynamical systems, but also to any type of time series (regular, chaotic, noisy, or reality-based). In fact, the existence of an attractor in the D-dimensional phase space is not assumed. The only condition for the applicability of the BP method is a very weak stationarity assumption: for k ≥ D, the probability that xt < xt+k should not depend on t.
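
This weak stationarity condition can be probed empirically; the sketch below (a rough check of our own, not a formal test) estimates the frequency of xt < xt+k on disjoint segments and verifies that it does not drift with t:

```python
# Empirical check of the weak stationarity assumption: for k >= D,
# the frequency of x_t < x_{t+k} should be roughly independent of t.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=12_000)   # surrogate series; replace with real data
D, k = 3, 5                   # k >= D

for start in range(0, len(x) - k, 4_000):  # three disjoint segments
    seg = x[start : start + 4_000]
    freq = np.mean(seg[:-k] < seg[k:])
    print(f"segment at t = {start}: P(x_t < x_t+k) ~ {freq:.3f}")
```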

In summary, the Bandt–Pompe proposal for associating probability distributions with time series (of an underlying symbolic nature) constitutes a significant advance in the study of complex dynamical systems, as well as a clear improvement in the quality of Information Theory-based quantifiers. The power and usefulness of the Bandt–Pompe approach have been validated in many subsequent papers, as shown by the rapid growth in the number of citations of the cornerstone paper over time. Many extensions of the original methodology have been proposed in order to include the time series' amplitude in the patterns' contributions, as well as extensions to multichannel time series, among others. Applications of the Bandt–Pompe permutation PDF span a great variety of fields, such as nonlinear dynamics and stochastic system descriptions; the physics of lasers; mechanical engineering; plasma physics; climate time series; econophysics; neural dynamics; brain activity and epilepsy; electrocardiography; and anesthesia, to cite just some of the many interdisciplinary applications.

Dr. Osvaldo Anibal Rosso
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Information Theory Quantifiers
  • Time Causality
  • Permutation Entropy
  • Interdisciplinary Applications

Published Papers (5 papers)


Research

Open Access Article. Permutation Entropy: Too Complex a Measure for EEG Time Series?
Entropy 2017, 19(12), 692; doi:10.3390/e19120692
Received: 16 November 2017 / Revised: 11 December 2017 / Accepted: 13 December 2017 / Published: 16 December 2017
Abstract
Permutation entropy (PeEn) is a complexity measure that originated from dynamical systems theory. Specifically engineered to be robustly applicable to real-world data, the quantity has since been utilised for a multitude of time series analysis tasks. In electroencephalogram (EEG) analysis, value changes of PeEn correlate with clinical observations, among them the onset of epileptic seizures or the loss of consciousness induced by anaesthetic agents. Regarding this field of application, the present work suggests a relation between PeEn-based complexity estimation and spectral methods of EEG analysis: for ordinal patterns of three consecutive samples, the PeEn of an epoch of EEG appears to approximate the centroid of its weighted power spectrum. To substantiate this proposition, a systematic approach based on redundancy reduction is introduced and applied to sleep and epileptic seizure EEG. The interrelation demonstrated may aid the interpretation of PeEn in EEG, and may increase its comparability with other techniques of EEG analysis.

Open Access Article. Random Walk Null Models for Time Series Data
Entropy 2017, 19(11), 615; doi:10.3390/e19110615
Received: 6 October 2017 / Revised: 10 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
Abstract
Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity using the deviation of a time series from an associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including a variety of stock market closing prices.

Open Access Article. Complexity-Entropy Maps as a Tool for the Characterization of the Clinical Electrophysiological Evolution of Patients under Pharmacological Treatment with Psychotropic Drugs
Entropy 2017, 19(10), 540; doi:10.3390/e19100540
Received: 26 July 2017 / Revised: 5 October 2017 / Accepted: 6 October 2017 / Published: 13 October 2017
Abstract
In clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings is sometimes insufficient and takes too much time. Tools coming from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method: the permutation Lempel–Ziv Complexity vs. Permutation Entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during respective follow-ups of pharmacological changes in order to detect alterations that are not evident with the usual inspection method. The method allows for comparisons between different states of the patients' treatment and with a healthy control group, giving global information about the signal and supplementing the traditional method of visual inspection of EEG.

Open Access Article. Characterizing Complexity Changes in Chinese Stock Markets by Permutation Entropy
Entropy 2017, 19(10), 514; doi:10.3390/e19100514
Received: 23 August 2017 / Revised: 20 September 2017 / Accepted: 20 September 2017 / Published: 24 September 2017
Abstract
Financial time series analyses have played an important role in developing some of the fundamental economic theories. However, many published analyses of financial time series focus on the long-term average behavior of a market, and thus shed little light on the temporal evolution of a market, which from time to time may be interrupted by stock crashes and financial crises. Consequently, in terms of complexity science, it is still unknown whether market complexity decreases or increases during a stock crash. To answer this question, we have examined the temporal variation of permutation entropy (PE) in Chinese stock markets by computing PE from high-frequency composite indices of two stock markets: the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). We have found that PE decreased significantly in two time windows, each encompassing a rapid market rise and then a few gigantic stock crashes. One window started in the middle of 2006, long before the 2008 global financial crisis, and continued up to early 2011. The other window was more recent, starting in the middle of 2014 and ending in the middle of 2016. Since both windows were at least one year long and preceded the stock crashes by at least half a year, the decrease in PE can serve as an invaluable warning sign for regulators and investors alike.

Open Access Article. Pretreatment and Wavelength Selection Method for Near-Infrared Spectra Signal Based on Improved CEEMDAN Energy Entropy and Permutation Entropy
Entropy 2017, 19(7), 380; doi:10.3390/e19070380
Received: 26 June 2017 / Revised: 14 July 2017 / Accepted: 22 July 2017 / Published: 24 July 2017
Abstract
The noise in near-infrared spectra and spectral information redundancy can affect the accuracy of calibration and prediction models in near-infrared analytical technology. To address this problem, improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and permutation entropy (PE) were used to propose a new method for the pretreatment and wavelength selection of near-infrared spectra signals. The near-infrared spectra of a glucose solution were used as the research object; the improved CEEMDAN energy entropy was used to reconstruct the spectral data for noise removal, and the useful wavelengths were selected based on PE after spectral segmentation. First, the intrinsic mode functions of the original spectra were obtained by the improved CEEMDAN algorithm. The useful signal modes and noisy signal modes were then identified by the energy entropy, and the reconstructed spectral signal is the sum of the useful signal modes. Finally, the reconstructed spectra were segmented, and the wavelengths with abundant glucose information were selected based on PE. To evaluate the performance of the proposed method, support vector regression and partial least squares regression were used to build calibration models using the wavelengths selected by the new method, mutual information, the successive projection algorithm, principal component analysis, and the full spectral data. The models were evaluated by the correlation coefficient and the root mean square error of prediction. The experimental results showed that the improved CEEMDAN energy entropy can effectively reconstruct the near-infrared spectral signal and that PE can effectively address wavelength selection. Therefore, the proposed method can improve the precision of spectral analysis and the stability of models for near-infrared spectral analysis.