Article

Including the Magnitude Variability of a Signal in the Ordinal Pattern Analysis

Melvyn Tyloo, Joaquín González and Nicolás Rubido
1 Living Systems Institute, University of Exeter, Exeter EX4 4QD, UK
2 Department of Mathematics and Statistics, Faculty of Environment, Science, and Economy, University of Exeter, Exeter EX4 4QD, UK
3 Neuroscience Institute, Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
4 Institute for Complex Systems and Mathematical Biology, University of Aberdeen, King's College, Aberdeen AB24 3UE, UK
* Author to whom correspondence should be addressed.
Entropy 2025, 27(8), 840; https://doi.org/10.3390/e27080840
Submission received: 22 May 2025 / Revised: 12 July 2025 / Accepted: 23 July 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Ordinal Patterns-Based Tools and Their Applications)

Abstract

One of the most popular and innovative methods to analyse signals is by using Ordinal Patterns (OPs). The OP encoding is based on transforming a (univariate) signal into a symbolic sequence of OPs, where each OP represents the number of permutations needed to order a small subset of the signal’s magnitudes. This implies that OPs are conceptually clear, methodologically simple to implement, and robust to noise, and that they can be applied to short signals. Moreover, they simplify the statistical analyses that can be carried out on a signal, such as entropy and complexity quantifications. However, because of the relative ordering, information about the magnitude of the signal at each timestamp is lost—this being one of the major drawbacks of this method. Here, we propose a way to use the signal magnitudes discarded in the OP encoding as a complementary variable to its permutation entropy. To illustrate our approach, we analyse synthetic trajectories from logistic and Hénon maps—with and without added noise—and real-world signals, including intracranial electroencephalographic recordings from rats in different sleep-wake states and frequency fluctuations in power grids. Our results show that, when complementing the permutation entropy with the variability in the signal magnitudes, the characterisation of these signals is improved and the results remain explainable. This implies that our approach can be useful for feature engineering and improving AI classifiers, as typical machine learning algorithms need complementary signal features as inputs to improve classification accuracy.

1. Introduction

Since the beginning of the century, the boundaries of data mining have been pushed due to the growing ability to obtain larger and more precise data sets. With increasing data availability, we need to improve how we extract, manage, and analyse data [1] to uncover the underlying mechanisms that generate the data or to quantify its uncertainty.
Entropy measures the average information content, where information is understood as the degree of uncertainty of an outcome (as defined by Shannon [2]). If an outcome is highly unlikely to happen, then it carries significant information because it would be surprising to record it, such as the presence of an outlier or an artefact in a signal. Conversely, if an outcome is highly likely to happen, then it carries little information because one would expect it to appear, as for the successive values of a periodic signal. Hence, entropy is highest when all outcomes are equally likely to happen, corresponding to a uniform probability distribution that conveys the maximum uncertainty regarding all possible outcomes [2].
One of the most successful entropy methods introduced to characterise signals is the permutation entropy [3]. Permutation entropy quantifies the average information content of an Ordinal Pattern (OP) sequence, which is obtained from the signal by dividing it into a series of embedded vectors [3,4]. Each OP represents the number of permutations needed to order the signal magnitudes within each embedded vector. The resultant symbolic sequence is used to estimate the OP probability distribution. Statistical quantifications (known as OP analysis), such as quantifying the uncertainty and complexity of the signal, are easier to perform with these probabilities than with the original signal [5,6,7].
Due to its simplicity and robustness to noise, OP analysis (along with complexity calculations) has had remarkable success [8,9,10], being used to distinguish between chaotic and stochastic signals [11,12,13,14,15,16,17,18], as well as to characterise electrophysiological signals [19,20,21,22,23,24,25,26], laser dynamics [27,28,29,30,31,32,33,34], climate systems [35,36,37,38,39,40,41,42], and financial trends [43,44,45,46,47,48,49,50], to name a few. However, one of the main drawbacks in OP analysis is that the magnitude of the signal at each timestamp is discarded, solely keeping the ordinal relationship between the signal magnitudes.
To include this missing information, previous works have proposed modifications of the permutation entropy, such as the modified permutation entropy [51], the weighted-permutation entropy [52], the amplitude-aware permutation entropy [53], the improved permutation entropy [54], and continuous ordinal patterns [55]. These methods introduce ad hoc assumptions, which are supported by the effectiveness of the resultant modified permutation entropy measure in improving the characterisation of different datasets, but they lack a theoretical framework that can validate their usage, limits, and scalability in general scenarios.
Here, we propose an alternative approach: to use the standard deviation of the signal magnitudes within the OP embedded vectors as a complementary variable to the permutation entropy; specifically, the OP-averaged logarithm of these standard deviations. The formalism behind our approach is justified by the works of Politi [56,57], who showed that this OP-averaged standard deviation of the signal magnitudes is needed (along with the information dimension of the system [58]) for the permutation entropy of a signal to tend towards its Kolmogorov–Sinai (KS) entropy [59,60]. The KS entropy is a rigorously defined observable with invariant characteristics, contrary to the permutation entropy, which can depend on the signal length and the embedding choice. This provides fundamental grounding for our approach, which, instead of modifying the permutation entropy, evaluates two easily accessible contributions to the Kolmogorov–Sinai entropy.
Because this tendency towards the Kolmogorov–Sinai entropy [59,60] is only achieved when using the information dimension of the signal (which is difficult to estimate for finite real-world signals), we propose using this OP-averaged standard deviation as a complementary variable to the permutation entropy value, rather than as part of a measure that combines both; this removes the need to find the information dimension and increases the explainability of our results. In particular, we show that signal characterisation can be improved when using these standard deviations to complement the permutation entropy analysis, where we focus mostly on calculating the Rényi min-entropy [61]. Our conclusions are based on analysing numerically generated trajectories from coupled logistic [62,63,64] and Hénon [65] maps (with and without observational noise), real-world signals from intracranial electroencephalographic recordings of rats in different sleep–wake states, and frequency fluctuations from 4 locations in the European power grid.

2. Materials and Methods

2.1. Signals: General Notions

We only consider digital signals, i.e., signals where time is discrete and the magnitudes are quantised. These signals can be numerically generated (synthetic) or experimentally measured, their digital nature being due to the precision of the computer or to analogue-to-digital converters, respectively. The signals we analyse come from a pair of coupled logistic maps and a Hénon map (both synthetic bi-variate trajectories), from intracranial electroencephalographic (EEG) recordings of rats, and from frequency recordings at 4 locations in the European power grid.
We write a signal as $\{x_t\}_{t=1}^{T} = \{x_1, x_2, \ldots, x_T\}$, where $x_t$ is the magnitude at the discrete time index $t \in \mathbb{N}$, $x_1$ is the initial state, and $T$ is the length of the signal. A signal can be resampled using an embedding delay $\tau \in \mathbb{N}$, such that $\{x_1, x_2, \ldots, x_T\} \to \{x_1, x_{1+\tau}, \ldots, x_{1+n\tau}\}$, where $n = \lfloor (T-1)/\tau \rfloor$ is the largest integer not exceeding $(T-1)/\tau$. This resampling can filter out the high frequencies in a signal, but we set $\tau = 1$ for all our analyses.
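To make the resampling concrete, the following is a minimal Python sketch (not the authors' code; NumPy is assumed and the function name is ours) that keeps every $\tau$-th sample up to index $1 + n\tau$:
```python
# Minimal sketch (our naming): resample a signal with embedding delay tau,
# keeping x_1, x_{1+tau}, ..., x_{1+n*tau} with n = floor((T - 1) / tau).
# The paper sets tau = 1, which keeps the full signal.
import numpy as np

def resample(x, tau=1):
    x = np.asarray(x)
    n = (len(x) - 1) // tau        # largest integer not exceeding (T - 1) / tau
    return x[::tau][:n + 1]

# Example: a length-10 signal resampled with tau = 3 keeps the samples at t = 1, 4, 7, 10.
print(resample(np.arange(10), tau=3))   # -> [0 3 6 9]
```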

2.2. Synthetic Models: Map Iterates

We generate bi-variate signals from coupled, identical, logistic maps by iterating the following equations:
$$x_{t+1} = (1-\varepsilon)\,f(x_t) + \varepsilon\,f(y_t), \qquad y_{t+1} = (1-\varepsilon)\,f(y_t) + \varepsilon\,f(x_t), \tag{1}$$
where $f(z) = r\,z\,(1-z)$ is the logistic mapping (with $z = x_t$ or $y_t$ for $t = 1, \ldots, T$), $r \in (3, 4] \subset \mathbb{R}$ is the control parameter, and $\varepsilon \in [0, 1] \subset \mathbb{R}$ is the coupling strength between the maps. When $\varepsilon = 0$ in Equation (1), the $x$ and $y$ maps are decoupled, i.e., they are isolated. As $r$ is increased from $r = 3$ to $r = 4$, an isolated logistic map undergoes a series of period-doubling bifurcations, taking the solutions from periodic to chaotic trajectories [62]. When $0 < \varepsilon \leq 1$, the maps are coupled and the resultant trajectories can become more complex (including intermittent and hysteretic behaviours) [63,64].
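As an illustration of how such trajectories can be generated, the following Python sketch iterates Equation (1); the initial condition and transient follow the choices given below in this section, while the function name and the shortened default length are ours, not the authors' code.
```python
# Minimal sketch (our naming): iterate the coupled identical logistic maps of Equation (1),
# discarding an initial transient before storing the trajectories.
import numpy as np

def coupled_logistic(r, eps, x0=0.65, y0=0.44, T=10**5, transient=10**3):
    f = lambda z: r * z * (1.0 - z)        # logistic mapping f(z) = r z (1 - z)
    x, y = x0, y0
    xs, ys = np.empty(T), np.empty(T)
    for t in range(transient + T):
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        if t >= transient:
            xs[t - transient], ys[t - transient] = x, y
    return xs, ys

# Example: one of the regimes analysed in Figure 2 (r = 3.9, eps = 0.01).
x, y = coupled_logistic(r=3.9, eps=0.01)
```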
The Hénon map is given by [65]
$$x_{t+1} = 1 - a\,x_t^2 + y_t, \qquad y_{t+1} = b\,x_t, \tag{2}$$
where a and b are the control parameters, which, depending on their values, can generate periodic (e.g., when a = 1.0 and b = 0.3 ) or chaotic (e.g., when a = 1.4 and b = 0.3 ) trajectories.
For both maps (Equations (1) and (2)), we use the OP analysis of the $x$ component for different control parameter values (i.e., $r$, $\varepsilon$, and $a$), fixing its length to $T = 10^6$ after removing a transient of $\delta t = 10^3$ iterations from the initial condition $x_1 = 0.65$, $y_1 = 0.44$. In this way, we discard the bi-variate nature of these maps and focus on uni-variate signals. In Appendix A, we show the results when we analyse the $y$ component instead. To analyse the effects of observational noise, we add white Gaussian noise to these signals by independently drawing identically distributed random numbers from a normal distribution and changing its strength (i.e., its standard deviation).
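A corresponding sketch for the Hénon map, including the observational white Gaussian noise described above, could read as follows (again, our naming and defaults, not the authors' code):
```python
# Minimal sketch (our naming): iterate the Hénon map of Equation (2) and add
# observational white Gaussian noise of a chosen strength (standard deviation).
import numpy as np

def henon(a=1.4, b=0.3, x0=0.65, y0=0.44, T=10**5, transient=10**3):
    x, y = x0, y0
    xs = np.empty(T)
    for t in range(transient + T):
        x, y = 1.0 - a * x**2 + y, b * x
        if t >= transient:
            xs[t - transient] = x
    return xs

def add_observational_noise(signal, noise_std, seed=0):
    rng = np.random.default_rng(seed)
    return signal + rng.normal(0.0, noise_std, size=len(signal))

x_clean = henon(a=1.15)                                    # chaotic regime of Figure 4
x_noisy = add_observational_noise(x_clean, noise_std=1e-1)
```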

2.3. Animal Model: EEG Recordings

We use EEG recordings from 11 healthy and freely moving Wistar rats during their natural sleep–wake cycle, with free access to food and water within the (sound-attenuated and Faraday-shielded) recording box. These rats have intracranially implanted electrodes, with which we monitor active wakefulness (AW), rapid eye movement (REM) sleep, and non-REM (NREM) sleep. Details on the surgical procedure and experimental conditions can be found in Refs. [20,21,22]. The experiments are in agreement with Uruguay’s National Animal Care Law (No. 18611) and with the “Guide to the care and use of laboratory animals” [66]. These experiments were approved by the Institutional Animal Care Committee (Comisión de Etica en el Uso de Animales), Exp. No. 070153-000332-16.
The EEG signals we analyse are obtained by taking the difference between the electrode of interest and the Cerebellum (the reference). We focus on five electrodes, those bilaterally placed above the primary motor (M1) and somatosensory (S1) cortices, plus the right olfactory bulb (OB), discarding the two electrodes from the secondary visual (V2) cortex (because they are too close to the Cerebellum, which increases the relevance of observational noise). These EEGs have a sampling frequency of 1024 Hz and a resolution of 16 bits. To remove the degeneracies in the signal magnitudes due to the analogue-to-digital converter, we add white noise with an amplitude given by the range of the (electrode-dependent) EEG times $2^{-16}$.
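The dequantisation step can be sketched as follows; this is our reading of the procedure (we assume uniform white noise with an amplitude of one quantisation step, i.e., the electrode-dependent range times $2^{-16}$), not the authors' code.
```python
# Sketch (our reading of Section 2.3): add white noise of amplitude equal to one
# quantisation step of the 16-bit converter, i.e. the signal range times 2**-16,
# to break the magnitude degeneracies introduced by the analogue-to-digital converter.
import numpy as np

def dequantise(eeg, n_bits=16, seed=0):
    rng = np.random.default_rng(seed)
    step = (eeg.max() - eeg.min()) * 2.0**(-n_bits)   # one ADC quantisation step
    return eeg + rng.uniform(-0.5 * step, 0.5 * step, size=len(eeg))
```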
AW is defined by low-voltage fast waves in M1, a strong theta rhythm in S1 (4–7 Hz), and relatively high electromyographic activity. REM sleep is defined by low-voltage fast-frontal waves, a regular theta rhythm in S1, and silent electromyography (excluding occasional twitches). NREM sleep is determined by the presence of high-voltage slow-cortical waves (1–4 Hz), sleep spindles in M1 and S1, and a reduction in electromyographic amplitudes. Additionally, a visual scoring is performed to discard artefacts and transitional states. The EEGs for these states are concatenations of artefact-free 10 s windows that meet each state's criteria. To analyse them, we fix their lengths to $T = 90 \times 1024$ data points, which is the shortest length among the concatenated EEGs.

2.4. Grid Model: Frequency Recording

We consider voltage frequency recordings at 4 different locations in the European synchronous electric power grid [67]. Under normal operation, the frequency of the grid fluctuates around 50 Hz due to the mismatch between power production and consumption. These fluctuations impact all the buses in the grid, which follow the same overall variation around 50 Hz. On top of this common trend, additional signals coming from the grid dynamics or from the spreading of disturbances might impact each bus differently. Two of the recordings were made in Germany, one in Karlsruhe and the other in Oldenburg; the other two recordings were made in Lisbon, Portugal, and Istanbul, Turkey. The overall dataset consists of 4 time-series of about 41 days of synchronous frequency recordings sampled at 1 Hz between July and August 2019 [67]. For our analysis, we chose five different 24 h intervals, each of length T = 86,400.

2.5. Method: Encoding Signals into Ordinal Patterns

We follow the Bandt–Pompe method [3] to encode uni-variate signals.
First, we divide the signal into quasi-non-overlapping vectors with $D$ components, where $D > 1$ is a natural number known as the embedding dimension. That is, we transform the signal $\{x_t\}_{t=1}^{T}$ into a series of vectors $\{x_1, \ldots, x_D\}, \{x_D, \ldots, x_{2D-1}\}, \{x_{2D-1}, \ldots, x_{3D-2}\}, \ldots, \{x_{(m-1)(D-1)+1}, \ldots, x_{m(D-1)+1}\}$, where $m = \lfloor T/(D-1) \rfloor$ is the largest integer not exceeding $T/(D-1)$. Then, we transform each vector into an integer ranging from 1 to $D!$, according to the number of permutations needed to order its elements in an increasing fashion (plus 1). These are known as Ordinal Patterns (OPs), and the overall process encodes the signal into a symbolic sequence that preserves the local relationships between consecutive time-points but discards their magnitudes.
For example, when $D = 2$, if $x_1 < x_2$, then $\{x_1, x_2\} \to 1$, because no permutations are needed to order the two-element vector. If $x_1 > x_2$, then $\{x_1, x_2\} \to 2$, because we need one permutation. In total, there are $D! = 2$ possible permutations of the components of the $D = 2$ embedded vector. We encode all our signals using $D = 3$, $4$, or $5$, which implies symbolic sequences with a maximum of $D! = 6$, $24$, or $120$ different OPs, respectively. For the length $T = 90 \times 1024$ of the EEG signals, the average frequency of appearance of any given OP when $D = 5$ is approximately $T/D! = 768$ (being higher for $D < 5$). A similar number is obtained for the power grid frequency recordings. Consequently, we have high statistical power when finding the marginal probability distribution of the OPs and the other statistical measures.
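A compact implementation of this encoding could look as follows (our naming, not the authors' code; we label each vector by the lexicographic rank of its sorting permutation, which partitions the vectors into the same $D!$ classes as the "number of permutations plus one" convention used above, although the specific integer labels may differ; the entropies below do not depend on the labelling).
```python
# Minimal sketch (our naming): Bandt-Pompe encoding with embedding dimension D,
# delay tau = 1, and quasi-non-overlapping vectors sharing a single point.
import numpy as np
from itertools import permutations

def ordinal_symbols(x, D=3):
    x = np.asarray(x)
    lookup = {p: k + 1 for k, p in enumerate(permutations(range(D)))}  # perm -> 1..D!
    symbols, vectors = [], []
    for s in range(0, len(x) - D + 1, D - 1):      # consecutive vectors share one point
        v = x[s:s + D]
        symbols.append(lookup[tuple(np.argsort(v, kind="stable"))])
        vectors.append(v)
    return np.array(symbols), np.array(vectors)

# Example with D = 2: an increasing pair maps to symbol 1 and a decreasing pair to 2.
syms, vecs = ordinal_symbols([0.1, 0.5, 0.2, 0.9], D=2)
print(syms)   # -> [1 2 1]
```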

2.6. Statistical Measures: Permutation Entropy, Rényi Min-Entropy, and Magnitude Variability

The information content of an OP is $\log\!\left(1/P(\alpha)\right)$ (as defined by Shannon [2]), where $P(\alpha)$ is the probability of having OP $\alpha$ (such that $\sum_{\alpha=1}^{D!} P(\alpha) = 1$). The Shannon entropy [2] of the OP sequence, $H$, is known as the permutation entropy [3], and is found from
$$H = \sum_{\alpha=1}^{D!} P(\alpha)\,\log_2\!\frac{1}{P(\alpha)} = \left\langle \log_2\frac{1}{P(\alpha)} \right\rangle, \tag{3}$$
with $\langle\cdot\rangle$ being the mean with respect to the probability distribution $\{P(\alpha)\}_{\alpha=1}^{D!} = \{P(1), \ldots, P(D!)\}$. The maximum value of $H$ is $H_{\max} = \log_2(D!)$, which is known as the Hartley or max-entropy and is achieved if and only if $P(\alpha) = 1/D!$ for all $\alpha$ (a uniform distribution). We use $\log_2$ in Equation (3) so that the unit is the bit.
To enhance the differences in the permutation entropy values when $\{P(\alpha)\}_{\alpha=1}^{D!}$ is close to the uniform distribution, we use the Rényi min-entropy (in bits), $H_\infty$, which is defined by [61]
$$H_\infty = \lim_{q\to\infty}\frac{1}{1-q}\log_2\!\left(\sum_{\alpha=1}^{D!} P^q(\alpha)\right) = \min_{\alpha}\left\{\log_2\frac{1}{P(\alpha)}\right\} = -\log_2\!\left(\max_{\alpha}\{P(\alpha)\}\right). \tag{4}$$
This means that the information content of any OP sequence is bounded between $H_{\max}$ and $H_\infty$.
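From the OP symbol sequence, the three entropies just introduced can be estimated as in the following sketch (our naming, assuming a symbol sequence such as the one produced by the encoding sketched in Section 2.5):
```python
# Minimal sketch (our naming): permutation entropy H (Equation (3)), max-entropy
# H_max = log2(D!), and Renyi min-entropy H_inf (Equation (4)) from an OP sequence.
import math
import numpy as np

def op_probabilities(symbols, D):
    counts = np.bincount(np.asarray(symbols), minlength=math.factorial(D) + 1)[1:]
    return counts / counts.sum()

def permutation_entropy(symbols, D):
    P = op_probabilities(symbols, D)
    P = P[P > 0]                            # OPs with zero probability contribute nothing
    return float(-np.sum(P * np.log2(P)))   # H, in bits

def min_entropy(symbols, D):
    P = op_probabilities(symbols, D)
    return float(-np.log2(P.max()))         # H_inf = -log2 max_alpha P(alpha)

syms = np.array([1, 2, 3, 4, 5, 6, 1, 2])   # toy OP sequence for D = 3
print(permutation_entropy(syms, D=3),       # 2.5 bits
      min_entropy(syms, D=3),               # 2.0 bits
      np.log2(math.factorial(3)))           # H_max, about 2.58 bits
```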
To quantify the variability of the signal magnitudes within the OPs, we use
$$\langle \log_2\sigma_j \rangle = \sum_{\alpha=1}^{D!} P(\alpha)\,\log_2\sigma_j(\alpha), \tag{5}$$
where $\langle\cdot\rangle$ is the mean with respect to the OP probability distribution (as in Equation (3)) and $\sigma_j(\alpha)$ is the standard deviation of the signal magnitudes at the $j$-th component (with $j = 1, \ldots, D$) of all the embedded vectors that correspond to the OP symbol $\alpha$ (with $\alpha = 1, \ldots, D!$) [56,57]. An example of the resultant values of Equation (5) for an EEG of the right primary motor cortex (rM1) of a representative rat during AW is shown in Figure 1.
We note that the variation in the values of $\langle \log_2\sigma_j \rangle$ for different entries of $j$ is minimal, as illustrated in Figure 1. Therefore, we work with the value averaged over $j$ (any choice of $j$ would yield similar results and our conclusions would remain unchanged). Namely,
$$\mathrm{avg}_j\{\langle \log_2\sigma_j \rangle\} = \frac{1}{D}\sum_{j=1}^{D} \langle \log_2\sigma_j \rangle. \tag{6}$$
Consequently, when using Equation (3) [or Equation (4)] we can quantify the average [minimum] information content in an OP sequence, but we lose the information from the magnitudes that compose the embedded vectors forming the OPs. In contrast, using Equation (6) we can quantify the average magnitude variability of these embedded vectors, complementing the information provided by $H$ [or $H_\infty$].
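The magnitude variability of Equations (5) and (6) can be computed from the same embedded vectors used for the encoding, as in the following sketch (our naming; it assumes degeneracies in the magnitudes have been removed, as in Section 2.3, so that all standard deviations are positive). Using the encoding sketched in Section 2.5, one would call it as avg_log2_sigma(syms, vecs, D).
```python
# Minimal sketch (our naming): OP-averaged magnitude variability, Equations (5)-(6).
# sigma_j(alpha) is the standard deviation of the j-th component over all embedded
# vectors labelled with OP symbol alpha; it is averaged over alpha with weights
# P(alpha) (Equation (5)) and then over j (Equation (6)).
import numpy as np

def avg_log2_sigma(symbols, vectors, D):
    symbols = np.asarray(symbols)
    vectors = np.asarray(vectors, dtype=float)           # shape: (num_vectors, D)
    labels, counts = np.unique(symbols, return_counts=True)
    P = counts / counts.sum()                            # P(alpha) for the observed OPs
    log2_sigma = np.zeros(D)                             # <log2 sigma_j>, j = 1, ..., D
    for p, alpha in zip(P, labels):
        sigma_j = vectors[symbols == alpha].std(axis=0)  # sigma_j(alpha), one per component
        log2_sigma += p * np.log2(sigma_j)
    return float(log2_sigma.mean())                      # avg_j{ <log2 sigma_j> }
```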

3. Results

3.1. Ordinal Pattern Analysis of the Noiseless Map Iterates

From the analysis of the OP sequences of the coupled logistic maps (Equation (1)) and the Hénon map (Equation (2)) for different parameters, we can see that when the dynamics change slightly, the min-entropy $H_\infty$ (Equation (4)) can show small variations while the average magnitude variability $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ (Equation (6)) can change drastically.
Figure 2 shows that, when the coupling strength is $\varepsilon = 0.01$, the red circle ($r = 3.60$) and the cyan diamond ($r = 3.80$) have similar $H_\infty$ values, close to 2 and 2.05 bits (vertical axis), respectively. In contrast, their $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values differ by an order of magnitude, being close to $-5.38$ and $-4.22$ (horizontal axis), respectively. Similarly, when $\varepsilon = 0.2$, the red triangle ($r = 3.60$) and the black star ($r = 3.75$) have $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values differing by an order of magnitude (horizontal axis) but similar $H_\infty$ values, close to 1.92 and 1.95 bits (vertical axis), respectively.
However, the opposite effect is also observed in Figure 2. For example, when $\varepsilon = 0.01$, the black square ($r = 3.75$) and the yellow upward-pointing triangle ($r = 3.9$) have significantly different $H_\infty$ values (close to 1.82 and 2.62 bits, respectively), but their $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values are similar (close to $-4.35$ and $-4.40$, respectively). The same happens for $\varepsilon = 0.2$, where the black star ($r = 3.75$) and the yellow downward-pointing triangle ($r = 3.9$) have similar $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values (close to $-4.55$ and $-4.50$, respectively) but significantly different $H_\infty$ values (close to 1.94 and 2.60 bits, respectively).
The dynamics of the coupled logistic maps shown in Figure 2 follow the bifurcation-like diagrams of Figure 3. The dynamical changes driving the similarities (or dissimilarities) in the $H_\infty$ or $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values of Figure 2 are nearly unnoticeable in Figure 3 under direct visual inspection. This implies that the dynamical changes must be happening at scales smaller than the length of the signal, which is likely why the OP encoding is able to capture both the different chaoticities emerging for the different control parameters (i.e., different $H_\infty$ values) and the local changes in the magnitude of the $x_t$ signals (i.e., different $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values). Nevertheless, these dynamical changes are revealed by the maximum Lyapunov exponent (MLE) of the bidimensional system [68], as can be seen from Table 1.
We draw similar conclusions when analysing the Hénon map. For example, the red circle ($a = 1.15$) and the black square ($a = 1.20$) in Figure 4 have similar $H_\infty$ values (close to 1.83 and 1.85 bits, respectively), but different $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ values: close to $-2.85$ for the red circle and $-2.68$ for the black square.
In these cases ($a = 1.15$, $1.20$, $1.34$, $1.35$, $1.40$, and $1.405$), the dynamics of the map are in a chaotic regime, with minimal apparent differences (similarly to Figure 3); this can be corroborated using the bifurcation diagram of the Hénon map (see [69]). However, as $a$ is increased over the analysed values, the chaoticity of the signal also increases. This can be quantified using the Lyapunov exponents of the system (as Table 2 shows) and, as Figure 4 shows, by using $H_\infty$ and $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$.
Overall, these results (Figure 2 and Figure 4) show that using the OP Rényi min-entropy in conjunction with the average signal variability per OP improves the characterisation of the underlying map dynamics. Moreover, these results scale with the noise and with the use of different embedding dimensions $D$, because $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ follows a power-law behaviour as a function of $D\tau$, with an exponent that depends on the noise strength (see [56,57] for details), which can be useful to distinguish between chaotic and stochastic signals. This is corroborated in Figure 5 and Figure 6 for the coupled logistic maps and the Hénon map, respectively. These results also remain invariant when using the other coordinate, as Figure A1 and Figure A2 in Appendix A show, which is expected from Takens' embedding theorem [70].
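As a rough illustration of how this scaling could be probed numerically, one may recompute the average variability for several embedding dimensions and noise strengths, reusing the helper functions sketched in Section 2 (coupled_logistic, add_observational_noise, ordinal_symbols, and avg_log2_sigma; all our naming, not the authors' code):
```python
# Sketch (reusing the Section 2 sketches): avg_j{<log2 sigma_j>} versus D for the
# noiseless and noisy iterates of the coupled logistic maps, in the spirit of Figure 5.
import numpy as np

x_clean, _ = coupled_logistic(r=3.6, eps=0.01, T=10**5)
for noise_std in (0.0, 1e-1, 1e0):
    x = add_observational_noise(x_clean, noise_std) if noise_std > 0 else x_clean
    values = [avg_log2_sigma(*ordinal_symbols(x, D=D), D=D) for D in (3, 4, 5, 6)]
    print(noise_std, np.round(values, 2))   # one curve per noise strength (tau = 1)
```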

3.2. Ordinal Pattern Analysis of the EEG Recordings

The sleep–wake states of active wakefulness (AW), rapid eye movement (REM) sleep, and non-REM (NREM) sleep have different electrophysiological characteristics (see Section 2.3). However, when focusing solely on frequency or permutation entropy analyses, some of these differences are lost [20,21,22]. Here, we show that using the average signal variability per OP, $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$, improves the differentiation between the states, even when accounting for natural inter-animal variability.
The results for the EEGs of the three sleep–wake states from the right OB, M1, and S1 of the 11 rats are shown in Figure 7; similar results are found for the left hemisphere's M1 and S1 cortices. Panels B (OB), C (M1), and D (S1) show that AW has the highest value of $H_\infty$ for all animals, with two exceptions in the S1 cortex (panel D, red filled circles), but mid-range values of $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$. This implies that the Rényi min-entropy can distinguish considerably well between wakefulness and the sleep states. However, the values of $H_\infty$ for REM and NREM are similar for all electrodes.
In contrast, the values of $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ for REM and NREM sleep differ by approximately an order of magnitude in most animals. This is in accordance with the type of waves present in these sleep states: REM has low-voltage fast-frontal waves, whereas NREM has high-voltage slow-cortical waves (see Section 2.3). Consequently, the AW, REM, and NREM states are fairly well differentiated when using $H_\infty$ and $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ simultaneously. We note that this differentiation improves as the cortical location considered is further away from the reference electrode (the furthest location and the reference being the right OB and the Cerebellum, respectively). We also note that these results and conclusions remain unchanged when using $H$ (Equation (3)) instead of $H_\infty$ (Equation (4)).

3.3. Ordinal Pattern Analysis of Grid Frequency Recordings

The grid frequencies at all the buses in the European grid typically follow the same overall trend, which is close to 50 Hz on average [71]. This global variation is due to the mismatch between power generation and consumption, and is typically slow compared to the grid's intrinsic timescales. The main differences between the recordings are the fluctuations around that common trend. The Rényi min-entropy vs. average magnitude variability plane is shown in Figure 8 for 5 different 24 h recordings from four different locations in the synchronous European grid. As expected, for each day (different symbols) the values of $\mathrm{avg}_j\{\langle\log_2\sigma_j\rangle\}$ at the different locations (different colours) are very similar. Indeed, the amplitude of the time-series is mainly due to the variation of the common signal around 50 Hz. Interestingly, the value of $H_\infty$ for each location seems to stay the same across the different time intervals: close to 2.4 bits, 2.8 bits, 3.1 bits, and 3.7 bits for KA, OL, PT, and TU, respectively. This ordering of $H_\infty$ can be understood from the locations in the grid where the recordings were performed. It was found that the deviations of the frequency at one bus from the common overall trend are determined by the inverse of its resistance centrality in the grid [72]. One therefore expects recordings at the periphery of the grid to be more impacted by noise than those closer to the centre. Both KA and OL are rather central in the grid, which suggests that these recordings essentially follow the common trend around 50 Hz. On the other hand, PT and TU are peripheral in the grid, which suggests that their recordings are more impacted by randomness around the common trend. Thus, the Rényi min-entropy should be smaller for KA and OL than for PT and TU, which is what is observed in Figure 8.

4. Discussion

For many real-world systems, the intrinsic dynamics are so complex that inferring a mathematical model for the underlying dynamical process generating signal measurements becomes a very challenging task—if feasible at all. An alternative approach is to analyse the evolution of the information contained in the signals one can measure. While this approach fails to provide a detailed model for the microscopic dynamics, if successful, then it allows for different dynamical regimes and changes in the parameters of the system to be characterised.
With this in mind, here we propose to include the magnitude variability of the signal in the ordinal pattern (OP) encoding. Our aim is to complement the permutation entropy (PE) analysis, to improve the characterisation of the dynamical behaviours observed in the time-series, and to maintain explainable results. For example, if instead of using our complementary variables jointly we combined them into a single measure, there would be cases where we could not tell whether the resultant modified PE value increased (or decreased) because the PE itself changed or because the standard deviations of the amplitudes changed.
We first tested our approach on synthetically generated signals from chaotic bidimensional mappings under different parameter values. Specifically, we used coupled logistic maps (Equation (1)) and the Hénon map (Equation (2)). We analysed the signals coming from one of the two coordinates available for these maps because the OP encoding can only be applied to univariate signals.
The main reason to choose bidimensional mappings is that there are proofs showing that the permutation entropy of one-dimensional mappings directly relates to the Kolmogorov–Sinai entropy [73,74], which is not the case for higher-dimensional mappings. Because of this difference, the characterisation of a dynamical regime by the permutation entropy alone is expected to be incomplete for a higher-dimensional system. We show that this problem is mitigated by adding the standard deviation of the signal (within the embedded vectors forming the OPs) to the permutation entropy quantification (Figure 2 and Figure 4). We also find that our approach improves the characterisation of the dynamical regimes even under significant levels of observational noise and for different choices of the embedding dimension (Figure 5 and Figure 6).
We then analysed real-world EEG signals recorded intracranially from 11 rats under free conditions throughout the sleep–wake cycle. We considered these signals because they come from a system, the brain, whose underlying microscopic dynamics are unknown (i.e., we lack a differential equation that models the system), which has inherent noise sources affecting the quality of the signal measurements, and which has been hypothesised to have some level of chaoticity [75]. Moreover, in practice, the polysomnographic classification of sleep–wake states requires highly trained professionals to recognise characteristic electrophysiological patterns that vary according to the sleep stage and that depend on anatomy and individual variability. The electrophysiological variability introduced by experimental manipulations (in research settings) or by disease (in clinical settings) thus requires that any automatic sleep-scoring classifier be interpretable and not a black box.
Our results show that, for most cortical locations, independently of the embedding dimension used and the natural inter-animal variability, the states of active wakefulness, rapid-eye movement (REM) sleep, and non-REM sleep can be distinguished in the plane formed by the Rényi min-entropy and signal variability (Figure 7).
Finally, we analysed frequency recordings in the European electric power grid. We found that the standard deviation of the signals at different locations is similar during the same time intervals. This is expected, as the frequency in the whole grid essentially fluctuates around a common value close to 50 Hz with similar fluctuation magnitudes. We also found that the permutation entropy is higher for recordings at the periphery of the grid compared to those closer to the centre, which can be explained by the centrality of the recorded buses. Overall, our method identified the important features of the grid frequency dynamics.
Our analyses are limited in terms of the choice of embedding delay and of the overlap between consecutive embedded vectors; namely, for all our results $\tau = 1$ and the embedded vectors are quasi-non-overlapping, sharing only a single data point of the signal. The reason to restrict $\tau$ to 1 is that we can then consider the entire signal, whereas $\tau > 1$ implies considering sub-sampled signals. On the other hand, increasing the overlap of the embedded vectors creates artificial correlations between consecutive OPs (which are irrelevant for most permutation entropy calculations, but can affect conditional or transfer entropy calculations). For example, with a larger overlap than the one we choose here, two consecutive embedded vectors with $D = 3$ would share two points, namely $\{x_t, x_{t+1}, x_{t+2}\}$ and $\{x_{t+1}, x_{t+2}, x_{t+3}\}$. This would imply that, if $x_{t+1} > x_{t+2}$, then the OP of the second embedded vector would be conditioned on this ordering relation, which is already present in the previous embedded vector, and all other OP possibilities would be forbidden (i.e., the OP would be artificially forced to take on particular OP symbols). Consequently, our choice of encoding parameters allows us to keep the entire signal and to have the maximum number of embedded vectors with null redundancy between consecutive ones.
Finally, we note that a natural extension of our approach could consider a three-dimensional space, composed of the permutation entropy, the signal variability within the embedded vectors, and a complexity measure, such as the Jensen–Shannon complexity, which could improve the characterisation of other dynamical regimes (such as non-chaotic ones). Moreover, in practical applications where the signals are short, instead of considering the standard deviation of the signal within the embedded vectors, one could consider the inter-quartile range, which is a statistically robust descriptor that is unaffected by outliers.
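For completeness, the robust alternative mentioned above amounts to a one-line change in the variability computation; a sketch (our naming, not the authors' code) is:
```python
# Sketch of the suggested robust alternative: replace the per-component standard
# deviation sigma_j(alpha) by the inter-quartile range of the j-th component.
import numpy as np

def iqr(values, axis=0):
    q75, q25 = np.percentile(values, [75, 25], axis=axis)
    return q75 - q25

# In the avg_log2_sigma sketch of Section 2.6, one would then use
#     spread_j = iqr(vectors[symbols == alpha], axis=0)
# in place of vectors[symbols == alpha].std(axis=0) before taking log2.
```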

Author Contributions

M.T.: Investigation, Formal Analysis, Software, Visualization, Writing (original draft). J.G.: Data Acquisition, Investigation. N.R.: Conceptualization, Methodology, Software, Writing (Original Draft). M.T., J.G. and N.R.: Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

The research by M.T. was funded by the UKRI grant number MR/X034240/1.

Institutional Review Board Statement

The animal study protocol was approved by the Institutional Animal Care Committee (Comisión de Etica en el Uso de Animales) of the Universidad de la República, Uruguay, Exp. No. 070153-000332-16, in agreement with Uruguay’s National Animal Care Law (No. 18611) and with the “Guide to the care and use of laboratory animals” (8th edition, National Academy Press, Washington DC, 2010).

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We thank the Departamento de Fisiología de Facultad de Medicina, Universidad de la República, Montevideo, Uruguay, for providing the data set, and in particular, Matías Cavelli, Santiago Castro-Zaballa, Alejandra Mondino, and Pablo Torterolo.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OP    Ordinal Patterns
EEG   Electroencephalogram
AW    Active Wakefulness
REM   Rapid-Eye Movement
NREM  Non-REM

Appendix A. Additional Figures for the Logistic and Hénon Maps

Figure A1. Rényi min-entropy and average magnitude variability of the ordinal pattern (OP) embedded vectors from two coupled identical logistic maps. The map iterates for the OP encoding are obtained from Equation (1), where the coupling strength $\varepsilon$ is set to 0.01 or 0.2 and the map parameter $r$ is set to 3.6 (red symbols), 3.75 (black symbols), 3.8 (cyan symbols), or 3.9 (yellow symbols). We use $D = 4$ and $\tau = 1$ for the OP encoding of the iterates of the $y$ component (one map); see Section 2 for details.
Figure A2. Rényi min-entropy and average magnitude variability of the ordinal pattern (OP) embedded vectors from a Hénon map. The map iterates are obtained from Equation (2) with $b = 0.3$ and the other parameter set to either $a = 1.15$ (red circle), 1.20 (black square), 1.34 (cyan diamond), 1.35 (yellow triangle), 1.40 (black asterisk), or 1.405 (green triangle). We use $D = 4$ and $\tau = 1$ for the OP encoding of the iterates of the $y$ component (as in Figure A1); see Section 2 for details.

References

  1. Zanin, M.; Papo, D.; Sousa, P.A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S. Combining complex networks and data mining: Why and how. Phys. Rep. 2016, 635, 1–44. [Google Scholar] [CrossRef]
  2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  3. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102. [Google Scholar] [CrossRef] [PubMed]
  4. Amigó, J. Permutation Complexity in Dynamical Systems: Ordinal Patterns, Permutation Entropy and All That; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  5. Lamberti, P.; Martin, M.; Plastino, A.; Rosso, O. Intensive entropic non-triviality measure. Phys. A Stat. Mech. Its Appl. 2004, 334, 119–131. [Google Scholar] [CrossRef]
  6. Keller, K.; Sinn, M. Ordinal analysis of time series. Phys. A Stat. Mech. Its Appl. 2005, 356, 114–120. [Google Scholar] [CrossRef]
  7. Zunino, L.; Olivares, F.; Ribeiro, H.V.; Rosso, O.A. Permutation Jensen-Shannon distance: A versatile and fast symbolic tool for complex time-series analysis. Phys. Rev. E 2022, 105, 045310. [Google Scholar] [CrossRef]
  8. Zanin, M.; Zunino, L.; Rosso, O.A.; Papo, D. Permutation entropy and its main biomedical and econophysics applications: A review. Entropy 2012, 14, 1553–1577. [Google Scholar] [CrossRef]
  9. Leyva, I.; Martínez, J.H.; Masoller, C.; Rosso, O.A.; Zanin, M. 20 years of ordinal patterns: Perspectives and challenges. Europhys. Lett. 2022, 138, 31001. [Google Scholar] [CrossRef]
  10. Amigó, J.M.; Rosso, O.A. Ordinal methods: Concepts, applications, new developments, and challenges—In memory of Karsten Keller (1961–2022). Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 080401. [Google Scholar] [CrossRef]
  11. Amigó, J.M.; Zambrano, S.; Sanjuán, M.A. Detecting determinism in time series with ordinal patterns: A comparative study. Int. J. Bifurc. Chaos 2010, 20, 2915–2924. [Google Scholar] [CrossRef]
  12. Zunino, L.; Soriano, M.C.; Rosso, O.A. Distinguishing chaotic and stochastic dynamics from time series by using a multiscale symbolic approach. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2012, 86, 046210. [Google Scholar] [CrossRef]
  13. Unakafov, A.M.; Keller, K. Conditional entropy of ordinal patterns. Phys. D Nonlinear Phenom. 2014, 269, 94–102. [Google Scholar] [CrossRef]
  14. Bandt, C. Small order patterns in big time series: A practical guide. Entropy 2019, 21, 613. [Google Scholar] [CrossRef]
  15. Sakellariou, K.; Stemler, T.; Small, M. Markov modeling via ordinal partitions: An alternative paradigm for network-based time-series analysis. Phys. Rev. E 2019, 100, 062307. [Google Scholar] [CrossRef] [PubMed]
  16. Zanin, M.; Olivares, F. Ordinal patterns-based methodologies for distinguishing chaos from noise in discrete time series. Commun. Phys. 2021, 4, 190. [Google Scholar] [CrossRef]
  17. Zunino, L.; Soriano, M.C. Quantifying the diversity of multiple time series with an ordinal symbolic approach. Phys. Rev. E 2023, 108, 065302. [Google Scholar] [CrossRef] [PubMed]
  18. Kottlarz, I.; Parlitz, U. Ordinal pattern-based complexity analysis of high-dimensional chaotic time series. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 053105. [Google Scholar] [CrossRef]
  19. Quintero-Quiroz, C.; Montesano, L.; Pons, A.J.; Torrent, M.C.; García-Ojalvo, J.; Masoller, C. Differentiating resting brain states using ordinal symbolic analysis. Chaos Interdiscip. J. Nonlinear Sci. 2018, 28, 106307. [Google Scholar] [CrossRef]
  20. González, J.; Cavelli, M.; Mondino, A.; Pascovich, C.; Castro-Zaballa, S.; Torterolo, P.; Rubido, N. Decreased electrocortical temporal complexity distinguishes sleep from wakefulness. Sci. Rep. 2019, 9, 18457. [Google Scholar] [CrossRef]
  21. González, J.; Mateos, D.; Cavelli, M.; Mondino, A.; Pascovich, C.; Torterolo, P.; Rubido, N. Low frequency oscillations drive EEG’s complexity changes during wakefulness and sleep. Neuroscience 2022, 494, 1–11. [Google Scholar] [CrossRef]
  22. González, J.; Cavelli, M.; Tort, A.B.; Torterolo, P.; Rubido, N. Sleep disrupts complex spiking dynamics in the neocortex and hippocampus. PLoS ONE 2023, 18, e0290146. [Google Scholar] [CrossRef]
  23. Bandt, C. Statistics and contrasts of order patterns in univariate time series. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 033124. [Google Scholar] [CrossRef]
  24. Zunino, L. Revisiting the characterization of resting brain dynamics with the permutation Jensen–Shannon distance. Entropy 2024, 26, 432. [Google Scholar] [CrossRef]
  25. Boaretto, B.R.; Budzinski, R.C.; Rossi, K.L.; Masoller, C.; Macau, E.E. Spatial permutation entropy distinguishes resting brain states. Chaos Solitons Fractals 2023, 171, 113453. [Google Scholar] [CrossRef]
  26. Gancio, J.; Masoller, C.; Tirabassi, G. Permutation entropy analysis of EEG signals for distinguishing eyes-open and eyes-closed brain states: Comparison of different approaches. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 043130. [Google Scholar] [CrossRef]
  27. Rubido, N.; Tiana-Alsina, J.; Torrent, M.; Garcia-Ojalvo, J.; Masoller, C. Language organization and temporal correlations in the spiking activity of an excitable laser: Experiments and model comparison. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2011, 84, 026202. [Google Scholar] [CrossRef] [PubMed]
  28. Soriano, M.C.; Zunino, L.; Rosso, O.A.; Fischer, I.; Mirasso, C.R. Time scales of a chaotic semiconductor laser with optical feedback under the lens of a permutation information analysis. IEEE J. Quantum Electron. 2011, 47, 252–261. [Google Scholar] [CrossRef]
  29. Aragoneses, A.; Rubido, N.; Tiana-Alsina, J.; Torrent, M.; Masoller, C. Distinguishing signatures of determinism and stochasticity in spiking complex systems. Sci. Rep. 2013, 3, 1778. [Google Scholar] [CrossRef]
  30. Aragoneses, A.; Perrone, S.; Sorrentino, T.; Torrent, M.; Masoller, C. Unveiling the complex organization of recurrent patterns in spiking dynamical systems. Sci. Rep. 2014, 4, 4696. [Google Scholar] [CrossRef]
  31. Aragoneses, A.; Carpi, L.; Tarasov, N.; Churkin, D.; Torrent, M.; Masoller, C.; Turitsyn, S. Unveiling temporal correlations characteristic of a phase transition in the output intensity of a fiber laser. Phys. Rev. Lett. 2016, 116, 033902. [Google Scholar] [CrossRef]
  32. Tirabassi, G.; Duque-Gijon, M.; Tiana-Alsina, J.; Masoller, C. Permutation entropy-based characterization of speckle patterns generated by semiconductor laser light. APL Photonics 2023, 8, 126112. [Google Scholar] [CrossRef]
  33. Boaretto, B.R.; Macau, E.E.; Masoller, C. Characterizing the spike timing of a chaotic laser by using ordinal analysis and machine learning. Chaos Interdiscip. J. Nonlinear Sci. 2024, 34, 043108. [Google Scholar] [CrossRef] [PubMed]
  34. Zunino, L.; Porte, X.; Soriano, M.C. Identifying ordinal similarities at different temporal scales. Entropy 2024, 26, 1016. [Google Scholar] [CrossRef] [PubMed]
  35. Barreiro, M.; Marti, A.C.; Masoller, C. Inferring long memory processes in the climate network via ordinal pattern analysis. Chaos Interdiscip. J. Nonlinear Sci. 2011, 21, 013101. [Google Scholar] [CrossRef] [PubMed]
  36. Deza, J.I.; Barreiro, M.; Masoller, C. Inferring interdependencies in climate networks constructed at inter-annual, intra-season and longer time scales. Eur. Phys. J. Spec. Top. 2013, 222, 511–523. [Google Scholar] [CrossRef]
  37. Tupikina, L.; Rehfeld, K.; Molkenthin, N.; Stolbova, V.; Marwan, N.; Kurths, J. Characterizing the evolution of climate networks. Nonlinear Process. Geophys. 2014, 21, 705–711. [Google Scholar] [CrossRef]
  38. Deza, J.; Tirabassi, G.; Barreiro, M.; Masoller, C. Large-Scale Atmospheric Phenomena Under the Lens of Ordinal Time-Series Analysis and Information Theory Measures. In Advances in Nonlinear Geosciences; Springer: Cham, Switzerland, 2018; pp. 87–99. [Google Scholar]
  39. Dijkstra, H.A.; Hernández-García, E.; Masoller, C.; Barreiro, M. Networks in Climate; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
  40. Wu, H.; Zou, Y.; Alves, L.M.; Macau, E.E.; Sampaio, G.; Marengo, J.A. Uncovering episodic influence of oceans on extreme drought events in Northeast Brazil by ordinal partition network approaches. Chaos Interdiscip. J. Nonlinear Sci. 2020, 30, 053104. [Google Scholar] [CrossRef]
  41. Ruiz-Aguilar, J.J.; Turias, I.; González-Enrique, J.; Urda, D.; Elizondo, D. A permutation entropy-based EMD–ANN forecasting ensemble approach for wind speed prediction. Neural Comput. Appl. 2021, 33, 2369–2391. [Google Scholar] [CrossRef]
  42. Gancio, J.; Tirabassi, G.; Masoller, C.; Barreiro, M. Analysis of spatio temporal geophysical data using spatial entropy: Application to comparison of SST datasets. Earth Syst. Dyn. Discuss. 2024, 2024, 1–14. [Google Scholar]
  43. Zanin, M. Forbidden patterns in financial time series. Chaos Interdiscip. J. Nonlinear Sci. 2008, 18, 013119. [Google Scholar] [CrossRef]
  44. Zunino, L.; Zanin, M.; Tabak, B.M.; Pérez, D.G.; Rosso, O.A. Forbidden patterns, permutation entropy and stock market inefficiency. Phys. A Stat. Mech. Its Appl. 2009, 388, 2854–2864. [Google Scholar] [CrossRef]
  45. Zhao, X.; Shang, P.; Wang, J. Measuring information interactions on the ordinal pattern of stock time series. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2013, 87, 022805. [Google Scholar] [CrossRef]
  46. Yin, Y.; Shang, P. Weighted multiscale permutation entropy of financial time series. Nonlinear Dyn. 2014, 78, 2921–2939. [Google Scholar] [CrossRef]
  47. Stosic, D.; Stosic, D.; Ludermir, T.B.; Stosic, T. Exploring disorder and complexity in the cryptocurrency space. Phys. A Stat. Mech. Its Appl. 2019, 525, 548–556. [Google Scholar] [CrossRef]
  48. Henry, M.; Judge, G. Permutation entropy and information recovery in nonlinear dynamic economic time series. Econometrics 2019, 7, 10. [Google Scholar] [CrossRef]
  49. Bandt, C. Order patterns, their variation and change points in financial time series and Brownian motion. Stat. Pap. 2020, 61, 1565–1588. [Google Scholar] [CrossRef]
  50. Kozak, J.; Kania, K.; Juszczuk, P. Permutation entropy as a measure of information gain/loss in the different symbolic descriptions of financial data. Entropy 2020, 22, 330. [Google Scholar] [CrossRef] [PubMed]
  51. Bian, C.; Qin, C.; Ma, Q.D.; Shen, Q. Modified permutation-entropy analysis of heartbeat dynamics. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2012, 85, 021906. [Google Scholar] [CrossRef]
  52. Fadlallah, B.; Chen, B.; Keil, A.; Príncipe, J. Weighted-permutation entropy: A complexity measure for time series incorporating amplitude information. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2013, 87, 022911. [Google Scholar] [CrossRef]
  53. Azami, H.; Escudero, J. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation. Comput. Methods Programs Biomed. 2016, 128, 40–51. [Google Scholar] [CrossRef]
  54. Chen, Z.; Li, Y.; Liang, H.; Yu, J. Improved permutation entropy for measuring complexity of time series under noisy condition. Complexity 2019, 2019, 1403829. [Google Scholar] [CrossRef]
  55. Zanin, M. Continuous ordinal patterns: Creating a bridge between ordinal analysis and deep learning. Chaos Interdiscip. J. Nonlinear Sci. 2023, 33, 033114. [Google Scholar] [CrossRef]
  56. Politi, A. Quantifying the dynamical complexity of chaotic time series. Phys. Rev. Lett. 2017, 118, 144101. [Google Scholar] [CrossRef]
  57. Watt, S.J.; Politi, A. Permutation entropy revisited. Chaos Solitons Fractals 2019, 120, 95–99. [Google Scholar] [CrossRef]
  58. Rényi, A. On the dimension and entropy of probability distributions. Acta Math. Acad. Sci. Hung. 1959, 10, 193–215. [Google Scholar] [CrossRef]
  59. Kolmogorov, A.N. Entropy per unit time as a metric invariant of automorphisms. Dokl. Russ. Acad. Sci. 1959, 124, 754–755. [Google Scholar]
  60. Sinai, Y.G. On the notion of entropy of a dynamical system. Dokl. Russ. Acad. Sci. 1959, 124, 768–771. [Google Scholar]
  61. Rényi, A. On measures of entropy and information. In Proceedings of the fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 4, pp. 547–562. [Google Scholar]
  62. May, R.M. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459–467. [Google Scholar] [CrossRef]
  63. Kaneko, K. Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements. Phys. D Nonlinear Phenom. 1990, 41, 137–172. [Google Scholar] [CrossRef]
  64. L’Her, A.; Amil, P.; Rubido, N.; Marti, A.C.; Cabeza, C. Electronically-implemented coupled logistic maps. Eur. Phys. J. B 2016, 89, 81. [Google Scholar] [CrossRef]
  65. Hénon, M. A two-dimensional mapping with a strange attractor. In The Theory of Chaotic Attractors; Springer: New York, NY, USA, 2004; pp. 94–102. [Google Scholar]
  66. National Research Council and Division on Earth and Life Studies and Institute for Laboratory Animal Research and Committee for the Update of the Guide for the Care and Use of Laboratory Animals. Guide for the Care and Use of Laboratory Animals, 8th ed.; National Academies Press: Washington, DC, USA, 2010. [Google Scholar]
  67. Jumar, R.; Maaß, H.; Schäfer, B.; Gorjão, L.R.; Hagenmeyer, V. Database of power grid frequency measurements. arXiv 2020, arXiv:2006.01771. [Google Scholar]
  68. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov exponents from a time series. Phys. D Nonlinear Phenom. 1985, 16, 285–317. [Google Scholar] [CrossRef]
  69. Zhusubaliyev, Z.T.; Rudakov, V.N.; Soukhoterin, E.A.; Mosekilde, E. Bifurcation analysis of the Henon map. Discret. Dyn. Nat. Soc. 2000, 5, 203–221. [Google Scholar] [CrossRef]
  70. Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Warwick 1980: Proceedings of a Symposium, University of Warwick, Conventry, UK, 1979/80; Springer: Berlin/Heidelberg, Germany, 2006; pp. 366–381. [Google Scholar]
  71. Machowski, J.; Bialek, J.W.; Bialek, J.; Bumby, J.R. Power System Dynamics and Stability; John Wiley & Sons: Hoboken, NJ, USA, 1997. [Google Scholar]
  72. Tyloo, M.; Pagnier, L.; Jacquod, P. The key player problem in complex oscillator networks and electric power grids: Resistance centralities identify local vulnerabilities. Sci. Adv. 2019, 5, eaaw8359. [Google Scholar] [CrossRef] [PubMed]
  73. Bandt, C.; Keller, G.; Pompe, B. Entropy of interval maps via permutations. Nonlinearity 2002, 15, 1595. [Google Scholar] [CrossRef]
  74. Amigó, J.M.; Kennel, M.B.; Kocarev, L. The permutation entropy rate equals the metric entropy rate for ergodic information sources and ergodic dynamical systems. Phys. D Nonlinear Phenom. 2005, 210, 77–95. [Google Scholar] [CrossRef]
  75. Kargarnovin, S.; Hernandez, C.; Farahani, F.V.; Karwowski, W. Evidence of chaos in electroencephalogram signatures of human performance: A systematic review. Brain Sci. 2023, 13, 813. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example of the variability of EEG signal magnitudes within each ordinal pattern (OP). The EEG signal is from the right primary motor cortex (rM1) of a representative rat during active wakefulness, and the OPs are constructed using an embedding dimension $D = 3$ and delay $\tau = 1$. The top three left panels show the values of the standard deviation of the EEG signal, $\sigma_j(\alpha)$, at the components ($j = 1, 2, 3$) of the embedded vectors for each OP symbol ($\alpha = 1, \ldots, 6$). The bottom left panel shows the OP probability distribution $\{P(\alpha)\}_{\alpha=1}^{D!} = \{P(1), \ldots, P(D! = 6)\}$. The right panel shows the mean (red squares) with respect to $\{P(\alpha)\}_{\alpha=1}^{6}$ for each set of $\log_2\sigma_j(\alpha)$ values, $\langle\log_2\sigma_j\rangle$, with error bars defined by $\pm\sqrt{\langle(\log_2\sigma_j)^2\rangle - \langle\log_2\sigma_j\rangle^2}$.
Figure 2. Rényi min-entropy and average magnitude variability of the ordinal pattern (OP) embedded vectors from two coupled identical logistic maps. The map iterates for the OP encoding are obtained from Equation (1), where the coupling strength $\varepsilon$ is set to 0.01 or 0.2 and the map parameter $r$ is set to 3.6 (red symbols), 3.75 (black symbols), 3.8 (cyan symbols), or 3.9 (yellow symbols). We use $D = 4$ and $\tau = 1$ for the OP encoding of the iterates of the $x$ component (one map); see Section 2 for details.
Figure 3. Bifurcation-like diagrams for coupled identical logistic maps. The left [right] panel shows the signal of one map, $x_t$ (Equation (1)), when $\varepsilon = 0.01$ [$\varepsilon = 0.2$] as $r$ is changed according to the values used in Figure 2.
Figure 4. Rényi min-entropy and average magnitude variability of the ordinal pattern (OP) embedded vectors from a Hénon map. The map iterates are obtained from Equation (2) with $b = 0.3$ and the other parameter set to either $a = 1.15$ (red circle), 1.20 (black square), 1.34 (cyan diamond), 1.35 (yellow triangle), 1.40 (black asterisk), or 1.405 (green triangle). We use $D = 4$ and $\tau = 1$ for the OP encoding of the iterates of the $x$ component (as in Figure 2); see Section 2 for details.
Figure 5. Average magnitude variability of the ordinal pattern sequences from two coupled identical logistic maps as a function of the embedding dimension $D$ and the noise strength. Black squares correspond to noiseless iterates, cyan diamonds to observational noise with a standard deviation of $10^{-1}$, and red circles to observational noise with a standard deviation of $10^{0}$. The map parameters for all symbols are set such that $r = 3.6$ and $\varepsilon = 0.01$ (Equation (1)). Two reference lines are included with slopes of 2 (orange) and 1/2 (red).
Figure 6. Average magnitude variability of the ordinal pattern (OP) sequences from a Hénon map as a function of the OP embedding dimension and the noise strength. Filled symbols have the same observational noise as in Figure 5. The map parameters of the Hénon map are $b = 0.3$ and $a = 1.15$ (Equation (2)). Two reference lines are included with slopes of 2 (orange) and 1/2 (red).
Figure 7. OP analysis of EEG recordings from 11 rats in three sleep–wake states: active wakefulness (AW), rapid eye movement (REM) sleep, and non-REM (NREM) sleep. (A) Location of the electrodes for the EEG recordings, corresponding to the right olfactory bulb (OB) and the primary motor (M1) and somatosensory (S1) cortices. (B–D) Rényi min-entropy vs. average magnitude variability for the OB, M1, and S1 electrodes, respectively. The embedding dimension for the OPs is $D = 5$.
Figure 8. OP analysis of GPS-synchronised grid frequency recordings from 4 different locations in the European synchronous electric power grid: Karlsruhe (KA), Oldenburg (OL), Lisbon (PT), and Istanbul (TU). Rényi min-entropy vs. average magnitude variability for the different locations and five different 24 h recording periods (corresponding to the five different symbols). The embedding dimension for the OPs is $D = 5$.
Table 1. Maximum Lyapunov exponent $\lambda$ (MLE) of the coupled logistic maps for the parameters in Figure 2. These are found using the method of Wolf et al. [68], with an embedding dimension of 2, a time delay of 1, a maximum number of iterations of 10 to track divergence (which avoided saturation), and a 10% minimum separation to avoid temporal neighbours.
(r, ε):  (3.6, 0.01)  (3.75, 0.01)  (3.8, 0.01)  (3.9, 0.01)
λ:       0.16         0.39          0.42         0.45
(r, ε):  (3.6, 0.2)   (3.75, 0.2)   (3.8, 0.2)   (3.9, 0.2)
λ:       0.18         0.36          0.43         0.49
Table 2. Maximum Lyapunov exponent $\lambda$ (MLE) of the Hénon map for the parameters in Figure 4. These are found as described in Table 1.
a:  1.15  1.2   1.34  1.35  1.4   1.405
λ:  0.27  0.30  0.36  0.37  0.42  0.41

