Article

Enhancing Classification Results of Slope Entropy Using Downsampling Schemes

by
Vicent Moltó-Gallego
1,
David Cuesta-Frau
1,2,* and
Mahdy Kouka
1
1
Department of System Informatics and Computers, Universitat Politècnica de València, Campus d’Alcoi, 03801 Alcoy, Spain
2
Technological Institute of Informatics, Universitat Politècnica de València, 03801 Alcoy, Spain
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(11), 797; https://doi.org/10.3390/axioms14110797
Submission received: 1 October 2025 / Revised: 23 October 2025 / Accepted: 27 October 2025 / Published: 29 October 2025

Abstract

Entropy calculation provides meaningful insight into the dynamics and complexity of temporal signals, playing a crucial role in classification tasks. These measures describe intrinsic characteristics of time series, such as regularity, complexity or predictability. Depending on the characteristics of the signal under study, the performance of entropy as a feature for classification may vary, and not every entropy calculation technique may be suitable for a specific signal. Therefore, we aim to increase entropy's classification accuracy, especially in the case of Slope Entropy (SlpEn), by enhancing the information content of the patterns present in the data with downsampling techniques applied before the entropy is calculated. More specifically, we use both uniform downsampling (UDS) and a non-uniform downsampling technique known as Trace Segmentation (TS), a scheme that enhances the most prominent patterns present in a time series while discarding the less relevant ones. SlpEn is a method recently proposed in the field of time series entropy estimation that in general outperforms other methods in classification tasks. We combine it with either TS or UDS. In addition, since both techniques reduce the number of samples on which the entropy is calculated, they can significantly decrease the computation time. In this work, we apply TS or UDS to the data before calculating SlpEn to assess how downsampling impacts the behaviour of SlpEn in terms of performance and computational cost, experimenting on different kinds of datasets. In addition, we carry out a comparison between SlpEn and one of the most commonly used entropy calculation methods: Permutation Entropy (PE).
Results show that both uniform and non-uniform downsampling enhance the performance of both SlpEn and PE when used as the only features in classification tasks, gaining up to 13% and 22% in accuracy, respectively, when using TS, and up to 10% and 21% when using UDS. In addition, when downsampling to 50% of the original data, we obtain a speedup of around ×2 for individual entropy calculations; when the downsampling algorithms themselves are included in the time count, speedups with UDS are between ×1.2 and ×1.7, depending on the dataset. With TS, these speedups remain above ×2, while maintaining accuracy levels similar to those obtained with 100% of the original data. Our findings suggest that most time series, especially medical ones, have been measured at a sampling frequency above the optimal threshold, thus capturing information unnecessary for classification tasks, which is then discarded when downsampling. Downsampling techniques are potentially beneficial to any entropy calculation technique, not only those used in this paper: they can enhance entropy's performance in classification tasks while reducing its computation time, resulting in a win-win situation. We recommend downsampling to between 20% and 45% of the original data to obtain the best accuracy results in classification tasks.

1. Introduction

The concept of entropy originates in information theory and statistical mechanics as a quantitative measure of uncertainty or information content in a system. Given a discrete probability distribution P = {p_1, p_2, …, p_N}, the Shannon Entropy [1] is defined as:
H(P) = −∑_{i=1}^{N} p_i log p_i
where p_i ≥ 0 and ∑_{i=1}^{N} p_i = 1. Entropy reaches its maximum when all outcomes are equally likely (p_i = 1/N), and its minimum when one outcome has probability equal to 1 (i.e., full determinism). In the context of time series analysis, entropy-based methods quantify the degree of regularity, complexity, or predictability of a temporal signal {x_t}_{t=1}^{T}. A low entropy value indicates highly regular or deterministic dynamics, while high entropy suggests complex, stochastic, or chaotic behavior. Most entropy methods compute their value with the Shannon expression above, but there are many variations where the estimator uses Rényi Entropy [2] (Equation (2)) or Tsallis Entropy [3] (Equation (3)). These frameworks underpin the mathematical foundation of entropy as a measure of uncertainty, diversity, or disorder, concepts that Slope Entropy adapts to the geometric and temporal features of real-world time series.
H_α(P) = (1/(1 − α)) log(∑_i p_i^α),  α > 0, α ≠ 1
H_q(P) = (1/(q − 1)) (1 − ∑_i p_i^q)
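For illustration, the three definitions above translate into short functions; the following is a minimal Python sketch (the function names are ours), operating on a probability distribution given as a list of values:

```python
import math

def shannon_entropy(p):
    """H(P) = -sum(p_i * log(p_i)); zero-probability outcomes contribute nothing."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(P), defined for alpha > 0, alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def tsallis_entropy(p, q):
    """Tsallis entropy H_q(P), defined for q != 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

For a uniform distribution over N outcomes all three measures reach their maximum (e.g., shannon_entropy([0.25] * 4) equals log 4), while a degenerate distribution such as [1.0, 0.0] gives a Shannon entropy of 0.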
The patterns that entropy-based methods identify correspond to recurring motifs or dynamical signatures within the time series. Mathematically, entropy quantifies the dispersion or uniformity of the pattern distribution in the reconstructed state space. Let Ω denote the set of all admissible patterns, and let p(ω) be the empirical probability of observing pattern ω ∈ Ω. The entropy measures the heterogeneity of p(ω):
H(p) = 0 ⟺ ∃ ω_0 : p(ω_0) = 1;  H(p) = log|Ω| ⟺ p(ω) = 1/|Ω| ∀ ω
Thus, entropy-based analysis detects whether the system's evolution is dominated by a few deterministic configurations or explores the available pattern space uniformly, the latter being a hallmark of stochastic or chaotic dynamics.
Entropy analysis of time series plays a crucial role in many scientific domains, such as economics and finance [4,5,6,7], biology and medicine [8,9,10,11] and engineering [12,13,14,15], among others. One of its main applications is as a feature for signal classification [16,17,18,19,20], since entropy calculation provides very valuable insights into the predictability, dynamics and complexity of the studied signals. Nowadays, there are many different entropy estimation methods, but the recently proposed Slope Entropy (SlpEn) shows promise in the field of signal classification, since it is able to capture the intricacies of the patterns found in a signal, taking advantage of both the amplitude of the analysed time series and the symbolic representation of its intrinsic patterns.
SlpEn was introduced in 2019 by Cuesta-Frau [21]. It takes advantage of the slope between two data samples to estimate entropy in a Shannon-entropy fashion. Making use of two thresholds, δ and γ, as well as the basic subsequence length m, it transforms the original time series into a symbolic representation of data subsequences (patterns) according to the differences between consecutive time series samples. The specifics are described later in Section 2.2. Thanks to these three parameters, together with the slope between consecutive samples, SlpEn is more customizable and refined when analysing time series, although it requires more input parameters than other commonly used entropy calculation methods, such as Permutation Entropy (PE) [22] (see Section 2.3 for further details).
The original SlpEn paper gives some guidelines regarding the selection of values for both δ and γ, showing that it is well behaved under many different conditions. It has been successfully applied in other works, such as [23,24,25], achieving in most cases better results than other commonly used entropy calculation methods like Permutation, Approximate or Sample Entropy [26]. In addition, there has been research studying how to improve SlpEn's performance by removing δ from its parameters [27] or taking an asymmetric approach with respect to the values of γ [28]. SlpEn has also been successfully applied in combination with other entropy calculation methods [29], signal decomposition methods like VMD or CEEMDAN [30,31], optimization algorithms for choosing the best SlpEn parameters, like the snake optimizer [32], and also using different time scales [33].
In this paper, we aim to further improve SlpEn's classification capabilities by combining it with downsampling techniques: Trace Segmentation (TS), a non-uniform downsampling scheme [34,35,36], and uniform downsampling (UDS). TS (detailed in Section 2.4) reduces the length of the input time series by choosing the samples of the original series where the greatest variation occurs, thus enhancing the peaks of the signal at the cost of discarding less significant values. TS has already been successfully applied in combination with Sample Entropy [37], showing great results and offering many advantages, including shorter computation time and better accuracy in most cases. We hypothesize that these benefits extend to any entropy calculation method, not only Sample Entropy. Additionally, in the case of SlpEn, we expect the improvement to be greater than that obtained with other entropy calculation methods, since SlpEn bases its calculation on the differences between samples, and TS downsamples a sequence by choosing the samples where the biggest differences occur, thus amplifying them.
Additionally, we also implemented UDS (Section 2.5), which downsamples the series uniformly, as its name suggests, to the desired number of samples. Since non-uniform downsampling has already been applied successfully to entropy-based classification methods, we try to reproduce similar results with the uniform method, which is closely related to using temporal scales in entropy calculation [38]: computing entropy with τ = 2 is equivalent to downsampling to 50%, with τ = 3, to downsampling to 33%, and so on.
Our main hypothesis for applying downsampling is that most features present in datasets that are considered key in classification tasks belong to certain frequency bands well below the sampling frequency; thus, noise and confounding data are added to the temporal record and classification accuracy is hindered. To support this hypothesis, we have chosen a comprehensive benchmark comprised of seven datasets from different fields and domains (for further details on each dataset, refer to Section 2.1). In EEG recordings, as is the case of the Bonn EEG and Bern-Barcelona datasets used in the present work, the data is divided into bands, mainly five: Delta, Theta, Alpha, Beta and Gamma. From these bands, one can extract components to perform classification tasks. Four of these bands (Delta, Theta, Alpha and Beta) are below 30 Hz, whereas the sampling frequencies for Bonn EEG and Bern-Barcelona are 173.61 Hz and 512 Hz, respectively, well above 30 Hz, or even the 60 Hz that would be the minimum sampling frequency needed to capture all information from these bands without loss, according to the Nyquist-Shannon sampling theorem [39,40]. Using only such low-frequency bands, classification tasks for different applications, such as epileptic seizure detection, can be performed with relative ease, as demonstrated in [41,42,43,44]. Something similar happens with the Fantasia RR database, which is comprised of ECGs. The database is sampled at 250 Hz, but the main HRV frequency bands are the Low-Frequency (LF) band (0.04–0.15 Hz) and the High-Frequency (HF) band (0.15–0.4 Hz) [45,46], with some references going up to approximately 0.9 Hz for respiration signals [47].
The same applies to the Paroxysmal Atrial Fibrillation (PAF) prediction dataset, which also consists of ECG recordings: its sampling rate is 128 Hz, more than double the minimum needed, which would be around 30 Hz if the HRV signal is band-pass filtered between 5 Hz and 15 Hz [48,49].
Outside the medical field, we have three more datasets. First, the House Twenty database, which contains electricity consumption readings sampled at 8-s intervals, i.e., a frequency of 0.125 Hz. Next, the Worms Two-Class dataset, which contains recordings of mutant and non-mutant worm movements, sampled at 30 Hz. Finally, the Ford A dataset, which measures engine noise. To our knowledge, there is no reference as to how the noise data was captured nor its sampling rate; it is only stated that it is “engine noise”, and that the dataset was used for a competition at the 2008 IEEE World Congress on Computational Intelligence. Note that, in these cases, we do not know the ideal frequency components for classification tasks, so we do not know the ideal sampling rate for each case either. Regardless, we are interested in seeing whether what we believe applies to medical EEG and ECG datasets is also feasible for non-medical datasets.
We have applied TS or UDS as a data processing technique before performing entropy calculation. Since these techniques can reduce the temporal sequence to any desired size, we have applied downsampling to obtain sequences downsampled from 10% to 70% of the original length in 1% steps. Then, we calculate SlpEn on the downsampled sequences, comparing the results, in terms of classification accuracy, with those achieved without downsampling, that is, on the original time series with neither TS nor UDS applied. This comprehensive experiment will show whether some specific downsampling percentage is most appropriate for improving classification accuracy. In addition, the same experiment has been performed with one of the most commonly used entropy calculation methods: Permutation Entropy. Comparing the results achieved by the two entropies shows how each entropy, as well as TS and UDS, behaves under different conditions. The experiments are explained in more detail in Section 3.1. To carry them out, we make use of the benchmark mentioned previously; as it contains many varied datasets, it makes it possible to evaluate the effectiveness of each method.
The main contribution of this paper is to show how downsampling techniques can be combined effectively not only with SlpEn, but with any entropy calculation methodology, in order to improve classification accuracy. In addition, in most cases, applying downsampling before calculating the entropy value of a time series leads to faster computation times: downsampling the series and then obtaining its entropy value is much cheaper, in terms of computational cost, than calculating the entropy value of the original non-downsampled sequence. To sum up, downsampling not only enhances the results in classification tasks, but also obtains them faster.
The organization of the paper is as follows: Section 2, Materials and Methods, presents a comprehensive review of both the benchmark used in the experiments, looking into each individual dataset, and the specifics of the four methods used in the experiments: SlpEn, PE, TS and UDS. Next, Section 3 explains how the experiments were executed and presents the results, paying attention to how TS and UDS affect the accuracy achieved by both SlpEn and PE. In addition, execution time is also taken into account to show the speed benefits gained by downsampling. Section 4 discusses and evaluates the results, as well as the implications of using downsampling to enhance the discriminating power of entropy when used as a feature for classification. Finally, the paper concludes in Section 5, where we summarize the main contributions and highlight potential future research directions that could enhance the capabilities of SlpEn, or other entropy techniques, in different contexts.

2. Materials and Methods

This section presents the benchmark utilized in the experiments, as well as the methodologies and techniques used. The benchmark is comprised of a total of seven different datasets, each coming from a different background and captured for a different purpose, although in this case all of them have been used in classification tasks. All of them are publicly available and have been used in many different studies, making them well-established, representative time series for analysis and allowing easy comparison between the obtained results and previous ones in different studies. In addition, the specific datasets have been carefully selected to be diverse in their intrinsic characteristics, such as length, number of samples, background, etc., ensuring that the results have high potential for generalization and mitigating possible bias, both in interpretation and regarding the values of the time series themselves.
Apart from the benchmark, this section also details the techniques used in the experiments: the entropy calculation methods, namely Slope Entropy and Permutation Entropy, and the downsampling techniques, Trace Segmentation (TS) and Uniform Downsampling (UDS). We review how both entropies work, as well as how TS and UDS perform their downsampling. Combining TS or UDS with both types of entropy, and then comparing results at different levels of downsampling, is the main point of the experiments that have been carried out and that are presented, along with the results, in Section 3.

2.1. Benchmark

As previously stated, in order to assess both the resilience and efficacy of the proposed combination of downsampling with SlpEn, it is imperative to conduct experiments on diverse datasets that vary in many time series characteristics, such as level of ties, length or regularity. Thus, the specific datasets contained in the benchmark used in the present work are:
  • Bonn EEG dataset [50,51]. This dataset comprises electroencephalogram (EEG) segments of 4097 samples each, with a duration of 23.6 s. The instances are categorized into five distinct classes (A, B, C, D, and E), representing different neural activity scenarios. Classes A and B correspond to healthy subjects with eyes open and closed, respectively. Classes C, D, and E pertain to different classifications of epileptic subjects (see further details in [50]). For the experiments in the present paper, we focused only on classes D and E, with 50 records from each class. Class D corresponds to seizure-free periods at the epileptogenic zone, whilst seizure activity from the hippocampal focus pertains to class E. This dataset has been extensively used in numerous scientific works, [10,52,53] being examples of such research.
  • Bern–Barcelona EEG database [54]. This dataset includes both non-focal and focal time series extracted from seizure-free recordings of patients with pharmacoresistant focal-onset epilepsy. The classes have 427 and 433 records each, each record being sampled at a sampling frequency of 512 Hz and being comprised of 272 data points. It has been used in other classification studies, including the works [55,56,57], which reviewed the results achieved using time series from this database.
  • Fantasia RR database [58]. It showcases a carefully selected collection of 40 distinct time series, divided into two groups of 20 records each, one corresponding to young subjects and the other to elderly subjects. All subjects were initially in good health, thus eliminating potential health-related variables. The monitoring duration was 120 min, and the sampling frequency was set at 250 Hz. This database has been utilized in various studies, including [59,60].
  • Ford A dataset [61]. It is a collection of data extracted from a specific automotive subsystem. The primary purpose of its creation was to empirically evaluate the effectiveness of classification schemes on the acoustic characteristics of engine noise. From this experimental project, a set of 40 distinct records per class was carefully chosen and utilized for analysis. Examples of research where this dataset has been used are [62,63].
  • House Twenty dataset [64,65]. This dataset consists of temporal sequences originating from 40 different households, collected as part of the Personalised Retrofit Decision Support Tools for UK Homes using Smart Home Technology (REFIT) project. It includes two classes with 20 recordings each, thus covering data from 40 different households. One class represents overall electricity consumption, while the other represents the specific electrical consumption of washing machines and dryers. This dataset belongs to the UCR archive [66,67].
  • PAF (Paroxysmal Atrial Fibrillation) prediction dataset [68]. This dataset comprises discrete 5-min temporal recordings from patients diagnosed with PAF. The recordings are classified into two categories: one preceding the onset of a PAF episode and the other representing instances distant from any PAF manifestation. A total of 25 distinct files are included in each category. This dataset is widely known and used in a wide variety of scientific research [69,70,71].
  • Worms two-class dataset [72,73]. It is comprised of time series data from a certain species of worm; more specifically, locomotive patterns, which are used in behavioral genetics research. The records are selected from two classes: non-mutant and mutant worms. The first class consists of 76 records, while the second has 105. Both classes share the same time series length, 1800 samples. Similar to the other datasets, this one has been utilized in various scientific works [74,75].

2.2. Slope Entropy

SlpEn [21] is a method for calculating entropy by extracting symbolic subsequences through the application of thresholds to the amplitude differences between consecutive samples of a time series. The resulting histogram of relative frequencies is then subjected to a Shannon entropy-like expression, yielding the final result, the SlpEn value. This method operates on an input time series x with input parameters N, m, γ and δ, aiming to compute SlpEn(N, m, γ, δ).
The input time series x is considered an N-length vector containing the samples x_i, defined as x = {x_0, x_1, x_2, …, x_{N−1}}, x_i ∈ ℝ, 0 ≤ i < N. The time series is divided iteratively into overlapping data epochs of length m, denoted as x^j = {x_{j+0}, x_{j+1}, …, x_{j+m−1}}, 0 ≤ j < N − m + 1, with j incremented after each iteration as j ← j + 1.
For each subsequence, a symbolic pattern is generated, x^j → ψ^j, where ψ^j = {ψ_0 = f(x_{j+1} − x_{j+0}), ψ_1 = f(x_{j+2} − x_{j+1}), …, ψ_{m−2} = f(x_{j+m−1} − x_{j+m−2})}. The symbols used in the pattern are selected from the set S = {+2, +1, 0, −1, −2}, based on a thresholding function f. The function relies on two different thresholds, δ and γ. These two thresholds can take any positive real value with the restriction δ < γ, according to the following rules (that apply for 0 ≤ k < m − 1):
  • If x_i > x_{i−1} + γ, +2 (or just 2) is the symbol to be assigned to the current active symbolic string position, ψ^j = ψ^j + {+2}.
  • If x_i > x_{i−1} + δ and x_i ≤ x_{i−1} + γ, +1 (or just 1) is the symbol to be assigned to the current active symbolic string position, ψ^j = ψ^j + {+1}.
  • If |x_i − x_{i−1}| ≤ δ, 0 is the symbol to be assigned to the current active symbolic string position, ψ^j = ψ^j + {0}. This is the case when, depending on threshold δ, two consecutive values are very similar, which could be the case for ties [76].
  • If x_i < x_{i−1} − δ and x_i ≥ x_{i−1} − γ, −1 is the symbol to be assigned to the current active symbolic string position, ψ^j = ψ^j + {−1}.
  • If x_i < x_{i−1} − γ, −2 is the symbol to be assigned to the current active symbolic string position, ψ^j = ψ^j + {−2}.
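The five rules above amount to a single thresholding function f mapping one consecutive difference to a symbol. A minimal Python sketch follows (the function name is ours; the default values δ = 0.001 and γ = 1.0 follow the recommendations of the seminal paper [21]):

```python
def slope_symbol(diff, delta=0.001, gamma=1.0):
    """Map the difference x_i - x_{i-1} to a SlpEn symbol in {+2, +1, 0, -1, -2},
    using the symmetric thresholds 0 < delta < gamma."""
    if diff > gamma:        # large positive slope
        return 2
    if diff > delta:        # delta < diff <= gamma
        return 1
    if diff >= -delta:      # |diff| <= delta: near-tie
        return 0
    if diff >= -gamma:      # -gamma <= diff < -delta
        return -1
    return -2               # large negative slope
```

Because the conditions are checked in descending order, each difference falls into exactly one of the five symmetric regions.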
Possible ties are handled with the threshold δ [21], which also helps avoid incorrect conclusions [77]. On the other hand, γ is used to differentiate between low and high consecutive sample gradients. In the standard configuration described in the SlpEn seminal paper [21], δ is recommended to take a small value, 0.001, while γ is typically a constant value close to 1.0, depending on the normalization of the input time series. These thresholds are applied to both negative and positive gradients just by changing the sign. This symmetrical baseline scheme has achieved very good results in classification tasks in many works, such as [24,29,78,79,80].
Figure 1 is a visual representation of how the regions are defined by the two thresholds described above. This representation follows the standard symmetric SlpEn approach described in its original paper [21] and utilized in previous scientific studies, as well as in the experiments described in this paper.
After computing all the symbolic patterns, the histogram bin heights are calculated by counting the total number of occurrences of each pattern, which are then normalized by the number of unique patterns found. These normalized values are referred to as p_k. Finally, a Shannon entropy expression is used to obtain the SlpEn value of the time series x with input parameters m, δ and γ:
SlpEn(x, m, γ, δ) = −∑_k p_k log p_k
All the steps needed to compute SlpEn are shown in Algorithm 1. Note that, contrary to the basic algorithm presented in [21], this one is optimized so that the symbols shared with the previous subsequence are reused in the current one, instead of recomputing all of them each time.
Algorithm 1 Slope Entropy (SlpEn) Algorithm
Input: 
Time series x, embedded dimension m > 2, length N > m + 1, thresholds δ and γ, with γ > δ > 0
Initialisation: 
SlpEn = 0, slope pattern counter vector c ← {}, slope pattern relative frequency vector p ← {}, list of slope patterns found Ψ^m ← {}
  • for i = 1, …, m − 1 do
  •     if (x_i − x_{i−1}) ∈ [−δ, δ] then ψ^0_{i−1} ← 0 end if
  •     if (x_i − x_{i−1}) ∈ ]δ, γ] then ψ^0_{i−1} ← 1 end if
  •     if (x_i − x_{i−1}) ∈ ]γ, ∞[ then ψ^0_{i−1} ← 2 end if
  •     if (x_i − x_{i−1}) ∈ [−γ, −δ[ then ψ^0_{i−1} ← −1 end if
  •     if (x_i − x_{i−1}) ∈ ]−∞, −γ[ then ψ^0_{i−1} ← −2 end if
  • end for
  • Ψ^m ← Ψ^m + {ψ^0}
  • c ← c + {1}
  • for j = 1, …, N − m do
  •     for i = 1, …, m − 2 do
  •         ψ^j_{i−1} ← ψ^{j−1}_i (reuse the symbols shared with the previous subsequence)
  •     end for
  •     if (x_{j+m−1} − x_{j+m−2}) ∈ [−δ, δ] then ψ^j_{m−2} ← 0 end if
  •     if (x_{j+m−1} − x_{j+m−2}) ∈ ]δ, γ] then ψ^j_{m−2} ← 1 end if
  •     if (x_{j+m−1} − x_{j+m−2}) ∈ ]γ, ∞[ then ψ^j_{m−2} ← 2 end if
  •     if (x_{j+m−1} − x_{j+m−2}) ∈ [−γ, −δ[ then ψ^j_{m−2} ← −1 end if
  •     if (x_{j+m−1} − x_{j+m−2}) ∈ ]−∞, −γ[ then ψ^j_{m−2} ← −2 end if
  •     bFound = False
  •     for i = 0, …, sizeOf(Ψ^m) − 1 do
  •         if ψ^j = Ψ^m_i then
  •             c_i = c_i + 1
  •             bFound = True
  •             break
  •         end if
  •     end for
  •     if not bFound then
  •         Ψ^m ← Ψ^m + {ψ^j}
  •         c ← c + {1}
  •     end if
  • end for
  • for i = 0, …, sizeOf(Ψ^m) − 1 do
  •     p_i = c_i / sizeOf(Ψ^m)
  •     p ← p + {p_i}
  •     SlpEn = SlpEn − p_i log p_i
  • end for
  • return SlpEn
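As a compact reference, the whole procedure can be sketched in Python (our own illustrative implementation, not the authors' optimized code; following the description above, relative frequencies are normalized by the number of distinct patterns found):

```python
import math
from collections import Counter

def slope_entropy(x, m=3, delta=0.001, gamma=1.0):
    """Sketch of SlpEn: symbolize the N-1 consecutive differences once,
    slide a window of m-1 symbols over them (reusing the overlap between
    epochs), and apply a Shannon-like expression to the relative
    frequencies of the patterns found."""
    def symbol(d):
        if d > gamma:
            return 2
        if d > delta:
            return 1
        if d >= -delta:
            return 0
        if d >= -gamma:
            return -1
        return -2

    symbols = [symbol(x[i] - x[i - 1]) for i in range(1, len(x))]
    # One pattern of m-1 symbols per subsequence of length m (N-m+1 in total)
    counts = Counter(tuple(symbols[j:j + m - 1])
                     for j in range(len(symbols) - m + 2))
    n_distinct = len(counts)  # normalization by distinct patterns, as above
    return -sum((c / n_distinct) * math.log(c / n_distinct)
                for c in counts.values())
```

Note that with this normalization the p_k values need not sum to 1, so the result is not bounded like a classic Shannon entropy; replacing n_distinct with the number of subsequences would recover the usual normalization.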

2.3. Permutation Entropy

PE [22] is an entropy calculation method based on deriving an ordinal symbolic representation from the amplitudes of the input time series and applying the Shannon entropy expression to the relative frequencies of the resulting patterns. It is one of the most popular methods for calculating entropy, along with its many variations, mainly because of its simplicity, low computation time and excellent performance in classification applications [81,82,83,84,85]. In this study, this method has been included for comparison purposes, as well as to show that TS is well suited to any kind of entropy, not only SlpEn.
This procedure divides a time series x of length N into overlapping subsequences of length m. An ordering procedure of the samples, generally in ascending order, is performed for each subsequence x^m_j starting at position j.
As a result of ordering the indices of the samples in x^m_j, which by default are ordered as {0, 1, …, m − 1}, a new symbolic pattern is obtained with the index of each sample at its corresponding ordered position. This symbolic vector is represented as π^m_j = {π_0, π_1, …, π_{m−1}}, where π_0 is the original index of the smallest sample of x^m_j, π_1 the index of the next sample in ascending order, and so on. To put it another way, the samples in x^m_j satisfy x_{j+π_0} ≤ x_{j+π_1} ≤ x_{j+π_2} ≤ … ≤ x_{j+π_{m−1}}.
Once all the patterns have been obtained, a histogram is calculated using all the patterns found as bins and the number of times each appeared as its height. Then, the relative frequencies p_k are extracted from the number of occurrences of each found pattern, out of the m! possible ordinal patterns. Finally, these relative frequencies are used to obtain their Shannon entropy, corresponding in this case to the PE value:
PE(x, m) = −∑_{k=0}^{m!−1} p_k log p_k,  p_k > 0
The basic steps for computing Permutation Entropy are detailed in Algorithm 2.
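The same steps can be sketched in a few lines of Python (our own illustration; relative frequencies are normalized by the number of subsequences, N − m + 1, as in Algorithm 2):

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Sketch of PE: for each subsequence of length m, record the ordinal
    pattern (sample indices sorted by ascending amplitude), then apply the
    Shannon expression to the pattern relative frequencies."""
    n = len(x)
    counts = Counter(
        tuple(sorted(range(m), key=lambda i: x[j + i]))  # ordinal pattern
        for j in range(n - m + 1)
    )
    total = n - m + 1  # number of subsequences
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

For example, with x = (4, 7, 9, 10, 6, 11, 3) and m = 2, four of the six consecutive pairs are ascending, giving PE = −(4/6) log(4/6) − (2/6) log(2/6).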

2.4. Trace Segmentation

TS is a non-uniform downsampling scheme that samples the input signal at the points where the greatest variation occurs. This technique has been successfully used in other research, such as [76,86,87].
Mathematically, TS works in the following way. Given an input time series x = {x_0, x_1, x_2, …, x_{N−1}}, an accumulated derivative is obtained as:
TS_k = ∑_{j=1}^{k} |x_j − x_{j−1}|
where k ∈ [1, N − 1] and TS_0 = 0. The last point, TS_{N−1}, corresponds to the maximum value of the accumulated derivative, and the sampling interval amplitude Δ is obtained as Δ = TS_{N−1} / (N′ − 1), with N′ the desired number of output samples, N′ < N. Each sampling point of the output signal x′ is given by the minimum index i of x for which TS_i exceeds an integer multiple q of Δ:
x′_q = x_i,  i = min{ i : TS_i ≥ q·Δ },  1 ≤ q ≤ N′ − 1
with x′_0 = x_0. The main objective of TS is to reinforce the presence of peaks in a non-linear way. This may prove very beneficial in the case of SlpEn, since SlpEn takes advantage of the differences between samples, and TS transforms the input signal so as to keep only the most prominent patterns. Algorithm 3 shows how to compute TS.
Algorithm 2 Permutation Entropy (PE) Algorithm
Input: 
Time series x, embedding dimension m > 2, length N > m + 1
Initialisation: 
PE = 0, ordinal pattern counter vector c ← {}, ordinal pattern relative frequency vector p ← {}, list of ordinal patterns found Π^m ← {}
  • for j = 0, …, N − m do
  •     for i = 0, …, m − 1 do
  •         y_i^j ← x_{j+i}
  •         π_i^j ← i
  •     end for
  •     bSorted = False
  •     while bSorted = False do
  •         bSorted = True
  •         for i = 0, …, m − 2 do
  •             if y_i^j > y_{i+1}^j then
  •                 swap(y_i^j, y_{i+1}^j)
  •                 swap(π_i^j, π_{i+1}^j)
  •                 bSorted = False
  •             end if
  •         end for
  •     end while
  •     bFound = False
  •     for i = 0, …, sizeOf(Π^m) − 1 do
  •         if π^j = Π_i^m then
  •             c_i = c_i + 1
  •             bFound = True
  •             break
  •         end if
  •     end for
  •     if not bFound then
  •         Π^m ← π^j
  •         c ← 1
  •     end if
  • end for
  • for i = 0, …, sizeOf(Π^m) − 1 do
  •     p_i = c_i / (N − m + 1)
  •     p ← p_i
  •     PE = PE − p_i log p_i
  • end for
  • return PE
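The logic of Algorithm 2 can be sketched compactly in Python. This is an illustrative implementation only (the function name and the use of `collections.Counter` are our own choices, not taken from any reference code); it replaces the bubble-sort pattern extraction of the pseudocode with a stable built-in sort, which yields the same ordinal patterns:

```python
import math
from collections import Counter

def permutation_entropy(x, m):
    """Shannon entropy (in nats) of the ordinal-pattern distribution."""
    n_windows = len(x) - m + 1
    if n_windows < 1:
        raise ValueError("series too short for embedding dimension m")
    counts = Counter()
    for j in range(n_windows):
        window = x[j:j + m]
        # Ordinal pattern: order of the indices after a stable sort by value
        pattern = tuple(sorted(range(m), key=lambda i: window[i]))
        counts[pattern] += 1
    # PE = -sum p_i * log p_i, with p_i the relative frequency of each pattern
    return -sum((c / n_windows) * math.log(c / n_windows)
                for c in counts.values())
```

For instance, the series (4, 7, 9, 10, 6, 11, 3) with m = 3 produces three distinct ordinal patterns and a PE of roughly 1.05 nats, while any monotonic series yields a single pattern and PE = 0.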
Algorithm 3 Trace Segmentation (TS) Algorithm
Input: 
Time series x, length N, desired number of samples N′ < N
Initialisation: 
x′ ← {}
  • k = 0
  • for i = 1, …, N − 1 do
  •     k = k + |x_i − x_{i−1}|
  • end for
  • Δ = k / (N′ − 1)
  • x′ ← x_0
  • k = 0
  • j = 1
  • for i = 1, …, N − 2 do
  •     k = k + |x_i − x_{i−1}|
  •     if k ≥ Δ · j then
  •         x′ ← x_i
  •         j = j + 1
  •     end if
  • end for
  • x′ ← x_{N−1}
  • return x′
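Algorithm 3 can be sketched in Python as follows. The function name is our own; the selection loop stops at index N−2 so that the final sample is appended exactly once, and the output length can differ slightly from N′ when several multiples of Δ are crossed within a single step:

```python
def trace_segmentation(x, n_out):
    """Non-uniform downsampling: keep the samples where the cumulative
    variation of the signal crosses successive multiples of Delta."""
    n = len(x)
    total = sum(abs(x[i] - x[i - 1]) for i in range(1, n))  # TS_{N-1}
    delta = total / (n_out - 1)  # amplitude of each sampling interval
    out = [x[0]]                 # the first sample is always kept
    k, j = 0.0, 1
    for i in range(1, n - 1):
        k += abs(x[i] - x[i - 1])
        if k >= delta * j:       # cumulative derivative crossed j-th multiple
            out.append(x[i])
            j += 1
    out.append(x[-1])            # the last sample is always kept
    return out
```

On a linear ramp the retained samples come out evenly spaced, whereas on a signal with sharp transitions they concentrate around those transitions, which is precisely the pattern-reinforcing behaviour exploited later with SlpEn.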

2.5. Uniform Downsampling

UDS is a basic uniform downsampling technique that reduces the number of samples to the desired count by removing elements uniformly across the series. That is, to remove half of the samples (a downsampling factor of 2), we remove 1 out of every 2 consecutive samples across the whole series. To remove 1/3 of the samples (downsampling to 67% or, in other words, a downsampling factor of 1.5, since we keep 2/3 of the samples), we remove 1 out of every 3 consecutive samples, always removing the sample in the same relative position if no overlapping window is used. As an example, with the sequence (3, 9, −7, 0, −34, 5), downsampling to 67% would remove the pairs 3 and 0, 9 and −34, or −7 and 5, depending on the starting offset. Note that these examples remove elements at positions determined by a chosen integer, M. This is commonly known as integer decimation. Equation (9) shows how the retained elements are chosen in integer decimation:
$$ x'_i = x_{iM} $$
where i = 0, 1, 2, …, ⌊(N − 1)/M⌋, so that the inequality iM ≤ N − 1 is satisfied.
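In Python, integer decimation reduces to a slice with step M; a minimal illustration using the sequence from the example above:

```python
# Integer decimation (Eq. (9)): keep x'_i = x_{iM}, i.e. every M-th sample.
x = [3, 9, -7, 0, -34, 5]

kept_M2 = x[::2]  # M = 2: keep positions 0, 2, 4
kept_M3 = x[::3]  # M = 3: keep positions 0, 3

print(kept_M2)  # [3, -7, -34]
print(kept_M3)  # [3, 0]
```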
In the uniform downsampling performed in the experiments of this work, we downsample to a desired number of samples, always smaller than the original sequence size, and then translate this to a percentage. This makes it fundamentally different from the previously mentioned integer decimation with integer M, since most percentages are not equivalent to keeping every Mth sample of a given temporal series; the corresponding factors are fractional. It is worth noting that integer decimation is a special case of the UDS we have implemented. Equation (10) shows how the indices are chosen in this case:
$$ x'_i = x_{\left\lfloor i \frac{N-1}{N'-1} \right\rfloor} $$
where i = 0, 1, 2, …, N′ − 1.
To choose each sample in the implementation, we use an accumulator c, to which we add the fraction of samples we want to keep with respect to the series size: p = N′/N. This addition, c = c + p, takes place each time we evaluate whether a sample is to be added to the resulting sequence or not. Whenever the accumulator exceeds 1, we keep the current sample and subtract 1 from the accumulator, carrying the excess over to the next iteration, where the next sample is considered. Algorithm 4 depicts the algorithm used to perform UDS:
Algorithm 4 Uniform Downsampling (UDS) Algorithm
Input: 
Time series x, length N, desired number of samples N′ < N
Initialisation: 
x′ ← {}
  • p = N′/N
  • c = 0
  • x′ ← x_0
  • for i = 1, …, N − 2 do
  •     c = c + p
  •     if c > 1 then
  •         x′ ← x_i
  •         c = c − 1
  •     end if
  • end for
  • x′ ← x_{N−1}
  • return x′
Note that the downsampling factor (DF) can be inferred from the algorithm by dividing the total number of samples N by the desired number of samples N′: DF = N/N′. In this work, instead of specifying downsampling factors, we say we “are downsampling to X%”, that is, we keep X% of the samples. To obtain the downsampling factor, one simply divides 100 by the percentage of kept samples: DF = 100/X.
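Algorithm 4 translates almost directly into Python. This sketch (the function name is ours) keeps the first and last samples and uses the fractional accumulator described above:

```python
def uniform_downsampling(x, n_out):
    """Keep roughly n_out samples, spread uniformly across the series."""
    n = len(x)
    p = n_out / n    # fraction of samples to keep
    out = [x[0]]     # the first sample is always kept
    c = 0.0
    for i in range(1, n - 1):
        c += p
        if c > 1.0:  # the accumulator crossed 1: keep this sample
            out.append(x[i])
            c -= 1.0 # carry the excess over to the next iteration
    out.append(x[-1])  # the last sample is always kept
    return out
```

For example, keeping 5 of the 10 samples of the ramp 0, 1, …, 9 yields (0, 3, 5, 7, 9), corresponding to a downsampling factor DF = N/N′ = 2.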

3. Experiments and Results

This section is divided into two parts. The first presents all the experiments that were carried out, detailing what was recorded and under which conditions. The second part focuses on the results obtained from the experimentation, commenting on them briefly. These results are further discussed in Section 4.

3.1. Experiments

The experiments were devised to assess how much SlpEn can benefit, in terms of classification accuracy, from the pattern reinforcement provided by processing the temporal series under analysis with TS. In addition, the same experiments carried out for SlpEn have also been performed with PE, not only for comparison purposes, but also to show that TS can be beneficial for any kind of entropy used in classification tasks. Figure 2 depicts a flow diagram of the conducted experiments, detailing step by step how they were executed. Moreover, we repeated the experiments substituting TS with UDS in order to assess how uniform downsampling performs in combination with entropy calculation.
To find the best combination of parameters for SlpEn and PE, we conducted a grid search. For SlpEn, three parameters must be explored: m, δ and γ. To meet the temporal demands of the experiments, we opted to omit δ from the grid search, keeping its value fixed at 0.001, as suggested by SlpEn’s seminal paper [21]. Regarding the other two parameters, m took values between 3 and 9 (with a step of 1), and γ was explored between 0.1 and 1 in steps of 0.1. In the case of PE, the only required parameter is m, so the grid search explored only that parameter, again varying it from 3 to 9. To keep the experiments simple, and to avoid taking downsampling even further, we discarded using multiple temporal scales, maintaining τ = 1 in both entropy calculations.
In addition, the downsampling percentages were also explored in a grid search manner. Since both techniques allow downsampling to any value the user needs, we performed downsampling ranging from 10% of the original input data up to 70%, in 1% steps. We chose these two boundaries (10% and 70%) for two main reasons. The first, and most important, is to try as many relevant combinations as possible. We chose 10% as the lower bound because lower percentages might output a time series with too few samples, making it unrepresentative of reality. The upper bound, 70%, was chosen to avoid going too high, since we would lose one of the main benefits of downsampling, namely the reduction of computation time. Finally, we chose 1% steps to try as many percentages as we could without resorting to decimal steps, to keep things simple. The second reason involves the time consumption of the experiment: since we perform a grid search to find the optimal combination of parameters both for TS/UDS and SlpEn/PE, the process is quite time consuming.
Then, TS/UDS and SlpEn, and TS/UDS and PE, are combined: first TS or UDS is applied with a certain downsampling percentage, and then the grid search for parameter optimization of the corresponding entropy is conducted. This is repeated until all possible combinations are explored, recording all the results in terms of accuracy, as well as the time, in seconds, that each combination takes to finish.
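The overall experimental loop can be sketched as follows. All names here (grid_search, the callback signatures, the timing bookkeeping) are hypothetical placeholders for illustration, not the actual experiment code:

```python
import itertools
import time

def grid_search(datasets, downsample_fn, entropy_fn, accuracy_fn,
                percentages, m_values, gamma_values):
    """Sketch of the pipeline: downsample first, then compute the entropy
    feature for every series and score the resulting classification."""
    best = {"acc": -1.0}
    for pct, m, gamma in itertools.product(percentages, m_values, gamma_values):
        start = time.perf_counter()
        feats, labels = [], []
        for series, label in datasets:
            n_out = max(2, int(len(series) * pct / 100))  # samples kept
            feats.append(entropy_fn(downsample_fn(series, n_out), m, gamma))
            labels.append(label)
        acc = accuracy_fn(feats, labels)
        elapsed = time.perf_counter() - start
        if acc > best["acc"]:
            best = {"acc": acc, "pct": pct, "m": m,
                    "gamma": gamma, "seconds": elapsed}
    return best
```

Any concrete downsampler (TS or UDS), entropy (SlpEn or PE) and classifier can be plugged in through the three callbacks, which is how a single driver can cover all four combinations explored in this work.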
Finally, both SlpEn and PE are also calculated directly over the original sequences, that is, with no downsampling. The purpose of this baseline is twofold: first, to demonstrate that downsampling enhances accuracy results, and second, to show the calculation speed improvements enabled by these techniques.
Classification accuracy is the main measure used to assess the performance of the methods tested, together with the computation time improvements mentioned previously. The accuracy corresponds to the percentage of correctly classified time series using TS/UDS+SlpEn, SlpEn, TS/UDS+PE or PE individually as features, for each one of the seven experimental datasets found in the benchmark [28,88].

3.2. Results

The data shown in Table 1, Table 2 and Table 3 correspond to the best results achieved using the grid search approach for parameter optimization mentioned previously. Table 1 and Table 2 show classification accuracy and the gain (or enhancement) obtained when using TS compared to the case where no TS is applied, for SlpEn and PE respectively. This enhancement is the difference between the accuracy obtained when the entropy method (PE or SlpEn) is used in combination with a downsampling method (TS or UDS) and the accuracy obtained when the entropy is used by itself (Enhancement = SlpEn/PE with TS/UDS − SlpEn/PE without TS/UDS). The tables also show the downsampling percentage needed with TS to achieve those numbers. As an example, a TS% of 5% means that only 5% of the samples of the original sequence were used to calculate its corresponding entropy value. In the case of a draw, that is, when the same accuracy is achieved for two or more different TS%, the lowest one is reported. Additionally, Table 3 shows the optimal parameter configuration that achieved the results shown in the previous tables.
Table 4, Table 5 and Table 6 show the same results, but using UDS instead of TS.
Table 7 (SlpEn) and Table 8 (PE) show the computation time when using TS and UDS, compared again to the case where no downsampling is applied. Here, the times refer to the total time dedicated to the grid search for parameter optimization, using the indicated percentage of samples (100% corresponds to no downsampling). It is important to highlight that the search takes much longer for SlpEn than for PE, since SlpEn has two parameters to optimize, m and γ, whereas PE only has m. In addition, the differences between datasets, specifically the number and length of the sequences, also impact the execution time. Note that the numbers are only for reference, since they are directly affected by the specifics of the machine and the programming language used. In this case, the experiments were executed in Python on a MacBook Air 2024 running macOS Sequoia 15.6.1, with an Apple M3 (8 cores) and 8 GB of RAM.
Table 9 and Table 10 report the time it takes to obtain individual entropy values, for SlpEn and PE respectively. The purpose of these tables is to show the impact TS or UDS have when calculating entropy values for different values of m. As one would expect, the higher the m value, the higher the computing time required. The values are representative of the Bonn EEG dataset; for the other datasets, they are included in Appendix A.1 for SlpEn and in Appendix A.2 for PE. We did not differentiate between UDS and TS in these tables, since their execution time is not included in the individual entropy calculations.
Figure 3 and Figure 4 illustrate the maximum accuracy attained at each downsampling percentage (TS% and UDS%), comparing the curves given by SlpEn and PE for each dataset. They include values for all the downsampling percentages applied in the grid search, that is, all the percentages of samples of the original sequence used in the calculation of the entropy values. The last value of each plot, 100%, refers to the case with no downsampling or, in other words, the best value obtained using just the entropy methods without applying any downsampling.
Finally, Figure 5, Figure 6 and Figure 7 show heatmaps representing the accuracy of SlpEn combined with TS and of PE combined with UDS for all the downsampling percentages tested, that is, from 10% to 70%. Again, the case where no downsampling is applied (100, bottom row) is also included, with representations for each dataset. The first two figures correspond to SlpEn, plotting TS% against different m and γ values, respectively. The third figure shows PE’s accuracy results, plotting UDS% against m. The cases where PE is combined with TS and SlpEn with UDS show similar results; they have been included in Appendix B. To elaborate the heatmaps, we chose three variables for each: accuracy, m or γ, and the downsampling percentage. The third dimension, accuracy, is colour-coded according to the accuracy achieved with each combination of the other two parameters.

4. Discussion

The original SlpEn method [21] by itself already achieves very high performance in classification tasks. Its main drawback is the optimization of its parameters: m, δ and γ. Since it has three, a grid search can be quite slow, especially compared to methods such as PE, which only has m to optimize. In this work we have explored the possibility of combining SlpEn with downsampling techniques, such as TS and UDS, not only to reduce its computation time by downsampling the temporal series under study, but also to enhance its performance: by magnifying the most prominent patterns present in the sequences in the case of TS, and by working at different temporal scales and removing noise in the case of UDS. In addition, downsampling can also be applied to other entropy calculation techniques, as our results with PE show, as does the work [37] combining Sample Entropy with TS, and the use of temporal scales, which are specific cases of UDS [38,89,90,91].
It is evident from Table 1 and Table 2 that using TS to downsample the original sequence is quite beneficial, since the accuracy increases in four out of seven cases for SlpEn and five out of seven for PE. For SlpEn, depending on the dataset, the results improve between 4% and 13%. In the datasets where results do not improve, they only worsen by 4% at most, which may be an acceptable trade-off given the time gained through downsampling. On the other hand, PE seems to benefit more from TS, increasing its accuracy by up to 22%, except in the Bonn EEG dataset, where the results worsen considerably, by 14%. Comparing the two, PE shows its highest performance increase in the datasets that initially had the worst results relative to SlpEn (Bern-Barcelona, Fantasia and House Twenty). Conversely, the enhancement is less prominent, under 10%, in the datasets where SlpEn and PE obtain similar results without TS.
Regarding the downsampling percentages, for SlpEn most of them lie between 25% and 50% (5 out of 7). In the case of PE, the range varies greatly, lying between 10% and 31%, with two datasets at the lowest downsampling percentage tested (10%). From these results we can conclude that downsampling below 50% of the data can be highly beneficial, since, for both PE and SlpEn, 11 out of 14 dataset cases increase their accuracy. Percentages below 25% are also quite promising for PE (4 out of 7), whereas SlpEn tends to benefit more from percentages closer to 50% (3 datasets within the vicinity of 40% to 50%). Additionally, it is very important to highlight that, depending on the number of available samples, especially if this number is not high, performance might be hindered because the downsampled temporal series may end up being too short.
In the case of UDS, we can draw similar conclusions from Table 4 and Table 5. Classification results for both SlpEn and PE benefit greatly from UDS, even more than when combined with TS. With UDS and SlpEn, the maximum gain is somewhat more conservative (up to 10% in the Fantasia dataset, compared to the 13% obtained with TS), but there is no instance where performance worsens; only in the Bonn EEG dataset is there no improvement, with the accuracy remaining the same as when no downsampling is applied. Once again, when UDS is used with PE, the results are similar: where accuracy decreases, it does so on a smaller scale (−9% compared to the −14% obtained with TS in the Bonn EEG dataset).
Results from these tables, both when TS and UDS accompany SlpEn and PE, seem to confirm our initial hypothesis: data sampled at a much higher rate than needed adds a lot of potentially confounding information to the sequence. By downsampling, we enhance the main patterns and reduce this unnecessary data, keeping the key features that improve classification performance. Moreover, performance seems to increase more where the sampling rate in Hz is higher, as is the case with both the ECG and EEG datasets. For ECG, the Fantasia RR database (sampled at 250 Hz) shows a much higher increase in classification accuracy with downsampling than the PAF dataset (sampled at 128 Hz), with a maximum of 20% when UDS is used with PE, compared to a maximum of 8% with the same settings. The EEG datasets exhibit very similar behaviour: Bern-Barcelona (sampled at 512 Hz) benefits much more from downsampling than the Bonn EEG dataset (sampled at 173.61 Hz). The first increases its accuracy by 21% with UDS and PE, whereas the second sees its performance unchanged in the best case, where UDS and SlpEn are combined, and hindered in the remaining cases. Regarding the three non-medical datasets, the outcome varies. Both the House Twenty and Worms datasets seem to benefit from both downsampling techniques, especially House Twenty. Finally, in the case of Ford Machinery A, downsampling seems to be a hindrance, but we do not know the exact sampling rate of these temporal series.
From Table 7 we can extract another major advantage of downsampling: the computation time is greatly reduced. Downsampling the original sequences with TS to 50% of the samples leads to less than half the computation time, both for SlpEn and PE. With UDS, the algorithm is somewhat slower, so the execution times do not quite halve, but come close. Of course, the more the sequence is reduced by downsampling, the faster the results are calculated. Since these tables refer to the whole grid search explained in the previous section, the gain in seconds is huge, especially for SlpEn, which has more combinations to check. It is worth noting that, in our implementations, UDS is slower than TS and thus leads to longer times, although the gains remain far from negligible. In addition, with smaller downsampling percentages, the times for TS and UDS converge, as one would expect.
Table 8 shows similar information, but focused on individual entropy calculations for different m values. It is clear that, on an individual basis, SlpEn is faster to calculate than PE, and the speedups are very similar between the two entropies, being slightly higher for PE. This is more noticeable when m is 8 or 9, and also in the last column, where only 10% of the samples are used. In terms of absolute seconds, the difference is larger for PE, making it much faster with downsampling than without. Furthermore, there are many instances (at least 3 out of 7), both for SlpEn and PE, where the maximum accuracy has been attained using high m values (m ≥ 6), as can be observed in Table 3 and Table 6, implying great potential for improvement in execution time. It is also worth noting that the downsampling algorithm used to obtain these execution times is not relevant, since they only account for the entropy calculations, excluding the time taken to downsample.
Figure 3 and Figure 4 show graphs representing the maximum accuracy achieved for each downsampling percentage, including the case without it, which corresponds to the rightmost value of the curves. In general, SlpEn outperforms PE; we can arguably say so in 5 out of 7 datasets (Bonn EEG, Bern-Barcelona, Fantasia, Ford A, House Twenty), regardless of the downsampling technique used. It is also noticeable that, in most cases, both for SlpEn and PE, many downsampling values outperform the accuracy obtained without any downsampling, which reduces the testing needed to find a downsampling percentage that beats the plain entropy calculation.
Finally, Figure 5, Figure 6 and Figure 7 are a compendium of heatmaps representing the whole result of the grid search in terms of accuracy. They plot the level of downsampling used (TS% or UDS%) against m (and also against γ in the case of Figure 6). For SlpEn, comparing the last row (no downsampling) to the rest, we can observe many instances (both in the same column and in others) where the accuracy is higher than that achieved without TS. There are some cases where, for the same value of m, the best accuracy is obtained using the whole data series with no downsampling. Even so, the slightly lower accuracy can be compensated by the execution speed gain, since with just 50% of the samples the speedup is 2 and the accuracy loss is almost minimal. Moreover, these maps show that high accuracy tends to gather in shapes (curves, circles, etc.), which could be exploited to optimize the search and avoid grid search, which in theory is much more costly; for example, heuristic methods that look for local or global maxima could be used. The same can be said of Figure 7, where PE is compared against UDS%. Heatmaps for SlpEn combined with UDS and for PE with TS have not been reported, since they are very similar to the ones presented in this work and lead to very similar, if not the same, conclusions.
Overall, the outcomes of the experiments highlight the high potential of downsampling: it is easy to implement, cheap in terms of computational resources and, in most cases, able not only to reduce computation time but also to enhance classification accuracy. However, the search for the right downsampling percentage may increase the computational cost when one wants to maximise performance. Otherwise, we recommend the use of TS, downsampling to between 25% and 50% for SlpEn and between 10% and 30% for PE, which normally yields the best results. In addition, we believe percentages below 50% are a good starting point for any kind of entropy, not just SlpEn or PE, especially values between 20% and 45%. If such results are not satisfactory, one can compute the entropy with no downsampling for comparison, since the reduction in execution time will always be there, regardless of classification performance. Moreover, the best-case scenario would seem to be sampling data at the minimum rate necessary, if possible, thus avoiding extra data which may lead classification algorithms to lower performance.

5. Conclusions

This study addressed the combination of SlpEn and downsampling, mainly TS and UDS, aimed at enhancing time series classification accuracy as well as reducing computation time. A secondary objective of our work was to show that downsampling keeps its advantages when used in combination with other entropy calculation methods, as suggested by [37] with Sample Entropy and TS, by the many cases where UDS is applied indirectly in the form of temporal scales [38,89], and by our experiments combining both SlpEn and PE with TS and UDS. Our experimentation highlights the benefits of combining downsampling with entropy in classification tasks, showing great improvements both in accuracy and in computation time.
In general, our results suggest that using TS, a downsampling technique that amplifies the most prominent patterns present in a temporal sequence, is very helpful when computing entropy as a feature for classification tasks. In most cases, the accuracy levels achieved outperform those obtained without downsampling, both for SlpEn and PE. Of course, this does not hold in all scenarios, especially when the resulting sequences have too few samples, so the downsampling technique must be applied carefully to avoid such cases. Notably, the results especially benefit instances where the initial classification accuracy is not very high and leaves room for improvement. Regarding UDS, as a technique to reduce noise and focus on different temporal scales through specific downsampling percentages, we obtained results similar to those of TS. Both PE and SlpEn benefit from UDS in most of the datasets used, with more room for improvement in cases where classification accuracy is initially low, just as with TS. Moreover, our experiments lead us to believe that sampling rates above the minimum required may have a negative impact on classification tasks, since confounding data is added to the temporal series and the key features become less apparent, making classification less easy and reliable.
Additionally, both downsampling techniques are very good options for optimizing computational cost. They reduce computation time consistently and independently of the data being treated, providing time reductions of approximately 50% when downsampling the original sequence to half of its samples. This is especially notable when the computation times are slowest, since the absolute amount of time saved is much higher than in instances with faster computation times, even at a similar speedup. That happens when the embedding dimension m is quite high, as it is in many of our results. Moreover, according to [92], high embedding dimension values can yield higher performance in many cases, even if the generally accepted inequality m! << N (where N is the length of the sequence under study) suggests otherwise. In such cases, downsampling becomes very useful for reducing computation times.
Other optimization techniques include more powerful computing hardware, faster programming languages, refined entropy algorithms or parallel processing, but these are not as simple as the downsampling schemes used in this paper. Furthermore, using heuristic techniques to optimize entropy parameters and downsampling levels, instead of our “brute-force” grid-search approach, could significantly reduce the computational cost in terms of time, benefiting not only the approach suggested in this work but any other research in the field.
In summary, we encourage the use of downsampling to process temporal sequences before calculating their entropy: it is able not only to enhance classification results, but also to reduce computational cost. Of course, one must find a good compromise between speedup and classification accuracy, since downsampling will not always outperform entropy computed on the entire data series. As an initial recommendation, we would choose the interval [25%, 50%] for downsampling with TS, with [10%, 30%] as another possibility when using UDS, although this will always depend on the intrinsic characteristics of the dataset under study.

Author Contributions

Conceptualization, D.C.-F. and V.M.-G.; Methodology, V.M.-G.; Software, V.M.-G.; Validation, D.C.-F. and M.K.; Formal analysis, V.M.-G.; Data curation, V.M.-G.; Writing—original draft preparation, V.M.-G.; Writing—review and editing, D.C.-F., V.M.-G. and M.K.; Supervision, D.C.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All real datasets used in this paper are well known and publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Slope Entropy Individual Timings

This appendix contains the tables reporting the time it takes to obtain individual SlpEn values using different downsampling percentages. Once again, no distinction is made between TS and UDS, since the time they take to compute is not taken into account. There is one table for each dataset, except for the Bonn EEG dataset, which is included in the main text in Table 9.
Table A1. Time reported in seconds for the Bern-Barcelona dataset, accounting for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ to 0.001.
m | 100% | 70% | 50% | 25% | 10%
3 | 0.97 | 0.73 | 0.52 | 0.26 | 0.11
4 | 1.05 | 0.75 | 0.54 | 0.27 | 0.12
5 | 1.14 | 0.80 | 0.57 | 0.29 | 0.15
6 | 1.26 | 0.88 | 0.63 | 0.34 | 0.22
7 | 1.50 | 1.04 | 0.76 | 0.42 | 0.33
8 | 1.92 | 1.32 | 0.99 | 0.58 | 0.46
9 | 2.64 | 1.79 | 1.37 | 0.85 | 0.62
Table A2. Time reported in seconds for the Fantasia dataset, accounting for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ to 0.001.
m | 100% | 70% | 50% | 25% | 10%
3 | 0.29 | 0.22 | 0.15 | 0.08 | 0.03
4 | 0.37 | 0.26 | 0.19 | 0.10 | 0.04
5 | 0.52 | 0.38 | 0.30 | 0.17 | 0.07
6 | 0.97 | 0.74 | 0.61 | 0.35 | 0.12
7 | 2.11 | 1.63 | 1.34 | 0.68 | 0.18
8 | 4.21 | 3.16 | 2.53 | 1.05 | 0.22
9 | 7.51 | 5.33 | 3.79 | 1.35 | 0.25
Table A3. Time reported in seconds for the Ford Machinery A dataset, accounting for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ to 0.001.
m | 100% | 70% | 50% | 25% | 10%
3 | 0.04 | 0.03 | 0.02 | 0.01 | 0.01
4 | 0.04 | 0.03 | 0.02 | 0.02 | 0.01
5 | 0.05 | 0.03 | 0.02 | 0.02 | 0.01
6 | 0.05 | 0.03 | 0.03 | 0.02 | 0.01
7 | 0.05 | 0.04 | 0.03 | 0.03 | 0.01
8 | 0.05 | 0.04 | 0.03 | 0.03 | 0.01
9 | 0.06 | 0.04 | 0.04 | 0.03 | 0.01
Table A4. Time reported in seconds for the House Twenty dataset, accounting for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ to 0.001.
m | 100% | 70% | 50% | 25% | 10%
3 | 0.04 | 0.03 | 0.02 | 0.01 | 0.01
4 | 0.05 | 0.03 | 0.03 | 0.02 | 0.01
5 | 0.07 | 0.05 | 0.04 | 0.02 | 0.01
6 | 0.12 | 0.08 | 0.06 | 0.03 | 0.01
7 | 0.22 | 0.12 | 0.08 | 0.03 | 0.01
8 | 0.34 | 0.16 | 0.10 | 0.04 | 0.01
9 | 0.45 | 0.19 | 0.11 | 0.04 | 0.01
Table A5. Time reported in seconds for the PAF prediction dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ at 0.001.

m     100%    70%     50%     25%     10%
3     0.03    0.02    0.01    0.01    0.00
4     0.03    0.02    0.02    0.01    0.01
5     0.05    0.03    0.02    0.01    0.01
6     0.07    0.04    0.03    0.01    0.01
7     0.09    0.05    0.03    0.01    0.01
8     0.11    0.06    0.03    0.01    0.01
9     0.12    0.06    0.04    0.01    0.01
Table A6. Time reported in seconds for the Worms dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, keeping γ fixed at 1.0 and δ at 0.001.

m     100%    70%     50%     25%     10%
3     0.17    0.12    0.09    0.05    0.02
4     0.18    0.13    0.09    0.05    0.03
5     0.22    0.14    0.10    0.05    0.03
6     0.27    0.17    0.12    0.06    0.03
7     0.38    0.21    0.14    0.07    0.03
8     0.51    0.27    0.18    0.08    0.04
9     0.71    0.35    0.21    0.09    0.04
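The representative percentages in the tables above are obtained by shortening each sequence with UDS or TS before the entropy is computed. As a minimal sketch of both schemes (the function names and the exact index-selection details are ours, for illustration only, not the authors' implementation):

```python
import numpy as np

def uniform_downsample(x, pct):
    """Uniform downsampling (UDS): keep pct% of the samples,
    taken at evenly spaced positions along the sequence."""
    n = max(2, round(len(x) * pct / 100))
    idx = np.linspace(0, len(x) - 1, n).astype(int)
    return x[idx]

def trace_segmentation(x, pct):
    """Trace Segmentation (TS): non-uniform downsampling that splits
    the cumulative absolute amplitude change (the "trace") into
    equal-length stretches and keeps one sample per stretch, so
    fast-changing regions contribute more samples than flat ones."""
    n = max(2, round(len(x) * pct / 100))
    # Cumulative trace length, with 0 prepended so it aligns with x.
    trace = np.concatenate(([0.0], np.cumsum(np.abs(np.diff(x)))))
    targets = np.linspace(0.0, trace[-1], n)
    idx = np.searchsorted(trace, targets)
    return x[np.clip(idx, 0, len(x) - 1)]
```

For a signal with both flat and fast-changing regions, TS concentrates the retained samples where the amplitude varies the most, whereas UDS spreads them evenly; both feed a shorter sequence to the subsequent SlpEn or PE calculation, which is the source of the time reductions reported.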

Appendix A.2. Permutation Entropy Individual Timings

This appendix contains the tables reporting the time needed to obtain individual PE values using different downsampling percentages. Once again, no distinction is made between TS and UDS, since the time required by the downsampling itself is not taken into account. There is one table for each dataset, except for the Bonn EEG dataset, which is included directly in the paper in Table 10.
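As an illustration of how such per-call timings can be gathered (a sketch only, not the authors' benchmarking code; absolute times depend entirely on the hardware and implementation used), a basic Bandt-Pompe PE can be timed on sequences downsampled to each percentage:

```python
import math
import time
import numpy as np

def permutation_entropy(x, m):
    """Basic Bandt-Pompe permutation entropy, normalised to [0, 1]."""
    counts = {}
    for i in range(len(x) - m + 1):
        # Ordinal pattern of the current window of length m.
        pattern = tuple(np.argsort(x[i:i + m]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(m)))

rng = np.random.default_rng(0)
signal = rng.standard_normal(5000)

for pct in (100, 50, 10):
    # Uniform downsampling: keep pct% of the samples at evenly spaced indices.
    idx = np.linspace(0, len(signal) - 1, len(signal) * pct // 100).astype(int)
    sub = signal[idx]
    t0 = time.perf_counter()
    pe = permutation_entropy(sub, m=5)
    elapsed = time.perf_counter() - t0
    print(f"{pct:3d}%  n={len(sub):4d}  PE={pe:.3f}  t={elapsed:.4f}s")
```

Since the number of windows shrinks linearly with the retained percentage, the per-call time decreases accordingly, which matches the overall trend visible in the tables below.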
Table A7. Time reported in seconds for the Bern-Barcelona dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     1.49    1.06    0.77    0.39    0.16
4     1.92    1.36    0.99    0.51    0.21
5     2.61    1.83    1.33    0.71    0.31
6     3.61    2.64    2.00    1.14    0.54
7     5.92    4.49    3.49    2.12    0.90
8     11.60   8.65    6.52    3.57    1.20
9     22.49   15.32   10.69   5.05    1.61
Table A8. Time reported in seconds for the Fantasia dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     0.42    0.31    0.22    0.11    0.05
4     0.58    0.41    0.29    0.15    0.06
5     0.89    0.64    0.46    0.24    0.10
6     1.96    1.54    1.12    0.56    0.20
7     6.76    5.13    3.42    1.26    0.29
8     15.17   9.61    5.72    1.67    0.33
9     21.91   11.86   6.49    1.80    0.35
Table A9. Time reported in seconds for the Ford Machinery A dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     0.06    0.05    0.04    0.02    0.01
4     0.08    0.06    0.04    0.02    0.01
5     0.10    0.07    0.05    0.03    0.01
6     0.12    0.09    0.07    0.04    0.02
7     0.15    0.11    0.08    0.05    0.02
8     0.18    0.13    0.10    0.05    0.02
9     0.22    0.16    0.12    0.06    0.02
Table A10. Time reported in seconds for the House Twenty dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     0.06    0.05    0.03    0.02    0.01
4     0.08    0.06    0.04    0.02    0.01
5     0.12    0.08    0.06    0.03    0.01
6     0.23    0.14    0.10    0.05    0.02
7     0.40    0.21    0.14    0.06    0.02
8     0.52    0.25    0.17    0.06    0.02
9     0.61    0.28    0.18    0.07    0.02
Table A11. Time reported in seconds for the PAF prediction dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     0.04    0.02    0.02    0.01    0.01
4     0.05    0.03    0.02    0.01    0.01
5     0.07    0.05    0.04    0.02    0.01
6     0.11    0.07    0.05    0.02    0.01
7     0.15    0.09    0.06    0.03    0.01
8     0.17    0.10    0.07    0.03    0.01
9     0.19    0.11    0.07    0.03    0.01
Table A12. Time reported in seconds for the Worms dataset, for different values of m and representative percentages obtained with TS and UDS. The values correspond to individual PE calculations.

m     100%    70%     50%     25%     10%
3     0.25    0.18    0.13    0.07    0.04
4     0.33    0.23    0.17    0.09    0.04
5     0.47    0.32    0.23    0.12    0.05
6     0.73    0.46    0.31    0.15    0.06
7     1.04    0.61    0.40    0.18    0.07
8     1.30    0.73    0.48    0.21    0.08
9     1.56    0.87    0.56    0.25    0.09

Appendix B

This appendix contains the heatmaps where SlpEn is combined with UDS, as well as those where PE is combined with TS. The remaining combinations (SlpEn+TS and PE+UDS) are shown in Figure 5, Figure 6 and Figure 7, located in Section 3.2.
Figure A1. Heatmap representing the accuracy levels of SlpEn; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples extracted from the temporal sequence using UDS, with the bottom row corresponding to the original sequence without downsampling. The columns refer to different values of m. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure A2. Heatmap representing the accuracy levels of SlpEn; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples extracted from the temporal sequence using UDS, with the bottom row corresponding to the original sequence without downsampling. The columns refer to different values of γ. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure A3. Heatmap representing the accuracy levels of PE; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples extracted from the temporal sequence using TS, with the bottom row corresponding to the original sequence without downsampling. The columns refer to different values of m. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.

Figure 1. Graphical representation of the standard SlpEn thresholding process, which uses a symmetric approach (the same absolute values) for the thresholds that differentiate regions, regardless of whether they are negative or positive. An infinite gradient would correspond to a vertical line.
Figure 2. Flow diagram depicting all the steps into which the experiments performed have been divided.
Figure 3. Comparison of the maximum accuracy at each TS% for SlpEn and PE. The last point (100) corresponds to the case with no downsampling. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure 4. Comparison of the maximum accuracy at each UDS% for SlpEn and PE. The last point (100) corresponds to the case with no downsampling. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure 5. Heatmap representing the accuracy levels of SlpEn; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples extracted from the temporal sequence using TS, with the bottom row corresponding to the original sequence without downsampling. The columns refer to different values of m. Panels illustrate the behaviour of each dataset: (a) Bonn EEG, (b) Bern-Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure 6. Heatmap of the accuracy levels of SlpEn; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples retained from the temporal sequence using TS, with the bottom row corresponding to the original sequence with no downsampling. Columns refer to different values of γ. Panels show the behaviour for each dataset: (a) Bonn EEG, (b) Bern–Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Figure 7. Heatmap of the accuracy levels of PE; the brighter the colour, the higher the accuracy. Each row represents the percentage of samples retained from the temporal sequence using UDS, with the bottom row corresponding to the original sequence with no downsampling. Columns refer to different values of m. Panels show the behaviour for each dataset: (a) Bonn EEG, (b) Bern–Barcelona, (c) Fantasia, (d) Ford A, (e) House Twenty, (f) PAF prediction, (g) Worms two-class.
Table 1. Classification accuracy achieved with SlpEn using TS. Results for SlpEn without TS are also reported for comparison, together with the accuracy gain (accuracy with TS minus accuracy without TS) and the TS% that maximises the results.
| Dataset | SlpEn without TS | SlpEn with TS | Enhancement | TS% |
|---|---|---|---|---|
| Bonn EEG | 95% | 93% | −2% | 45% |
| Bern–Barcelona | 81% | 85% | 4% | 13% |
| Fantasia | 85% | 98% | 13% | 27% |
| Ford Machinery A | 85% | 81% | −4% | 48% |
| House Twenty | 95% | 95% | 0% | 64% |
| PAF prediction | 76% | 82% | 6% | 43% |
| Worms | 71% | 77% | 6% | 26% |
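As a rough illustration of the non-uniform scheme behind these results, the sketch below implements Trace Segmentation in its usual cumulative-amplitude form: retained samples are placed at equal increments of the running sum of absolute amplitude changes, so segments with strong variation keep more points. The function name and the flat-signal fallback are our own assumptions, not taken from the paper.

```python
import numpy as np

def trace_segmentation(x, keep_pct):
    """Non-uniform downsampling: keep samples at equal increments of the
    cumulative absolute amplitude change ("trace"), so regions with large
    variation contribute more points than quasi-flat regions."""
    x = np.asarray(x, dtype=float)
    n_out = max(2, int(round(len(x) * keep_pct / 100.0)))
    # running trace: d[i] = sum of |x[k] - x[k-1]| for k <= i
    d = np.concatenate(([0.0], np.cumsum(np.abs(np.diff(x)))))
    if d[-1] == 0.0:
        # flat signal: fall back to uniform spacing (assumption)
        idx = np.linspace(0, len(x) - 1, n_out).round().astype(int)
    else:
        targets = np.linspace(0.0, d[-1], n_out)
        idx = np.clip(np.searchsorted(d, targets), 0, len(x) - 1)
    return x[np.unique(idx)]
```

A TS% of 25, for instance, corresponds to `trace_segmentation(x, 25)`, which returns at most a quarter of the original samples, concentrated where the signal varies most.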
Table 2. Classification accuracy achieved with PE using TS. Results for PE without TS are also reported for comparison, together with the accuracy gain (accuracy with TS minus accuracy without TS) and the TS% that maximises the results.
| Dataset | PE without TS | PE with TS | Enhancement | TS% |
|---|---|---|---|---|
| Bonn EEG | 92% | 78% | −14% | 70% |
| Bern–Barcelona | 63% | 81% | 18% | 10% |
| Fantasia | 68% | 90% | 22% | 18% |
| Ford Machinery A | 75% | 74% | −1% | 56% |
| House Twenty | 68% | 83% | 15% | 10% |
| PAF prediction | 82% | 88% | 6% | 31% |
| Worms | 69% | 77% | 8% | 22% |
Table 3. Parameters used to maximise the classification accuracy results shown in Table 1 and Table 2.
| Dataset | SlpEn without TS | SlpEn with TS | PE without TS | PE with TS |
|---|---|---|---|---|
| Bonn EEG | m = 4, γ = 0.1 | m = 6, γ = 0.2 | m = 3 | m = 3 |
| Bern–Barcelona | m = 6, γ = 1.0 | m = 7, γ = 0.1 | m = 9 | m = 6 |
| Fantasia | m = 5, γ = 0.5 | m = 6, γ = 0.3 | m = 4 | m = 6 |
| Ford Machinery A | m = 9, γ = 0.3 | m = 9, γ = 1.0 | m = 5 | m = 4 |
| House Twenty | m = 3, γ = 0.1 | m = 4, γ = 0.1 | m = 9 | m = 6 |
| PAF prediction | m = 3, γ = 0.1 | m = 8, γ = 0.1 | m = 3 | m = 4 |
| Worms | m = 5, γ = 0.2 | m = 6, γ = 0.6 | m = 7 | m = 6 |
Table 4. Classification accuracy achieved with SlpEn using UDS. Results for SlpEn without UDS are also reported for comparison, together with the accuracy gain (accuracy with UDS minus accuracy without UDS) and the UDS% that maximises the results.
| Dataset | SlpEn without UDS | SlpEn with UDS | Enhancement | UDS% |
|---|---|---|---|---|
| Bonn EEG | 95% | 95% | 0% | 70% |
| Bern–Barcelona | 81% | 87% | 6% | 11% |
| Fantasia | 85% | 95% | 10% | 18% |
| Ford Machinery A | 85% | 90% | 5% | 55% |
| House Twenty | 95% | 100% | 5% | 61% |
| PAF prediction | 76% | 84% | 8% | 36% |
| Worms | 71% | 79% | 8% | 46% |
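For comparison with TS, uniform downsampling simply retains a fixed percentage of the samples at evenly spaced positions, regardless of the signal's local variation. A minimal sketch (function name is illustrative):

```python
import numpy as np

def uniform_downsample(x, keep_pct):
    """Uniform downsampling (UDS): keep keep_pct percent of the samples
    at evenly spaced indices along the series."""
    x = np.asarray(x)
    n_out = max(2, int(round(len(x) * keep_pct / 100.0)))
    # evenly spaced indices, always including the first and last sample
    idx = np.linspace(0, len(x) - 1, n_out).round().astype(int)
    return x[np.unique(idx)]
```

Unlike TS, this scheme treats every segment of the series equally, which is why the two methods can favour different datasets in the tables above.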
Table 5. Classification accuracy achieved with PE using UDS. Results for PE without UDS are also reported for comparison, together with the accuracy gain (accuracy with UDS minus accuracy without UDS) and the UDS% that maximises the results.
| Dataset | PE without UDS | PE with UDS | Enhancement | UDS% |
|---|---|---|---|---|
| Bonn EEG | 92% | 83% | −9% | 70% |
| Bern–Barcelona | 63% | 84% | 21% | 18% |
| Fantasia | 68% | 88% | 20% | 19% |
| Ford Machinery A | 75% | 75% | 0% | 11% |
| House Twenty | 68% | 85% | 17% | 30% |
| PAF prediction | 82% | 90% | 8% | 35% |
| Worms | 69% | 77% | 8% | 29% |
Table 6. Parameters used to maximise the classification accuracy results shown in Table 4 and Table 5.
| Dataset | SlpEn without UDS | SlpEn with UDS | PE without UDS | PE with UDS |
|---|---|---|---|---|
| Bonn EEG | m = 4, γ = 0.1 | m = 7, γ = 0.1 | m = 3 | m = 4 |
| Bern–Barcelona | m = 6, γ = 1.0 | m = 4, γ = 0.1 | m = 9 | m = 8 |
| Fantasia | m = 5, γ = 0.5 | m = 7, γ = 0.3 | m = 4 | m = 3 |
| Ford Machinery A | m = 9, γ = 0.3 | m = 5, γ = 0.4 | m = 5 | m = 6 |
| House Twenty | m = 3, γ = 0.1 | m = 5, γ = 0.1 | m = 9 | m = 4 |
| PAF prediction | m = 3, γ = 0.1 | m = 4, γ = 0.4 | m = 3 | m = 3 |
| Worms | m = 5, γ = 0.2 | m = 6, γ = 0.4 | m = 7 | m = 9 |
Table 7. Computation time (in seconds) for each dataset at representative downsampling percentages obtained with TS and UDS. The values correspond to the full grid search over m and γ for SlpEn.
| Dataset | 100% | 70% (TS/UDS) | 50% (TS/UDS) | 25% (TS/UDS) | 10% (TS/UDS) |
|---|---|---|---|---|---|
| Bonn EEG | 232.73 | 143.26/203.25 | 101.67/144.39 | 59.94/73.67 | 16.99/17.58 |
| Bern–Barcelona | 180.09 | 152.68/234.09 | 126.87/212.44 | 79.82/121.37 | 34.37/38.74 |
| Fantasia | 336.00 | 207.14/223.51 | 127.60/131.98 | 43.29/47.00 | 9.29/9.85 |
| Ford Machinery A | 4.19 | 3.25/3.93 | 2.55/2.86 | 1.52/1.55 | 0.63/0.63 |
| House Twenty | 13.44 | 6.74/7.97 | 4.48/4.56 | 1.82/1.73 | 0.60/0.60 |
| PAF prediction | 4.80 | 2.66/2.88 | 1.71/1.79 | 0.80/0.81 | 0.34/0.34 |
| Worms | 27.17 | 15.66/17.70 | 10.52/11.42 | 5.02/5.37 | 2.29/2.41 |
Table 8. Computation time (in seconds) for each dataset at representative downsampling percentages obtained with TS and UDS. The values correspond to the full grid search over m for PE.
| Dataset | 100% | 70% (TS/UDS) | 50% (TS/UDS) | 25% (TS/UDS) | 10% (TS/UDS) |
|---|---|---|---|---|---|
| Bonn EEG | 30.81 | 19.85/23.96 | 14.10/18.43 | 8.30/9.64 | 2.85/2.90 |
| Bern–Barcelona | 49.65 | 35.33/50.03 | 25.79/40.62 | 13.49/17.54 | 4.94/5.08 |
| Fantasia | 47.70 | 29.61/29.82 | 17.73/17.67 | 5.78/5.72 | 1.37/1.36 |
| Ford Machinery A | 0.89 | 0.66/0.70 | 0.50/0.52 | 0.27/0.28 | 0.11/0.11 |
| House Twenty | 2.03 | 1.05/1.26 | 0.72/0.80 | 0.31/0.32 | 0.11/0.11 |
| PAF prediction | 0.76 | 0.47/0.48 | 0.32/0.32 | 0.15/0.15 | 0.06/0.06 |
| Worms | 5.68 | 3.40/3.47 | 2.27/2.29 | 1.05/1.08 | 0.44/0.44 |
Table 9. Computation time (in seconds) for the Bonn EEG dataset, for different values of m and representative downsampling percentages obtained with TS and UDS. The values correspond to individual SlpEn calculations, with γ fixed at 1.0 and δ at 0.001.
| m | 100% | 70% | 50% | 25% | 10% |
|---|---|---|---|---|---|
| 3 | 0.84 | 0.60 | 0.42 | 0.22 | 0.09 |
| 4 | 0.88 | 0.62 | 0.44 | 0.25 | 0.12 |
| 5 | 0.99 | 0.69 | 0.49 | 0.31 | 0.19 |
| 6 | 1.19 | 0.81 | 0.55 | 0.45 | 0.28 |
| 7 | 1.62 | 0.94 | 0.68 | 0.69 | 0.35 |
| 8 | 2.36 | 1.26 | 1.11 | 0.94 | 0.40 |
| 9 | 3.66 | 1.75 | 1.56 | 1.31 | 0.42 |
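The timings above correspond to individual SlpEn evaluations for given m, γ, and δ. A compact sketch of such an evaluation, based on a simplified reading of the published SlpEn definition (not the authors' reference code): each length-m subsequence is mapped to a pattern of slope symbols in {−2, −1, 0, 1, 2} via the thresholds γ and δ, and the Shannon entropy of the pattern frequencies is returned.

```python
import math
from collections import Counter

def slope_entropy(x, m=3, gamma=1.0, delta=1e-3):
    """SlpEn sketch: symbolise the m-1 consecutive differences of each
    length-m subsequence using thresholds gamma and delta, then compute
    the Shannon entropy of the resulting pattern frequencies."""
    def sym(d):
        if d > gamma:    return 2    # steep rise
        if d > delta:    return 1    # mild rise
        if d >= -delta:  return 0    # quasi-flat
        if d >= -gamma:  return -1   # mild fall
        return -2                    # steep fall
    patterns = Counter()
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        patterns[tuple(sym(w[j + 1] - w[j]) for j in range(m - 1))] += 1
    total = sum(patterns.values())
    return -sum((c / total) * math.log(c / total) for c in patterns.values())
```

The growth of the timings with m is consistent with this structure: larger m means longer symbol patterns and more distinct patterns to count.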
Table 10. Computation time (in seconds) for the Bonn EEG dataset, for different values of m and representative downsampling percentages obtained with TS and UDS. The values correspond to individual PE calculations.
| m | 100% | 70% | 50% | 25% | 10% |
|---|---|---|---|---|---|
| 3 | 1.22 | 0.88 | 0.62 | 0.31 | 0.13 |
| 4 | 1.59 | 1.10 | 0.79 | 0.41 | 0.17 |
| 5 | 2.06 | 1.47 | 1.05 | 0.57 | 0.26 |
| 6 | 2.88 | 1.99 | 1.45 | 0.85 | 0.41 |
| 7 | 4.48 | 2.91 | 2.15 | 1.33 | 0.55 |
| 8 | 7.16 | 4.53 | 3.25 | 1.96 | 0.63 |
| 9 | 11.41 | 6.96 | 4.79 | 2.86 | 0.69 |
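For reference, the PE evaluations timed above follow the standard ordinal-pattern scheme: each length-m window is replaced by the permutation that sorts its values, and the Shannon entropy of the pattern frequencies is computed. A minimal sketch (function name illustrative; ties are broken by index order, one common convention):

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Permutation Entropy sketch: map each length-m window to its
    ordinal pattern (the index permutation that sorts the window) and
    return the Shannon entropy of the pattern frequencies."""
    patterns = Counter()
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        patterns[tuple(sorted(range(m), key=w.__getitem__))] += 1
    total = sum(patterns.values())
    return -sum((c / total) * math.log(c / total) for c in patterns.values())
```

Compared with the SlpEn sketch, PE discards amplitude information entirely and keeps only the ordering, which is one reason the two measures rank the datasets differently in the tables above.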