Article

Advancing Fractal Dimension Techniques to Enhance Motor Imagery Tasks Using EEG for Brain–Computer Interface Applications

Department of Software Engineering, Kaunas University of Technology, Studentų 50, LT-51390 Kaunas, Lithuania
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 6021; https://doi.org/10.3390/app15116021
Submission received: 10 April 2025 / Revised: 20 May 2025 / Accepted: 24 May 2025 / Published: 27 May 2025
(This article belongs to the Section Applied Neuroscience and Neural Engineering)

Abstract

The ongoing exploration of brain–computer interfaces (BCIs) provides deeper insights into the workings of the human brain. Motor imagery (MI) tasks, such as imagining movements of the tongue, left and right hands, or feet, can be identified through the analysis of electroencephalography (EEG) signals. The development of BCI systems opens up opportunities for their application in assistive devices, neurorehabilitation, and brain stimulation and brain feedback technologies, potentially helping patients to regain the ability to eat and drink without external help, move, or even speak. In this context, the accurate recognition and deciphering of a patient’s imagined intentions is critical for the development of effective BCI systems. Therefore, to distinguish motor tasks in a manner differing from the commonly used methods in this context, we propose a fractal dimension (FD)-based approach, which effectively captures the self-similarity and complexity of EEG signals. For this purpose, all four classes provided in the BCI Competition IV 2a dataset are utilized with nine different combinations of seven FD methods: Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. The resulting features are then used to train five machine learning models: linear, Gaussian, polynomial support vector machine, regression tree, and stochastic gradient descent. As a result, the proposed method obtained top-tier results, achieving 79.2% accuracy when using the Katz vs. box-counting vs. correlation dimension FD combination (KFD vs. BCFD vs. CDFD) classified by LinearSVM, thus outperforming the state-of-the-art TWSB method (achieving 79.1% accuracy). These results demonstrate that fractal dimension features can be applied to achieve higher classification accuracy for online/offline MI-BCIs, when compared to traditional methods. 
The application of these findings is expected to facilitate the enhancement of motor imagery brain–computer interface systems, which is a key issue faced by neuroscientists.

1. Introduction

The human brain is a complex organ that still amazes scientists with its many secrets yet to be unlocked. Comprising billions of neurons, it plays essential roles in processing sensory data; regulating physiological functions; and enabling cognitive processes, emotional responses, and consciousness. As interactions between humans and computer systems continue to increase, a global scientific effort aimed at the digital recognition of all human senses (visual, audio, tactile, odor, and taste) and voluntary control (arm, foot, and tongue movement) has emerged. The information generated in the human body can be processed by various lobes of the brain and consequently transformed into digital form using cameras, sensor-based tracking systems, and biofeedback sensors [1]. Brain–computer interfaces (BCIs) are devices designed to capture and decode neural activity. There are two types of BCIs: invasive (requiring the surgical implantation of electrodes) and non-invasive (using external sensors). One of the non-invasive methods used to process brain signals is electroencephalography (EEG) [2]. This method’s popularity among researchers in the non-invasive BCI field arises from several key advantages: first, due to its high temporal resolution, EEG can detect rapid changes in brain activity; second, EEG is safe for users, requiring only the placement of electrodes on the scalp, making it suitable even for sensitive groups such as infants and children; third, it is cost-efficient when compared to other imaging techniques, such as magnetic resonance imaging (MRI); finally, EEG systems are portable and easy to use while still providing real-time data. Clinically, EEG is widely used to diagnose epilepsy, sleep disorders, brain tumors, and other neurological conditions [3].
When it comes to patients with movement disorders and physical disabilities, their common aim is to be able to perform motor tasks enabling them to eat or drink without external help. Being able to walk again and even talk is a luxury they have lost due to illness. This has resulted in the development of the motor imagery (MI) field, opening up the opportunity for these individuals to bypass their condition by controlling external assistive, mobility, and communication devices [4].
Motor imagery-based brain–computer interfaces (MI-BCIs) follow an eight-step framework to address key challenges [5]. The process begins with data acquisition, where electroencephalography (EEG) is commonly used to record brain activity. Electrodes—either gel-based or dry—capture signals while minimizing artifacts due to noise and movement [6]. MI training follows, in which users learn to imagine limb movements (e.g., left hand and right foot) without actual muscle activation, enhancing their proficiency in generating distinct neural patterns [7].
To ensure clean data, signal preprocessing is applied, incorporating bandpass filtering and artifact removal techniques such as independent component analysis (ICA) or common average referencing (CAR) [8,9]. Once preprocessed, the signals undergo feature extraction, which allows for the identification of relevant neural patterns using methods such as common spatial pattern (CSP) to enhance class discrimination [10], power spectral density (PSD) to analyze frequency bands [11], Wavelet Transforms for time–frequency analysis [12,13], and fractal dimensions to assess signal complexity [14]. To optimize performance further, channel and feature selection help in determining the most relevant electrodes, reducing the computational load while improving accuracy [15].
With a refined feature set, dimensionality reduction can be applied to handle high-dimensional data. Principal component analysis (PCA) eliminates redundancy, while Linear Discriminant Analysis (LDA) maximizes class separability [16,17,18]. In the classification stage, machine learning models such as Support Vector Machine (SVM) and decision trees help to interpret EEG patterns and translate them into commands [19,20]. Finally, performance evaluation ensures the reliability of the resulting system, using accuracy metrics or the Kappa Coefficient to measure the agreement between predicted and true labels [21,22].
Several challenges affect the performance of MI-BCIs. Enhancing the efficiency of an MI-BCI requires improving the classification accuracy, robustness, and user experience through preprocessing, feature selection, and dimensionality reduction. Reducing the calibration time is essential, as traditional BCIs require extensive training; transfer learning and subject-independent approaches help to minimize this phase [23,24,25,26,27,28,29]. Another major issue is BCI illiteracy, where some users fail to achieve reliable control due to physiological and cognitive factors, necessitating better feedback mechanisms and training protocols [30].
Further advancements focus on asynchronous MI-BCI, which allows for continuous operations but increases the risk of false positives [31]. Efforts to expand the number of commands move beyond binary control to provide more nuanced interactions [32]. Additionally, adaptive BCIs aim to adjust to variations in brain signals caused by fatigue, learning, or emotional states [33]. The development of online MI-BCIs enables real-time operations, ensuring immediate responses for controlling external devices [34]. Finally, training protocols play a crucial role in improving MI proficiency through the incorporation of effective instruction, feedback, progressive difficulty levels, and structured session scheduling [35].
The primary challenge in MI-BCI research lies in improving overall performance; thus, in our prior work, we proposed a hybrid pipeline that integrates feature extraction methodologies originally developed for emotion recognition into the motor imagery domain. Our implementation incorporated six distinct feature sets: statistical measures; wavelet analysis; Hjorth parameters; Higher-Order Spectra; fractal dimensions (including the Katz, Petrosian, and Higuchi methods); and a combined five-dimensional feature set integrating all of the abovementioned sets. The classifiers employed in that study were GSVM, CART, LinearSVM, and SVM with a polynomial kernel. The results showed the highest accuracy when three-dimensional fractal dimension features were combined with LinearSVM, surpassing the state-of-the-art results reported in previous works [36].
The fractal dimension (FD) is a quantitative metric that assesses the structural complexity of an object or a phenomenon, particularly when it exhibits self-similarity across a specific spatial or temporal scale [14]. As a nonlinear metric, the FD captures the intricate complexity and self-similarity of neurophysiological signals within the time domain [37]. Cottone et al. [38] employed Higuchi’s fractal dimension to analyze neuronal dynamics, enabling the identification and classification of cortical brain regions. This approach contributes to a deeper understanding of the brain’s structural and functional integration. Di Ieva A et al. [39] used fractal geometry and provided a universal mathematical framework to describe neuronal architecture, connectivity, and pathological variations. Marino et al. [40] analyzed electrophysiological properties such as spectral features, fractal dimension, and entropy, then identified higher gamma-band synchronization and fractal complexity in perceptual networks (PNs) compared to higher cognitive networks (HCNs), highlighting their role in the functioning of the brain and potential disruptions occurring in neurological disorders. Porcaro et al. [41] conducted fractal dimension (FD) analysis of BOLD activity in resting-state networks (RSNs), then provided insights into the altered complexity of neural dynamics in chronic migraine (CM) patients compared to healthy controls. In a different study [42], Higuchi’s fractal dimension (HFD) was used, which showed superior sensitivity in differentiating minimally conscious state (MCS) and vegetative state (VS) patients compared to traditional linear EEG spectral power methods. Smits et al. [43] conducted fractal dimension (FD)—particularly Higuchi’s fractal dimension (HFD)—analysis of resting-state EEGs and revealed age-related changes in brain activity complexity. Borri et al. 
[44] used Higuchi’s fractal dimension (HFD) and the box-counting dimension (BCD) to provide insights into genetic heterogeneity across human populations and quantified genetic variations.
Fractal dimension (FD) has recently emerged as a promising yet underexplored feature for signal analysis in BCI systems. Umut Güçlü et al. [14] used Katz’s, Higuchi’s, and rescaled range methods for FD estimation, revealing that Katz’s method combined with fuzzy k-nearest neighbors (FKNNs) achieved the highest accuracy (85%), which further improved by 3% with the implementation of the time-dependent fractal dimension (TDFD). Liu et al. [45] experimented with ALS patients suffering from impaired sensorimotor function. The Grassberger–Procaccia fractal dimension (GPFD) and Higuchi’s fractal dimension (HFD) were applied to estimate the complexity of EEG signals, combined with Fisher’s criterion-based channel selection. Phothisonothai et al. [46] showcased that FD methods provide a computationally efficient alternative to traditional methods, such as moment statistics and regression analysis, for EEG waveform analysis. Moaveninejad et al. [37] introduced FD as a novel discriminative feature for the subject-independent classification of EEG signals, and their findings showed that fractal dimension features consistently outperformed event-related desynchronization (ERD) in classification accuracy observed in unilateral hand movement tasks.
In this study, new fractal dimension (FD) algorithms are introduced. The use of different combinations of FD algorithms is an extension of our previous work [36], with the aim of obtaining better results. New FDs, namely box-counting [47], DFA [48], MFDFA [49], and correlation dimension [50], are incorporated alongside the previously used Katz [51], Petrosian [52], and Higuchi [53] methods, in groups of three, and the following four classifiers are used: GSVM [54], CART [55], LinearSVM, and SVM with polynomial kernels [56]. A stochastic gradient descent (SGD) classifier [57] is added to these, with the aim of comparing its performance against other linear models (in our case, LinearSVM) with regard to the trade-off in accuracy. While fractal dimension (FD) methods have demonstrated their effectiveness in capturing nonlinear and complex characteristics of EEG signals, their applicability has been limited by several factors. Primarily, individual FD methods such as Katz, Petrosian, and Higuchi each capture different aspects of signal complexity at varying scales, but no single method effectively captures the comprehensive multi-scale complexity inherent to EEG signals during motor imagery (MI) tasks. For example, the Katz FD predominantly captures waveform length changes, while the Higuchi FD effectively quantifies time series complexity over multiple temporal scales, and the box-counting FD reveals spatial self-similarity. Consequently, employing only a single fractal dimension approach restricts the analysis to specific types of complexity, thus limiting the overall classification performance. To address this limitation, our work innovatively proposes combining multiple fractal dimensions into feature sets, integrating both traditional and novel FD methods.
This multi-dimensional fractal analysis significantly expands the representation of EEG signal complexity, bridging the existing gap by capturing distinct characteristics simultaneously, thus enhancing the classification accuracy.
Section 2 presents the theoretical approaches utilized throughout this research. Details regarding data acquisition, the fractal dimension processing pipeline, the development environment, the evaluation framework, and parameter optimization are covered in Section 3. The results of the experiments are presented in Section 4, followed by an in-depth discussion in Section 5. Finally, Section 6 provides the study’s concluding remarks.

2. Background and Theory

This section introduces the fundamental concepts necessary to understand the methodology and objectives of this study. It explores essential aspects such as machine learning classifiers, feature extraction techniques, and preprocessing methods. Furthermore, it provides an overview of the key principles, theoretical foundations, and critical definitions to ensure a thorough understanding of the research framework.

2.1. Preprocessing Methods

This study incorporates three distinct preprocessing techniques: notch filtering, high-pass filtering, and CAR montage filtering. This section explores the underlying theoretical principles of these methods in detail.

2.1.1. Notch Filtering

The preprocessing of EEG data begins with the application of a notch filter [58], a widely used technique for removing specific unwanted frequencies from EEG signals. This filter creates a sharp, narrow attenuation zone—commonly referred to as a “notch”—at a predefined frequency. By adjusting parameters such as the sampling rate (Fs), powerline frequency (f0), and quality factor (Q), this filtering method effectively reduces signal interference. The digital representation of the notch filter is mathematically defined according to the transfer function H(z), as outlined in Equation (1):
$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{m=0}^{M} b_m z^{-m}}{1 + \sum_{n=1}^{N} a_n z^{-n}} \tag{1}$$
In this context, b_m signifies the feedforward coefficients (numerator), whereas a_n represents the feedback coefficients (denominator). The symbol M indicates the numerator’s order, while N specifies the denominator’s order. Furthermore, X(z) and Y(z) denote the Z-transforms of the input and output signals, respectively.
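As an illustration of Equation (1), the sketch below applies a notch filter to a synthetic trace using SciPy’s `iirnotch`, which returns the b_m and a_n coefficients of the transfer function. The 250 Hz sampling rate, 50 Hz powerline frequency, and Q = 30 are illustrative assumptions, not values prescribed by this study.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Assumed parameters: 250 Hz sampling, 50 Hz powerline interference, Q = 30.
fs, f0, Q = 250.0, 50.0, 30.0

# iirnotch returns the feedforward (b) and feedback (a) coefficients
# of the transfer function H(z) in Equation (1).
b, a = iirnotch(f0, Q, fs)

# Synthetic 4 s "EEG" trace: 10 Hz alpha rhythm plus 50 Hz powerline noise.
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# Zero-phase filtering so the notch does not distort EEG phase information.
clean = filtfilt(b, a, eeg)
```

After filtering, the 50 Hz component is strongly attenuated while the 10 Hz rhythm passes essentially unchanged.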

2.1.2. High-Pass Filtering

The subsequent step in EEG data preprocessing includes utilizing high-pass Butterworth filters (HPFs) [59], designed to allow EEG components with higher frequencies to pass while suppressing lower-frequency signals. HPFs are commonly employed to reduce low-frequency noise and retain essential high-frequency neural activities, making them particularly suitable for artifact removal and correcting electrode drift in scalp-based BCI systems. A high-pass Butterworth filter with order N and cutoff frequency ωc can generally be represented as shown in Equation (2):
$$C(z) = \frac{B(z)}{A(z)} \tag{2}$$
In this equation, B(z) and A(z) denote the Z-transforms of the filter coefficients b and a, respectively. When employing a fourth-order high-pass Butterworth filter, the transfer function takes the form indicated by Equation (3):
$$C(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3} + b_4 z^{-4}}{1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} + a_4 z^{-4}} \tag{3}$$
Configuring this filter involves determining the coefficients b and a based on a fourth-order Butterworth filter with a defined normalized cutoff frequency (ωc). The filter is applied to EEG signals via multiplication in the Z-domain, as presented in Equation (4):
$$\mathrm{FilteredSignal}(z) = C(z)\,\mathrm{EEGSignal}(z) \tag{4}$$
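A minimal sketch of Equations (2)–(4) with SciPy: `butter` computes the B(z) and A(z) coefficients of a fourth-order high-pass Butterworth filter, and zero-phase filtering realizes the Z-domain multiplication of Equation (4). The 0.5 Hz cutoff and the synthetic drift signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0     # assumed sampling rate (Hz)
cutoff = 0.5   # assumed cutoff frequency (Hz), typical for drift removal
order = 4      # fourth-order filter, as in Equation (3)

# b and a are the numerator B(z) and denominator A(z) coefficients of C(z);
# the cutoff is normalized to the Nyquist frequency internally via fs=.
b, a = butter(order, cutoff, btype="highpass", fs=fs)

t = np.arange(0, 8, 1 / fs)
# Synthetic trace: slow electrode drift plus a 12 Hz oscillation.
eeg = 5.0 * np.sin(2 * np.pi * 0.05 * t) + np.sin(2 * np.pi * 12 * t)

# Applying the filter corresponds to the Z-domain product of Equation (4).
filtered = filtfilt(b, a, eeg)
```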

2.1.3. CAR Montage Filtering

For the final stage of EEG data preprocessing, the common average reference (CAR) montage [60] is implemented. This technique, frequently applied in EEG signal processing, minimizes shared noise across channels by recalculating each channel’s reference based on the average of all channels. Consequently, neural activities become more distinct relative to this combined average, thereby improving the signal-to-noise ratio, and enhancing the overall data quality. Expressed mathematically, if X represents the EEG signal matrix, the CAR method is described by Equation (5):
$$\mathrm{CAR} = X - \frac{1}{N}\sum_{i=1}^{N} X_i \tag{5}$$
In this context, CAR refers to the EEG data matrix following the application of the common average reference filtering. The original EEG signal matrix is indicated by X, with N representing the total number of EEG channels. Furthermore, X_i denotes the EEG signal from the i-th channel.
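The recalculation in Equation (5) is a one-line operation on a channels × samples matrix. The NumPy sketch below (the 22-channel shape and the synthetic shared-noise data are illustrative assumptions) shows how subtracting the across-channel average suppresses noise common to all electrodes.

```python
import numpy as np

def car_filter(X):
    """Common average reference, Equation (5): subtract the mean over
    channels from every channel. X has shape (n_channels, n_samples)."""
    return X - X.mean(axis=0, keepdims=True)

# Toy data: 22 channels of independent activity plus strong shared noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1, 1000))            # noise common to all channels
X = rng.normal(size=(22, 1000)) + 10.0 * shared
X_car = car_filter(X)
```

After CAR, the across-channel mean of every sample is exactly zero, and the shared component is removed.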

2.2. Feature Extraction Techniques

This study utilizes seven distinct feature extraction techniques: namely, Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. This section provides an in-depth explanation of each method.

2.2.1. Katz

The Katz fractal dimension (KFD) [51] quantifies a signal’s complexity by analyzing how its structure expands across different scales. This approach assesses deviations from a smooth trajectory, taking into account both the signal’s overall length and the spatial extent it occupies, as shown in Equation (6):
$$FD_{\mathrm{Katz}} = \frac{\ln n}{\ln n + \ln\!\left(\frac{d}{L}\right)} \tag{6}$$
In this context, ln indicates the natural logarithm, n represents the total number of data points in the time series, L denotes the cumulative length of the signal path, and d signifies the maximum distance between the initial point and any subsequent point within the signal.
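Equation (6) translates directly into a few lines of NumPy; the sketch below is an illustrative implementation, not the exact code used in this study. A straight line yields the minimum value of 1, while an irregular signal yields a larger value.

```python
import numpy as np

def katz_fd(signal):
    """Katz fractal dimension, Equation (6)."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    L = np.sum(np.abs(np.diff(signal)))       # cumulative length of the path
    d = np.max(np.abs(signal - signal[0]))    # max distance from the first point
    return np.log(n) / (np.log(n) + np.log(d / L))

line_fd = katz_fd(np.linspace(0.0, 1.0, 1000))   # smooth trajectory
rng = np.random.default_rng(1)
noise_fd = katz_fd(rng.normal(size=1000))        # irregular trajectory
```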

2.2.2. Petrosian

The Petrosian fractal dimension (PFD) [52] quantifies the degree of irregularity or self-similarity in a signal by counting the sign changes in its first derivative. This measure is formally described by Equation (7):
$$FD_{\mathrm{Petrosian}} = \frac{\log_{10} n}{\log_{10} n + \log_{10}\!\left(\frac{n}{n + 0.4\,N_{\delta}}\right)} \tag{7}$$
In this context, log_10 denotes the logarithm with base 10, n represents the total number of data points in the time series, and N_δ corresponds to the count of sign changes in the derivative. This parameter provides significant insights into both the frequency characteristics and waveform complexity of the analyzed signal.
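A short illustrative implementation of Equation (7) follows, where N_δ is obtained by counting sign changes in the first difference of the signal; the test signals are assumptions for demonstration.

```python
import numpy as np

def petrosian_fd(signal):
    """Petrosian fractal dimension, Equation (7)."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    diff = np.diff(signal)
    # a sign change occurs where consecutive differences have opposite signs
    n_delta = np.sum(diff[:-1] * diff[1:] < 0)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

smooth_fd = petrosian_fd(np.sin(np.linspace(0, 4 * np.pi, 1000)))
rng = np.random.default_rng(2)
noisy_fd = petrosian_fd(rng.normal(size=1000))
```

A smooth sine has very few derivative sign changes, so its PFD stays near 1; noise produces many sign changes and a larger value.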

2.2.3. Higuchi

The Higuchi fractal dimension (HFD) [53] quantifies the intricacy or roughness of a signal by observing how its characteristics evolve through progressive downsampling, as outlined in Equations (8)–(10):
$$\mathrm{HFD} = \mathrm{slope}\bigl(\log x,\ \log L\bigr) \tag{8}$$
$$L(k) = \frac{1}{k}\sum_{m=0}^{k-1}\left[\frac{N-1}{\left\lfloor\frac{N-m}{k}\right\rfloor k^{2}}\sum_{i=1}^{\left\lfloor\frac{N-m}{k}\right\rfloor}\left|\mathrm{data}(m+ik) - \mathrm{data}\bigl(m+(i-1)k\bigr)\right|\right] \tag{9}$$
$$x = 1, 2, \ldots, k_{\max} \tag{10}$$
In this context, HFD indicates the computed Higuchi fractal dimension. The parameter L(k) denotes the average curve length computed over k subsets, where k is an integer specifying the interval used in the calculation of L(k). The maximum value of k, referred to as k_max, establishes the various scales for analyzing the time series. Additionally, N represents the total number of data points, and m ranges from 0 to k−1. The curve length corresponding to a specific scale k and starting point m is represented by L_m(k), with i indexing the summation over ⌊(N−m)/k⌋ data points. Here, the term “data” refers specifically to the EEG time series under examination, whereas x is the integer sequence from 1 to k_max, used alongside L to determine the slope.
In a previous study [36], the theoretical formulation of the HFD remained valid, but an issue in the implementation of the algorithmic function caused negative values due to an incorrect sign convention in the log–log regression step. To address this, a refined function was developed in this study. The new implementation ensures mathematically valid fractal dimension estimation by correctly applying log–log scaling and regression without introducing sign errors. This refined method is used in all fractal feature combinations that include the Higuchi FD, ensuring consistency across the different experiments.
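A sketch of Equations (8)–(10) is given below (not the exact study code); regressing log L(k) against log(1/k) makes the slope come out positive, avoiding the sign-convention pitfall noted above. The choice k_max = 10, the 0-based indexing, and the test signals are assumptions.

```python
import numpy as np

def higuchi_fd(signal, k_max=10):
    """Higuchi fractal dimension, Equations (8)-(10)."""
    signal = np.asarray(signal, dtype=float)
    N = len(signal)
    L = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            n_m = (N - m - 1) // k          # usable steps for this start point
            if n_m == 0:
                continue
            # curve length for start m and step k, with Higuchi's
            # normalization factor (N - 1) / (n_m * k) and the extra 1/k
            dist = np.sum(np.abs(signal[m + k::k][:n_m] - signal[m::k][:n_m]))
            Lk.append(dist * (N - 1) / (n_m * k * k))
        L.append(np.mean(Lk))
    # slope of log L(k) vs log(1/k) gives a positive fractal dimension
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
    return slope

rng = np.random.default_rng(3)
fd_noise = higuchi_fd(rng.normal(size=2000))                 # near 2 in theory
fd_sine = higuchi_fd(np.sin(np.linspace(0, 4 * np.pi, 2000)))  # near 1 in theory
```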

2.2.4. Box-Counting

The box-counting fractal dimension (BCFD) [47] is a well-established method for quantifying the self-similarity and complexity of a signal. BCFD serves as a powerful feature extraction technique. It effectively characterizes the intricate fluctuations in EEG signals by estimating their fractal properties, thereby capturing nonlinear dynamics associated with motor-related cortical activity. The box-counting dimension D is determined by covering the signal’s trajectory with a series of boxes of size ϵ and counting the number of occupied boxes, N (ϵ), as a function of ϵ. The fractal dimension is then obtained by computing the slope of the linear regression in a log–log plot of N (ϵ) vs. ϵ, as provided in Equation (11):
$$D = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log\left(1/\epsilon\right)} \tag{11}$$
In this context, ϵ represents the scale of the box size. In practical computations, a range of box sizes is chosen, and the relationship is approximated using a least-squares fit, shown in Equation (12):
$$D \approx \frac{\Delta \log N(\epsilon)}{\Delta \log\left(1/\epsilon\right)} \tag{12}$$
This measure provides an indication of the EEG signal’s complexity, making it a valuable feature for MI classification, as it captures the inherent self-affinity present in the brain’s electrical activity.
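The grid-covering procedure behind Equations (11) and (12) can be sketched as follows; the normalization of the signal’s graph to the unit square, the dyadic scale set, and the test signals are assumptions made for illustration.

```python
import numpy as np

def box_counting_fd(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Box-counting fractal dimension, Equations (11)-(12), estimated from
    the graph of the normalized signal: for a grid of box size 1/s,
    N(eps) is the number of boxes containing at least one sample point."""
    signal = np.asarray(signal, dtype=float)
    t = np.linspace(0.0, 1.0, len(signal))
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    counts, inv_eps = [], []
    for s in scales:
        # box indices of every sample point on an s x s grid
        ix = np.minimum((t * s).astype(int), s - 1)
        iy = np.minimum((y * s).astype(int), s - 1)
        counts.append(len(set(zip(ix, iy))))
        inv_eps.append(s)
    # slope of the least-squares fit of log N(eps) vs log(1/eps), Eq. (12)
    slope, _ = np.polyfit(np.log(inv_eps), np.log(counts), 1)
    return slope

line_fd = box_counting_fd(np.linspace(0.0, 1.0, 4000))
rng = np.random.default_rng(7)
noise_fd = box_counting_fd(rng.normal(size=4000))
```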

2.2.5. Multifractal Detrended Fluctuation Analysis

Multifractal detrended fluctuation analysis (MFDFA) [49] is a powerful method for quantifying the complexity and scale-invariant properties of non-stationary signals. MFDFA provides a robust framework for extracting nonlinear and multifractal characteristics from EEG signals. MFDFA captures both global and local fluctuations by computing the generalized Hurst exponent H(q) across different moments q; thus, in MI tasks, it reveals the multifractal nature of brain activity. The method begins by constructing the integrated profile X(i) of a mean-centered signal x(i), as in Equation (13):
$$X(i) = \sum_{k=1}^{i}\bigl[x(k) - \langle x \rangle\bigr] \tag{13}$$
In this context, ⟨x⟩ is the mean of the signal. The profile is then divided into non-overlapping segments of scale s, and a least-squares polynomial fit P_v(k) is computed for each segment v. The root mean square (RMS) fluctuation is determined, as in Equation (14):
$$F_v^2(s) = \frac{1}{s}\sum_{k=1}^{s}\bigl[X\bigl((v-1)s + k\bigr) - P_v(k)\bigr]^2 \tag{14}$$
The fluctuation function F_q(s) for different orders q is then computed, as in Equation (15):
$$F_q(s) = \left\{\frac{1}{N_s}\sum_{v=1}^{N_s}\bigl[F_v^2(s)\bigr]^{q/2}\right\}^{1/q}, \quad q \neq 0 \tag{15}$$
For q = 0, the fluctuation function is given by the geometric mean, as in Equation (16):
$$F_0(s) = \exp\left\{\frac{1}{2N_s}\sum_{v=1}^{N_s}\ln F_v^2(s)\right\} \tag{16}$$
The generalized Hurst exponent H(q) is estimated as the slope of the log–log plot of F_q(s) vs. s, as in Equation (17):
$$H(q) = \frac{\Delta \log F_q(s)}{\Delta \log s} \tag{17}$$
The mean of H(q) across different q-values is often used as a single representative measure of fractal complexity.
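The steps of Equations (13)–(17) can be condensed into a compact sketch, assuming linear detrending, a fixed set of scales, and q ∈ {−2, 2}; these choices, like the white-noise test signal (for which H(q) ≈ 0.5 for all q), are illustrative assumptions.

```python
import numpy as np

def mfdfa_hurst(signal, scales=(16, 32, 64, 128), q_values=(-2, 2)):
    """Minimal MFDFA sketch, Equations (13)-(17): generalized Hurst
    exponent H(q) with order-1 (linear) detrending."""
    x = np.asarray(signal, dtype=float)
    X = np.cumsum(x - x.mean())                  # integrated profile, Eq. (13)
    H = {}
    for q in q_values:
        Fq = []
        for s in scales:
            n_seg = len(X) // s
            F2 = []
            for v in range(n_seg):
                seg = X[v * s:(v + 1) * s]
                k = np.arange(s)
                trend = np.polyval(np.polyfit(k, seg, 1), k)  # P_v(k)
                F2.append(np.mean((seg - trend) ** 2))        # Eq. (14)
            F2 = np.asarray(F2)
            if q == 0:
                Fq.append(np.exp(0.5 * np.mean(np.log(F2))))     # Eq. (16)
            else:
                Fq.append(np.mean(F2 ** (q / 2)) ** (1 / q))     # Eq. (15)
        H[q] = np.polyfit(np.log(scales), np.log(Fq), 1)[0]      # Eq. (17)
    return H

rng = np.random.default_rng(4)
H = mfdfa_hurst(rng.normal(size=4096))
```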

2.2.6. Detrended Fluctuation Analysis

Detrended fluctuation analysis (DFA) [48] is a widely used technique for quantifying the self-similarity and long-range temporal correlations of non-stationary signals. DFA provides a valuable measure of signal complexity by evaluating the fractal properties of EEG time series. The method effectively distinguishes between different cognitive and motor tasks by capturing scale-invariant patterns in brain activity. The DFA process begins by constructing the integrated profile X i of the mean-centered EEG signal x(i), as in Equation (18):
$$X(i) = \sum_{k=1}^{i}\bigl[x(k) - \langle x \rangle\bigr] \tag{18}$$
In this context, ⟨x⟩ is the mean of the signal. The integrated profile is then divided into non-overlapping segments of length s. In each segment v, a linear least-squares polynomial P_v(k) is fitted to remove local trends, and the root mean square (RMS) fluctuation is computed, as in Equation (19):
$$F_v^2(s) = \frac{1}{s}\sum_{k=1}^{s}\bigl[X\bigl((v-1)s + k\bigr) - P_v(k)\bigr]^2 \tag{19}$$
The overall fluctuation function F s is obtained by averaging all the segments, as in Equation (20):
$$F(s) = \left[\frac{1}{N_s}\sum_{v=1}^{N_s} F_v^2(s)\right]^{1/2} \tag{20}$$
Here, N_s is the number of segments. The DFA exponent α, representing the fractal scaling behavior of the signal, is estimated as the slope of the log–log plot of F(s) vs. s, as in Equation (21):
$$\alpha = \frac{\Delta \log F(s)}{\Delta \log s} \tag{21}$$
The DFA exponent α provides insight into the temporal correlation structure of the EEG signal, with varying values indicating different types of dynamics.
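The DFA procedure of Equations (18)–(21) can be sketched as below (linear detrending and a fixed scale set are assumptions). For white noise, α ≈ 0.5 (uncorrelated), while for its cumulative sum (a random walk) α ≈ 1.5, illustrating how α reflects the correlation structure.

```python
import numpy as np

def dfa_alpha(signal, scales=(16, 32, 64, 128, 256)):
    """DFA scaling exponent alpha, Equations (18)-(21), linear detrending."""
    x = np.asarray(signal, dtype=float)
    X = np.cumsum(x - x.mean())                  # integrated profile, Eq. (18)
    F = []
    for s in scales:
        n_seg = len(X) // s
        rms = []
        for v in range(n_seg):
            seg = X[v * s:(v + 1) * s]
            k = np.arange(s)
            trend = np.polyval(np.polyfit(k, seg, 1), k)  # P_v(k)
            rms.append(np.mean((seg - trend) ** 2))       # F_v^2(s), Eq. (19)
        F.append(np.sqrt(np.mean(rms)))                   # F(s), Eq. (20)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]    # alpha, Eq. (21)

rng = np.random.default_rng(5)
white = rng.normal(size=4096)
alpha_white = dfa_alpha(white)            # uncorrelated noise
alpha_walk = dfa_alpha(np.cumsum(white))  # random walk
```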

2.2.7. Correlation Dimension

The correlation dimension (CDFD) [50] is a widely used measure of fractal complexity. The CDFD serves as an effective feature extraction technique that quantifies the degree of self-similarity and chaotic behavior in EEG signals. It estimates the dimensionality of the signal’s attractor in the phase space, offering insights into the underlying neural dynamics during MI tasks. The CDFD is computed using the correlation sum C(r), which measures the probability that two points in the reconstructed phase space are within a given distance r, as in Equation (22):
$$C(r) = \lim_{N \to \infty} \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} \Theta\bigl(r - \lVert X_i - X_j \rVert\bigr) \tag{22}$$
In this context, X_i and X_j are points in the embedded phase space, Θ is the Heaviside function (which counts pairs within the radius r), and N is the number of points in the trajectory. The correlation dimension D_2 is estimated as the slope of the log–log plot of C(r) vs. r, as in Equation (23):
$$D_2 = \lim_{r \to 0} \frac{\Delta \log C(r)}{\Delta \log r} \tag{23}$$
In practical computation, a range of radius values r is chosen, and the relationship is approximated using a least-squares fit in the log–log domain. The CDFD provides a measure of EEG signal complexity, with higher values indicating greater fractal dimensionality and more complex neural dynamics.
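A sketch of this Grassberger–Procaccia procedure follows: delay embedding, the correlation sum of Equation (22) over a range of radii drawn from the pairwise-distance distribution, and a least-squares slope as in Equation (23). The embedding dimension, lag, max-norm distance, and percentile-based radii are illustrative assumptions.

```python
import numpy as np

def correlation_dimension(signal, emb_dim=2, lag=1):
    """Correlation dimension D2 sketch, Equations (22)-(23)."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (emb_dim - 1) * lag
    # delay embedding into an emb_dim-dimensional phase space
    emb = np.column_stack([x[i * lag:i * lag + n] for i in range(emb_dim)])
    # pairwise max-norm distances over unique pairs i < j
    diff = emb[:, None, :] - emb[None, :, :]
    dist = np.max(np.abs(diff), axis=2)[np.triu_indices(n, k=1)]
    # radii spanning the small-distance regime of the distribution
    radii = np.logspace(np.log10(np.percentile(dist, 1)),
                        np.log10(np.percentile(dist, 20)), 8)
    C = np.array([np.mean(dist < r) for r in radii])  # correlation sum C(r)
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

d_line = correlation_dimension(np.linspace(0.0, 1.0, 1000))  # 1D trajectory
rng = np.random.default_rng(6)
d_noise = correlation_dimension(rng.normal(size=1000))       # plane-filling
```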

2.3. Classification Algorithms

This study incorporates five machine learning models: LinearSVM, CART, GSVM, stochastic gradient descent (SGD), and SVM with a polynomial kernel. A detailed explanation of each classifier is provided in this section.

2.3.1. LinearSVM

The linear support vector machine (SVM) [56] serves as the first supervised learning model in this study. In the context of EEG analysis, the linear SVM aims to determine an optimal hyperplane that maximizes the margin between two different classes. This margin represents the smallest distance separating the hyperplane from the nearest data points, commonly known as support vectors. The linear SVM’s decision boundary is formally defined by Equation (24):
$$f(x) = w \cdot x + b \tag{24}$$
In this context, x denotes the feature vector derived from EEG signals, w is the weight vector perpendicular to the hyperplane, and b indicates the bias term, which adjusts the hyperplane’s position.
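In practice, the maximum-margin hyperplane of Equation (24) can be fitted with scikit-learn; the sketch below uses a synthetic stand-in for FD feature vectors (the data, class separation, and C value are assumptions, not the study’s configuration).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for fractal-dimension feature vectors of two MI classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)),
               rng.normal(3.0, 1.0, (100, 3))])
y = np.array([0] * 100 + [1] * 100)

# Standardize features, then fit the linear decision function f(x) = w.x + b.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)
acc = clf.score(X, y)
```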

2.3.2. GSVM

A variant of the standard support vector machine, termed the GSVM, is a support vector machine using the Gaussian radial basis function (RBF) as a kernel [54]. This model serves as the second supervised learning model in this research. By integrating the Gaussian kernel—also referred to as the radial basis function—into the SVM framework, GSVM efficiently handles data that cannot be separated linearly (a frequent scenario in EEG signal analysis). The mathematical definition of the RBF Gaussian kernel is shown in Equation (25):
$$K(x, x') = \exp\bigl(-\gamma \lVert x - x' \rVert^2\bigr) \tag{25}$$
In this context, K represents the radial basis function (RBF) kernel, which is used to map data into a higher-dimensional space. The variables x and x′ belong to the input space, and γ serves as an adjustable parameter that influences the impact of individual training points on the decision boundary.
In the kernel-induced feature space, the SVM decision function can be written as in Equation (26):
$$f(x) = \sum_{i=1}^{N} \alpha_i y_i K(x, x_i) + b \tag{26}$$
In this context, N signifies the total count of support vectors, α_i represents the Lagrange multipliers obtained from the SVM optimization process, y_i refers to the training data class labels, and b is the bias term adjusting the decision boundary’s position.
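To illustrate why the Gaussian kernel of Equation (25) matters, the sketch below fits scikit-learn’s `SVC` to XOR-style data that no linear hyperplane can separate; the γ and C values and the synthetic data are assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style labels: positive in quadrants I and III, negative in II and IV.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# RBF kernel (Eq. 25) inside the decision function of Eq. (26).
acc_rbf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y).score(X, y)

# A linear kernel on the same data for comparison.
acc_lin = SVC(kernel="linear", C=10.0).fit(X, y).score(X, y)
```

The RBF model fits the checkerboard structure almost perfectly, while the linear model cannot do much better than chance.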

2.3.3. CART

The classification and regression tree (CART) [55] algorithm is employed for the third supervised learning model, which harnesses decision trees for classification and regression purposes. CART has been widely adopted in EEG studies for identifying distinct mental states and detecting patterns linked to neurological conditions. The algorithm operates following these primary steps:
Given EEG features X and a target variable Y , the algorithm proceeds as follows:
  1. Initiate with the root node containing all instances.
  2. Terminate if all instances have identical Y; otherwise, continue.
  3. Select the feature x_i and threshold θ that minimize the impurity:
     $$(x_i, \theta) = \underset{x,\,t}{\arg\min}\ \mathrm{Impurity}(X, Y, x, t)$$
  4. Divide the node into two child nodes:
     $$X_{\mathrm{left}} = \{x \in X \mid x_i \le \theta\}, \qquad X_{\mathrm{right}} = \{x \in X \mid x_i > \theta\}$$
  5. Recursively repeat steps 2–4 for X_left and X_right.
  6. Terminate when the maximum tree depth is reached or further splits do not significantly enhance the impurity reduction.
In this context, X represents the set of input variables, which consist of EEG features, while Y denotes the target variable, such as the type of brain activity being classified. Each individual EEG feature is denoted as x_i, and θ is the threshold for splitting nodes. The measure of homogeneity within the nodes after a split, Impurity(X, Y, x, t), can be assessed using Gini impurity, entropy, or another relevant metric. The dataset is divided into two subsets: X_left, containing instances where x_i is less than or equal to θ, and X_right, which includes instances where x_i is greater than θ. To minimize the impurity, the optimal values of x and t are determined via arg min over x and t.
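The recursive partitioning steps above map directly onto scikit-learn’s `DecisionTreeClassifier`, where the Gini criterion and depth cap mirror the impurity measure and stopping rules; the data and hyperparameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class stand-in for EEG feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),
               rng.normal(2.0, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# Gini impurity selects (x_i, theta); max_depth implements the stopping rule.
tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
tree.fit(X, y)
acc = tree.score(X, y)
```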

2.3.4. SVM Polynomial

A polynomial kernel-based support vector machine (SVM) [56] is employed as the fourth supervised learning classifier. This model effectively handles nonlinear relationships between EEG features and their categories by projecting data into a higher-dimensional space using a polynomial kernel. The kernel function is mathematically described in Equation (27):
$$K(\mathbf{x}, \mathbf{x}') = \left(\gamma\, \mathbf{x}^{T}\mathbf{x}' + r\right)^{d}$$
In this context, K(x, x′) represents the kernel function, which quantifies the similarity between two feature vectors. The variables x and x′ correspond to input feature vectors extracted from EEG signals. The parameter γ serves as a scaling factor, adjusting the impact of different features within the kernel function. Additionally, r acts as a constant that balances the contribution of lower-order and higher-order terms. The polynomial degree d determines the complexity of the decision boundary.
Within the SVM framework utilizing a polynomial kernel, the decision function is formulated in Equation (28):
$$f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b$$
In this context, N represents the number of support vectors chosen during training, while α_i denotes the Lagrange multipliers derived from solving the SVM optimization problem. The variables y_i correspond to the class labels assigned to the training samples, and x_i refers to the support vectors extracted from the training dataset. Finally, b serves as the bias term or intercept, adjusting the decision boundary accordingly.
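Equation (27) can be computed directly, and the same kernel plugged into an SVM; the sketch below assumes scikit-learn, with illustrative values of γ, r, and d (not the tuned values of Section 3.5) and synthetic placeholder data.

```python
import numpy as np
from sklearn.svm import SVC

def poly_kernel(x, x_prime, gamma=0.5, r=1.0, d=3):
    """Polynomial kernel of Equation (27): K(x, x') = (gamma * x.x' + r)^d."""
    return (gamma * np.dot(x, x_prime) + r) ** d

# Placeholder vectors standing in for fractal-dimension EEG feature vectors.
a = np.array([0.9, 1.2, 1.6])
b = np.array([1.0, 1.1, 1.5])
k_ab = poly_kernel(a, b)

# The same kernel used inside an SVM, whose decision function is Equation (28);
# gamma and coef0 map onto the kernel's gamma and r parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = rng.integers(0, 4, size=60)
clf = SVC(kernel="poly", degree=3, gamma=0.5, coef0=1.0).fit(X, y)
pred = clf.predict(X)
```

After fitting, `clf.support_vectors_` holds the x_i of Equation (28) and `clf.dual_coef_` the products α_i·y_i.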

2.3.5. SGD

Finally, a stochastic gradient descent (SGD) classifier [57] is employed as the final supervised learning classifier. This is a widely recognized linear classification method. SGD leverages iterative updates driven by randomly sampled subsets of the training data, making it particularly effective for large-scale machine learning tasks. It provides an efficient method for training linear classifiers (such as support vector machines or logistic regression), particularly when dealing with high-dimensional feature spaces, such as fractal-based EEG features. SGD minimizes a given loss function L(w). In the specific case of an SVM employing the hinge loss, the optimization objective is given in Equation (29):
$$L(\mathbf{w}) = \frac{1}{N}\sum_{i=1}^{N} \max\left(0,\; 1 - y_i\, \mathbf{w}^{T}\mathbf{x}_i\right) + \frac{\lambda}{2}\,\lVert\mathbf{w}\rVert^{2}$$
In this context, w is the weight vector defining the decision boundary, x_i represents the input EEG feature vector, y_i ∈ {−1, 1} is the class label, and λ is the regularization parameter controlling the model complexity. The weight vector w is updated at each iteration using the gradient descent update rule, where η is the learning rate controlling the step size of the updates, as in Equation (30):
$$\mathbf{w} \leftarrow \mathbf{w} - \eta\left(\nabla L(\mathbf{w}) + \lambda\,\mathbf{w}\right)$$
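A minimal NumPy sketch of the hinge-loss subgradient step of Equations (29) and (30) for the binary case (labels in {−1, +1}); the data, learning rate, and regularization strength are illustrative assumptions, not the values tuned in this study.

```python
import numpy as np

def sgd_hinge_step(w, x_i, y_i, eta=0.01, lam=0.1):
    """One stochastic update of Equation (30) under the hinge loss of Equation (29).
    If the margin y_i * w.x_i is below 1, the sample contributes -y_i * x_i to the
    subgradient; the L2 term lam * w is always applied."""
    margin = y_i * w.dot(x_i)
    grad = lam * w - (y_i * x_i if margin < 1 else 0.0)
    return w - eta * grad

rng = np.random.default_rng(0)
w = np.zeros(3)
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))  # nearly separable labels in {-1, +1}
for epoch in range(5):
    for i in rng.permutation(len(X)):  # randomly sampled updates, as described above
        w = sgd_hinge_step(w, X[i], y[i])
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.2f}")
```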

2.4. Cross-Validation

Cross-validation accuracy provides a standard approach for evaluating a machine learning model’s effectiveness [61]. This technique partitions the dataset into k-folds, ensuring each fold is used for both training and testing. For instance, let the dataset D be split into k portions D_i, for i = 1, …, k. The model is trained on all subsets except D_i and then evaluated on D_i to yield a performance measure S_i. The overall model performance is then calculated as the average of these individual scores, illustrated mathematically in Equation (31):
$$CV = \frac{1}{k}\sum_{i=1}^{k} S_i$$
In this context, CV represents the cross-validation score, which evaluates the model’s performance. The parameter k denotes the number of folds in the cross-validation process. The variable S_i represents the performance score obtained when evaluating the i-th subset D_i. Depending on the evaluation objectives, this performance score can be quantified using various metrics, including accuracy (as utilized in this research), mean squared error, F1-score, recall, and precision.
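Assuming scikit-learn (consistent with the Python environment described later in this article), Equation (31) corresponds directly to averaging the k per-fold scores; the data below is a synthetic placeholder.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for fractal-dimension features with four MI class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = rng.integers(0, 4, size=100)

# Equation (31): CV is the mean of the k per-fold scores S_i (here k = 5, accuracy).
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5, scoring="accuracy")
cv = scores.mean()
print(scores, cv)
```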

3. Materials and Methods

This section details the cross-validation assessment strategy employed in this research, along with a comprehensive overview of the development environment used to assess the system’s performance. It further elaborates on the fractal dimension combinations of the proposed pipeline. Additionally, a summary of the EEG data acquisition system utilized in the study is included, in order to establish context for the subsequent analysis. Furthermore, this section highlights the parameter optimization and fine-tuning processes used during the study, explaining their roles in enhancing the overall system performance and influencing the final outcomes.

3.1. EEG Data Acquisition

The BCI Competition IV 2a dataset [62] is utilized in this study. It originates from recordings distributed in GDF format, a widely recognized standard for biomedical signals; however, for practical purposes, many studies (including this one) use the equivalent .mat (MATLAB R2022a) format. The dataset is widely used as a benchmark for MI-based brain–computer interfaces, largely due to its standardized protocol, balanced class distribution, and thorough documentation.
This set comprises contributions from nine participants, each performing four distinct motor imagery (MI) tasks—left hand, right hand, both feet, and tongue—during two separate sessions on different days. In each session, the participant completed six runs. Each run comprised 48 trials, yielding 288 trials per session and a total of 576 trials across both sessions for each subject. The trials typically lasted around 7 to 8 s, and annotations were present to mark trial onsets and the specific MI cue (one of the four classes). The sequence of prompts and rest periods conforms to a predefined structure, which typically includes an initial interval of eyes open, followed by eyes closed, and a brief phase of eye movements. As a result, the entire sessions varied in duration from approximately 2016 to 2304 s (see Figure 1 for the overall timing diagram).
At a sampling rate of 250 Hz, for every participant, 25 channels were recorded. Of these, 22 channels are dedicated to electroencephalography (EEG), placed according to the international 10–20 system, and centered around motor-relevant scalp sites (C3, Cz, C4, etc.). The remaining three channels captured electrooculography (EOG) signals. In applications focused on MI classification, EOG channels are often excluded to minimize artifacts unrelated to motor cortex activity (see Figure 2 for the montage representation).
To maintain the data quality, the EEG signals were optimized through the application of a bandpass filter ranging from 0.5 Hz to 100 Hz, in addition to employing a 50 Hz notch filter specifically designed to remove interference caused by power line noise.
While the original dataset is sampled at 250 Hz, a rate sufficient for capturing the critical frequency bands (e.g., alpha and beta) involved in motor imagery, it is worth noting that a higher sampling rate, such as 512 Hz, would allow for finer temporal resolution and the retention of higher-frequency components (e.g., gamma activity). However, this would also result in significantly larger data volumes and increased computational load. Since motor imagery activity resides primarily in the lower-frequency bands, the 250 Hz rate offers a reasonable balance between data resolution and processing efficiency.

3.2. Fractal Dimension Combination Pipeline

In this study, we ran experiments across 45 distinct Jupyter Notebooks (v6.5.3), leveraging a set of nine fractal dimension feature combinations paired with five classification algorithms (see Figure 3 for the followed approach and Table 1 for the list of combinations).
Each scenario was evaluated for nine different subjects and encompassed four BCI IV 2a motor imagery tasks, including left hand, right hand, foot, and tongue. The classifiers used in these trials were LinearSVM, CART, GSVM, SVM with a polynomial kernel, and SGD.

3.2.1. Signal Preprocessing and Artifact Removal

The preprocessing approach began by eliminating powerline interference using a notch filter [58], a crucial step in removing a frequent artifact from EEG recordings. This ensured that subsequent filters and spatial transformations were not confounded by unwanted noise. Next, a high-pass filter [59] removed low-frequency drifts, stabilizing the baseline by filtering out gradual fluctuations. To further increase the signal-to-noise ratio, a common average reference (CAR) montage [60] was applied, incorporating the original reference channel to maintain the data’s full rank [63]. Verification of the rank before and after CAR confirmed that no dimensionality was lost. Following these steps, the common spatial pattern (CSP) algorithm [10] was introduced to uncover spatial filters that highlighted the largest variance differences among the motor imagery conditions. A separate spatial filter [64] was then used to accentuate the relevant regions corresponding to each task. In the final stage, a band-pass filter [65] was used to isolate the principal EEG rhythms, completing the preprocessing sequence.
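The first three preprocessing stages can be sketched with SciPy. Note that the plain CAR shown here subtracts the across-channel mean and therefore reduces the data rank by one, whereas the full-rank variant used in this study additionally retains the original reference channel [63]; the signal below is synthetic and the filter orders are illustrative.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 250.0  # BCI IV 2a sampling rate

# Placeholder multichannel recording: 22 channels x 4 s of synthetic noise.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(22, int(4 * fs)))

# 1) Notch filter (f0 = 50 Hz, Q = 30) to remove power-line interference.
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
eeg = filtfilt(b_notch, a_notch, eeg, axis=1)

# 2) High-pass filter at 0.5 Hz to suppress slow baseline drifts.
b_hp, a_hp = butter(2, 0.5, btype="highpass", fs=fs)
eeg = filtfilt(b_hp, a_hp, eeg, axis=1)

# 3) Common average reference: subtract the mean across channels at each sample.
eeg_car = eeg - eeg.mean(axis=0, keepdims=True)
```

`filtfilt` applies each filter forward and backward, giving zero-phase filtering so that trial timing is not distorted relative to the cue annotations.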

3.2.2. Feature Extraction

The feature extraction phase employed nine distinct combinations of fractal dimension (FD) metrics, drawn from a core set of seven FD methods, each capturing unique aspects of EEG signal complexity.
The Katz FD [51] estimates how the waveform evolves over multiple scales by assessing the ratio between the total traversal distance and the signal’s maximum vertical (or horizontal) extent, revealing complexities tied to motor-related activity. The Petrosian FD [52] highlights rapid signal fluctuations by counting sign changes in the first derivative, linking them to essential cortical chaos. The Higuchi FD [53] decomposes signals at various downsampled scales to track their effective length, a process sensitive to temporal dynamics. A refinement was made to the HFD function to correct an issue identified in the previous implementation. The earlier function occasionally returned negative values due to an incorrect sign convention in the log–log regression step. The updated function corrects this by properly computing the log–log scaling and ensuring positive HFD values. The box-counting FD [47] overlays a grid of boxes, quantifying self-similar, nonlinear structures frequently associated with motor tasks. The MFDFA FD [49] expands upon standard fluctuation analysis by measuring multifractal properties across multiple time scales, reflecting the diverse variability within EEG signals. The DFA FD [48] removes underlying drifts to reveal long-range correlations in non-stationary data, revealing organized patterns often hidden in cortical signals. The correlation dimension FD [50] quantifies chaotic behavior by examining how points cluster in increasing dimensional embeddings, capturing complex dynamics in EEG during motor imagery.
By combining these seven FD methods into nine feature sets, we gain a comprehensive view of the signal’s fractal properties, from delicate textural details to larger chaotic structures, eventually enhancing EEG classification in the motor imagery context.
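As an illustration of the two simplest estimators, minimal NumPy implementations of the Katz and Petrosian FDs following the standard formulations in [51,52] could look as follows; the exact implementations used in this study may differ.

```python
import numpy as np

def katz_fd(x):
    """Katz FD: log(n) / (log(n) + log(d/L)), with L the total path length of the
    waveform and d the maximum distance from the first sample (Katz formulation)."""
    x = np.asarray(x, dtype=float)
    steps = np.abs(np.diff(x))
    L = steps.sum()                      # total traversal distance
    d = np.max(np.abs(x - x[0]))         # signal's maximum extent from the origin
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def petrosian_fd(x):
    """Petrosian FD based on the number of sign changes in the first derivative."""
    x = np.asarray(x, dtype=float)
    diff = np.diff(x)
    n_delta = np.sum(diff[1:] * diff[:-1] < 0)  # sign changes in the derivative
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

# A noisy 10 Hz sine as a stand-in for one second of a single EEG channel at 250 Hz.
t = np.linspace(0, 1, 250)
sig = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.default_rng(0).normal(size=250)
print(katz_fd(sig), petrosian_fd(sig))
```

Both estimators return values above 1, with rougher (more complex) waveforms pushed toward higher FDs.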

3.2.3. Classification and Performance Evaluation

The classification phase included five models: LinearSVM, CART, GSVM, SVM with a polynomial kernel, and SGD. These classifiers were implemented to cover a wide range of both linear and nonlinear EEG classification techniques.
LinearSVM has been widely recognized for its ability to efficiently handle high-dimensional datasets, a characteristic relevant to EEG signals [56]. GSVM, building on the standard SVM, introduces a nonlinear kernel that can capture complex dependencies in the data [54]. CART is a decision tree technique known for its interpretability and capacity to model nonlinear relationships, although it may be prone to overfitting when dealing with a large number of EEG-derived features [55]. SGD was added to this ensemble due to its computational efficiency and effectiveness in scenarios involving incremental or large-scale learning, making it a practical choice in scenarios characterized by rapidly changing experimental conditions [57]. Finally, the polynomial kernel SVM extends the linear SVM framework to handle more complex structures in EEG signals, offering additional flexibility by mapping data into higher-order feature spaces [56].

3.3. Development Environment

This study’s workflow was implemented using Python (version 3.9.16), providing a robust environment for data preprocessing, feature extraction, and machine learning classification. Code was developed and tested primarily through the Jupyter Notebook and Jupyter Lab platforms, enabling interactive data exploration, rapid prototyping, and efficient documentation. The environment and software dependencies were managed using the CLI version of Conda, facilitating straightforward installation, maintenance, and reproducibility of experiments. Visual Studio Code (v1.100.2, Universal) was utilized as the primary integrated development environment (IDE), offering features such as efficient debugging, easy version control integration, and simplified workflow management. Computational tasks were executed locally on a MacBook Air equipped with an Apple M1 processor and 16 GB of RAM (Apple Inc., Cupertino, CA, USA), ensuring adequate computational resources to efficiently handle EEG signal processing and extensive classifier training tasks.

3.4. Cross-Validation Evaluation Strategy

In order to strengthen the reliability of our model evaluations, we adopted a 5-fold cross-validation approach [61]. This technique is especially relevant for EEG-related research, given the considerable inter-subject and inter-session variability found in datasets such as BCI Competition IV 2a. By partitioning the data into multiple subsets, cross-validation permits a detailed examination of classifier performance, whether using LinearSVM, CART, GSVM, SVM with a polynomial kernel, or SGD, while mitigating overfitting risks and producing stable accuracy estimates.
Implementing k-fold cross-validation (with k = 5 ) means that about 80% of the data is used to train the model in each fold, while the remaining 20% serves as a test set. Every sample ultimately appears in a test split exactly once and in training splits the rest of the time. This process ensures that the performance metrics reflect diverse data partitions, thereby encouraging balanced generalization across the entire dataset.
Importantly, the BCI Competition IV 2a dataset is inherently balanced in terms of class distribution. Each of the four motor imagery tasks (left hand, right hand, feet, and tongue) is represented equally, with 12 trials per class per run. This consistent class structure eliminates the need for data rebalancing techniques. Consequently, the classification performance is not influenced by class imbalance.

3.5. Parameter Optimization and Fine-Tuning Process

The parameters in this study were adjusted and fine-tuned to optimize performance across different experimental scenarios. A detailed summary of these parameters is provided in Table 2.
Multiple iterations were conducted to determine the optimal parameters for this study. The process began by refining the preprocessing filters, selecting the most effective fractal dimension factors, and enhancing the performance of the classifiers used in the experiments.
In the initial processing phase, we applied a series of filters. First, a notch filter was configured with a center frequency f0 of 50 Hz to remove line noise and a quality factor Q of 30 to determine its bandwidth. Next, a high-pass filter was established at 0.5 Hz, ensuring that frequencies below this threshold were neglected. As a third step, a common average reference montage was used, with its filtered axis set to 1 and the dimension parameter enabled. This configuration guarantees that all channels share a uniform reference, and that the data’s original structure remains maintained. Fourth, a total of 24 CSP filters spanned each class and temporal segment of interest. We selected a Butterworth filter (order 2) to target a maximum frequency of 40 Hz. The filter bandwidths (bw) were defined by the list [2, 4, 8, 16, 32] Hz—a set chosen to isolate different EEG bands, such as alpha and beta. In addition, we adapted the start and end times for processing by multiplying them with the sampling rate (250 Hz), thus enabling CSP to extract features within multiple overlapping time frames.
Following these steps, we optimized the fractal dimension feature extraction parameters for the Higuchi, box-counting, correlation dimension, multifractal detrended fluctuation analysis (MFDFA), and detrended fluctuation analysis (DFA) methods. For the Higuchi feature set, the maximum number of intervals (kmax) was 10, indicating the scaling factor used in creating subsequences. For the box-counting approach, we used 10 different scaling levels, with minimum and maximum box sizes set to 1 and 874, respectively. In the correlation dimension method, we selected an embedding dimension of 5, a delay of 5, and a set of radii defined by logspace(−3, 0, 30). Additionally, a small epsilon value of 1 × 10⁻¹² was included to safeguard numerical stability. As for MFDFA, the chosen q-values ranged from −5 to 5, while the minimum and maximum scales were set to 4 and 437, respectively, with a scaling ratio of 2.0. Finally, for DFA, we used the same minimum and maximum scales (4 and 437) and scaling ratio (2.0), ensuring a uniform multi-scale analysis across these fractal dimension estimators.
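With kmax = 10 as above, a minimal Higuchi FD sketch using the corrected sign convention discussed in Section 3.2.2 (the FD is the negated slope of the log–log regression, so the estimate stays positive) might look as follows; this is an illustrative implementation, not necessarily the one used in this study.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi FD with kmax = 10. The mean curve length L(k) scales as k^(-D),
    so the FD D is the negated slope of log(L(k)) versus log(k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lks = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # average over the k possible starting offsets
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi's normalization factor
            lengths.append(dist * norm / k)
        lks.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(np.arange(1, kmax + 1)), np.log(lks), 1)
    return -slope  # corrected sign convention: FD is positive

# Sanity checks: white noise has a theoretical HFD near 2; a smooth sine sits near 1.
rng = np.random.default_rng(0)
hfd_noise = higuchi_fd(rng.normal(size=1000))
hfd_sine = higuchi_fd(np.sin(np.linspace(0, 4 * np.pi, 1000)))
print(hfd_noise, hfd_sine)
```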
Finally, we fine-tuned the parameters of the five classifiers (LinearSVM, CART, GSVM, SVM with a polynomial kernel, and SGD) and validated their performance with 5-fold cross-validation. LinearSVM was configured with C = 0.1, intercept_scaling = 1, hinge loss, a limit of 1000 iterations, a one-vs-rest (ovr) multi-class strategy, an L2 penalty, random_state = 1, and tol = 0.00001. GSVM employed C = 20, an RBF kernel, a kernel degree of 10, γ = auto, coef0 = 0.0, tol = 0.001, a cache size of 10,000, max_iter = −1, and an ovr decision function. For CART, we set max_depth = 10, random_state = 1, the Gini criterion, a best splitter, min_samples_split = 2, and min_samples_leaf = 1. The SGD classifier used a hinge loss, an L2 penalty, max_iter = 1000, tol = 0.001, α = 0.1, and random_state = 1. We ended with the polynomial kernel SVM, which adopted a poly kernel (degree of 10), γ = auto, coef0 = 0.0, tol = 0.001, a cache size of 10,000, max_iter = −1, and an ovr decision scheme.
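In scikit-learn terms (an assumption consistent with the Python environment described in Section 3.3), these settings correspond roughly to the constructor calls below; the one-vs-rest strategy is LinearSVC's default multi-class behavior and is therefore not passed explicitly, for compatibility with newer scikit-learn releases.

```python
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier

# The five classifiers with the hyperparameters listed above (configuration sketch).
linear_svm = LinearSVC(C=0.1, intercept_scaling=1, loss="hinge", max_iter=1000,
                       penalty="l2", random_state=1, tol=1e-5)
gsvm = SVC(C=20, kernel="rbf", degree=10, gamma="auto", coef0=0.0, tol=0.001,
           cache_size=10000, max_iter=-1, decision_function_shape="ovr")
cart = DecisionTreeClassifier(max_depth=10, random_state=1, criterion="gini",
                              splitter="best", min_samples_split=2, min_samples_leaf=1)
sgd = SGDClassifier(loss="hinge", penalty="l2", max_iter=1000, tol=0.001,
                    alpha=0.1, random_state=1)
svm_poly = SVC(kernel="poly", degree=10, gamma="auto", coef0=0.0, tol=0.001,
               cache_size=10000, max_iter=-1, decision_function_shape="ovr")
```

Note that for the RBF kernel the `degree` argument is ignored by scikit-learn; it is kept here only to mirror the parameter listing above.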

4. Results

In this section, we present all the obtained results and discuss their significance. We compare different fractal dimension feature sets, evaluate how each classifier performs, and relate our findings to previous studies.

4.1. Comparison Between Fractal Features Combinations

This subsection compares the performance of the fractal feature combinations presented in Table 3. The classification performance of different fractal feature combinations is evaluated across multiple subjects. The results indicate that linear support vector machine (LinearSVM) consistently achieves the highest classification accuracy across all subjects and feature sets. With a mean accuracy ranging between 78.57% and 79.16%, LinearSVM emerges as the most stable and reliable classifier for EEG-based motor imagery classification. Other classifiers, such as GSVM and SVM with a polynomial kernel (SVM Poly), present competitive performance but fall slightly behind LinearSVM in terms of mean accuracy. The SGD classifier achieves lower accuracy when compared to the other SVM-based models. However, CART decision tree classifiers yield the lowest classification results, with a mean accuracy of approximately 58–59%, which suggests poor generalization across the subjects.
Among the various feature sets tested, the combination of Katz (KFD), box-counting (BCFD), and correlation dimension (CDFD) features yields the highest mean accuracy of 79.16% when used with LinearSVM. This suggests that integrating multiple fractal dimension measures enhances the discriminative power of EEG features. Other high-performing feature sets include KFD vs. PFD vs. CDFD and KFD vs. CDFD vs. MFDFA, both yielding mean accuracies above 79%, indicating that KFD is a crucial feature for EEG-based classification. In contrast, feature sets integrating detrended fluctuation analysis (DFA) show slightly weaker performance, suggesting that DFA alone may not be as effective in capturing the complexity of motor imagery EEG signals.
The Katz’s fractal dimension (KFD), Petrosian’s fractal dimension (PFD), and Higuchi’s fractal dimension (HFD) combination continued to perform well among the other feature sets. However, following the correction of the Higuchi fractal dimension (HFD) function, the accuracy for this combination slightly decreased from 79.04% to 78.95%. This minor reduction in accuracy confirms that the original function’s sign error did not substantially distort the results; however, its correction ensures mathematically valid feature extraction.
The results further reveal considerable inter-subject variability in classification accuracy. Specifically, subjects 1, 3, 7, 8, and 9 consistently achieve a high classification performance (85–93%), while subjects 2, 4, and 6 present significantly lower accuracy (42–70%) across multiple classifiers and feature sets. This difference suggests potential challenges related to subject-dependent EEG signal characteristics, which may be caused by differences in brain activity or signal quality.
Given these findings, LinearSVM is the preferred classifier due to its superior performance and stability. Furthermore, feature sets incorporating KFD, BCFD, and CDFD appear to be the most effective for motor imagery classification, highlighting the importance of combining multiple fractal dimension measures.

4.2. Comparison Between Classifiers

The comparison table of the related fractal dimension combinations compared classifier-wise is presented in Table 4. The results in the latter show that LinearSVM and GSVM achieve the highest classification accuracy overall. LinearSVM performs best, with an average accuracy of 78.88%, slightly outperforming GSVM (78.23%). This is consistent with findings in EEG-based machine learning, where support vector machines (SVMs), especially those with linear or Gaussian kernels, are known for their robustness and ability to handle complex neural signals. In contrast, CART (decision trees) shows the lowest performance, with an average accuracy of 58.83%, likely due to its tendency to overfit when handling high-dimensional or nonlinear data, a common challenge in EEG classification.
Among the different fractal dimension (FD) feature sets, those including Katz fractal dimension (KFD) consistently rank among the best-performing combinations. For example, the “KFD vs. BCFD vs. CDFD” feature set achieves the highest accuracy (79.16%) with LinearSVM. Other classifiers, including GSVM, SVM with a polynomial kernel, and SGD, also tend to perform better when KFD features are included. This suggests that KFD effectively captures EEG complexities and provides useful information for motor imagery classification. While the accuracy differences among FD combinations are not large, the slight improvements seen with KFD-based sets indicate that KFD extracts unique signal features that other FD methods may miss.
Regarding classifier performance, SGD ranks in the mid-range (67.95%), performing better than CART but remaining behind SVM models. SVM with a polynomial kernel maintains a stable accuracy (75.37%), indicating that nonlinear kernels can effectively model EEG data without requiring excessive computational resources. Overall, these results highlight the importance of both classifier selection and feature choice. Using a well-optimized SVM and integrating KFD-based fractal dimension features leads to a stronger performance in EEG-based motor imagery classification.

4.3. Comparison with Previous Works

This section compares our proposed method with several previously published approaches in motor imagery (MI) classification. The comparison focuses on mean classifier accuracy, a key measure of how well different algorithms classify EEG signals related to motor imagery tasks. Using these results, we ensure an objective evaluation of how our method performs relative to existing innovative techniques.
We include results from various established methods such as SincNet [66], HSS-ELM [67], IFNet [68], TSLDA [69], TSFBCSP-GA [70], FBRTS [71], TWSB [72], multi-scale CSP/Riemannian approaches [73], and the hybrid ER/MI pipeline (KFD vs. PFD vs. HFD) [36]. Additionally, we compare these methods with the novel fractal dimension (FD) combination developed in this study (KFD vs. BCFD vs. CDFD). This allows us to examine both classical and more recent approaches, providing a deep perspective on improvements in MI classification and highlighting the strengths of different methodologies.
To ensure consistency, we round the performance metrics in the results table to the nearest tenth of a percent. Furthermore, Figure 4 and Figure 5 present a visualization of the classification results, showing how MI research has evolved over time. This study contributes to the field with a new state-of-the-art result, reaching 79.2% mean accuracy. A comparative analysis of the proposed fractal dimension combinations and previously published methods is presented in Table 5.
Our results show a clear discrepancy in classification accuracy for certain subjects, particularly A02 and A06, where the performance in several cases approaches or falls below chance-level performance. This phenomenon has been consistently observed in prior BCI studies and is commonly attributed to BCI illiteracy, where specific users fail to generate distinct and repeatable neural patterns during motor imagery tasks. This may arise due to a range of physiological, cognitive, or psychological factors, such as low signal-to-noise ratio, poor engagement, or inconsistencies in mental strategy.

5. Discussion

In this section, we delve into this study’s findings, offering a thorough interpretation of their implications and identifying potential directions for future research.
During our previous research [36], the selection of the six distinct feature sets—statistical, wavelet analysis, Hjorth parameters, Higher-Order Spectra, fractal dimensions, and a combined multi-dimensional feature set—was motivated by their proven effectiveness in capturing different aspects of EEG signal complexity in prior emotion recognition research. Specifically, statistical features and wavelet analysis capture general signal trends and time–frequency characteristics, respectively; Hjorth parameters offer insights into activity, mobility, and complexity; Higher-Order Spectra provide information on signal nonlinearity and interactions across frequencies; and fractal dimensions quantify complexity and self-similarity within EEG signals. Combining these diverse feature extraction techniques aimed to leverage complementary strengths, enhancing the overall discriminative power of the classification pipeline.
This work expands on the existing research by integrating different fractal dimension (FD) combinations, including Katz, Petrosian, Higuchi, box-counting, DFA, MFDFA, and correlation dimension, into a single framework for motor imagery (MI) classification. As noted in the Introduction, FDs capture nonlinear characteristics in EEG data, potentially uncovering complex signal details that simpler features may miss. Our expanded set of FD methods reinforces the idea that combining distinct fractal measures enhances the representation of brain dynamics, thereby improving classification performance.
Our results also show that LinearSVM repeatedly achieves the highest accuracy, followed by Gaussian SVM (GSVM) and SVM with a polynomial kernel (SVM Poly). In contrast, decision trees (CART) were found to underperform, likely due to overfitting on fractal dimension-based EEG signals. Adding stochastic gradient descent (SGD) provided a useful comparison against LinearSVM, demonstrating lower performance despite its effectiveness in dealing with high-dimensional data. These findings underscore the central role of the careful choice of machine learning models; even strong feature sets can yield weaker results without an appropriate classifier.
The refinement of the Higuchi fractal dimension (HFD) function is an essential methodological improvement addressed in the present study, ensuring that the extracted fractal dimensions align with their expected mathematical properties. The corrected implementation did not substantially alter the ranking of feature sets, as the Katz, Petrosian, and Higuchi combination remained among the top-performing configurations. However, the accuracy dropped slightly (from 79.04% to 78.95%) due to the refined feature representation. This, in turn, affected the overall accuracy of this combination relative to the other methods in Table 5, decreasing the outcome from 79.1% to 79.0%.
To clearly present the contribution of the newly introduced fractal dimension methods, we compared classification performance using feature sets consisting solely of traditional fractal dimensions (Katz, Petrosian, and Higuchi) against those integrating the new algorithms (box-counting, DFA, MFDFA, and correlation dimension). The baseline combination of traditional FDs (KFD vs. PFD vs. HFD) yielded a mean accuracy of 78.95%. When integrating the newly proposed FD methods (e.g., box-counting and correlation dimension), the mean accuracy notably improved to 79.16% (KFD vs. BCFD vs. CDFD). This incremental enhancement illustrates the specific value that the new fractal dimension methods add by capturing additional aspects of EEG signal complexity previously overlooked by traditional FD methods alone.
Although our proposed method demonstrated enhanced classification accuracy through the integration of multiple fractal dimension (FD) techniques, an important practical limitation is the increased computational load inherent in using combined FD methods. Each fractal dimension computation (e.g., Box-counting, DFA, MFDFA, and correlation dimension) significantly adds to the complexity of feature extraction due to their iterative and multi-scale calculation processes, particularly when processing large-scale EEG datasets or aiming for real-time classification. This complexity may impose constraints on the real-world applicability of our pipeline, mainly in online and real-time brain–computer interface (BCI) scenarios where computational efficiency is crucial.
While the findings generated from this study confirm the effectiveness of multi-dimensional fractal representations, different fractal dimension methods remain unexplored in the context of motor imagery BCIs. Future research should investigate alternative fractal complexity measures, which may offer complementary insights into the complexity of EEG signals.

6. Conclusions

In this study, nine different fractal dimension configurations were introduced to the four-class BCI Competition IV-2a dataset using seven distinct FD methods: Katz, Petrosian, Higuchi, box-counting, MFDFA, DFA, and correlation dimension. Each combination/feature set was classified using five different machine learning models: LinearSVM, CART, GSVM, SVM with a polynomial kernel, and SGD. The outcomes presented in this work confirm the value of multi-method fractal dimension (FD) features in interpreting motor imagery signals. The novel contribution of the current study is underlined by the highest accuracy resulting from the Katz vs. box-counting vs. correlation dimension FD combination (KFD vs. BCFD vs. CDFD) classified by LinearSVM, which achieved an accuracy of 79.2%. This result surpassed the most recent method in the field—that is, the TWSB method (which achieved a score of 79.1%). Despite the addition of the stochastic gradient descent (SGD) model, linear support vector machine (LinearSVM) still dominated and outperformed all the other classifiers. The proposed FD-based pipeline has a level of performance that situates it alongside modern algorithms. This reinforces the idea that fractal dimension-based techniques offer a powerful approach to enhance motor imagery tasks utilizing the underlying dynamics of EEG signals for brain–computer interfaces.

Author Contributions

V.J. supervised the research, analyzed the results, provided feedback, revised the draft, and approved the final version of the article. A.F.M. implemented the research, executed experimental work, analyzed the results, and revised the final article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available from the BCI Competition IV website (dataset 2a): https://www.bbci.de/competition/iv/, accessed on 20 January 2025 [62]. The code and Jupyter Notebooks can be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALS: Amyotrophic Lateral Sclerosis.
BCFD: Box-counting fractal dimension.
BCI: Brain–computer interface.
CAR: Common average reference.
CART: Classification and regression tree.
CDFD: Correlation dimension fractal dimension.
CLI: Command Line Interface.
CM: Chronic migraine.
CSP: Common spatial pattern.
CV: Cross-validation.
DFA: Detrended fluctuation analysis.
EEG: Electroencephalography.
EOG: Electrooculography.
ERD: Event-related desynchronization.
FD: Fractal dimension.
FKNN: Fuzzy k-nearest neighbors.
GDF: General Data Format.
GPFD: Grassberger–Procaccia fractal dimension.
GSVM: Gaussian support vector machine.
HCNs: Higher cognitive networks.
HFD: Higuchi fractal dimension.
HPFs: High-Pass Butterworth Filters.
ICA: Independent component analysis.
IDE: Integrated development environment.
KFD: Katz fractal dimension.
LDA: Linear Discriminant Analysis.
MAT: MATLAB file.
MCS: Minimally conscious state.
MFDFA: Multifractal detrended fluctuation analysis.
MI: Motor imagery.
MI-BCI: Motor imagery brain–computer interface.
MRI: Magnetic resonance imaging.
PCA: Principal component analysis.
PFD: Petrosian fractal dimension.
PNs: Perceptual networks.
Poly: Polynomial.
PSD: Power spectral density.
RBF: Radial basis function.
RSNs: Resting-state networks.
SGD: Stochastic gradient descent.
SVM: Support vector machine.
TDFD: Time-dependent fractal dimension.
VS: Vegetative state.

References

  1. Sourina, O.; Wang, Q.; Liu, Y.; Nguyen, M.K. Fractal-based brain state recognition from EEG in human-computer interaction. Biomed. Eng. Syst. Technol. 2013, 4, 258–272. [Google Scholar]
  2. Baillet, S.; Mosher, J.C.; Leahy, R.M. Electromagnetic brain mapping. IEEE Signal Process. Mag. 2001, 18, 14–30. [Google Scholar] [CrossRef]
  3. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  4. McFarland, D.J.; Wolpaw, J.R. EEG-based brain–computer interfaces. Curr. Opin. Biomed. Eng. 2017, 4, 194–200. [Google Scholar] [CrossRef]
  5. Singh, A.; Hussain, A.A.; Lal, S.; Guesgen, H.W. A comprehensive review on critical issues and possible solutions of motor imagery-based electroencephalography brain-computer interface. Sensors 2021, 21, 2173. [Google Scholar] [CrossRef]
  6. Martini, M.L.; Oermann, E.K.; Opie, N.L.; Panov, F.; Oxley, T.; Yaeger, K. Sensor modalities for brain-computer interface technology: A comprehensive literature review. Neurosurgery 2020, 86, E108–E117. [Google Scholar] [CrossRef] [PubMed]
  7. Jeunet, C.; Jahanpour, E.; Lotte, F. Why standard brain-computer interface (BCI) training protocols should be changed: An experimental study. J. Neural Eng. 2016, 13, 671–679. [Google Scholar] [CrossRef]
  8. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134. [Google Scholar] [CrossRef]
  9. Islam, M.K.; Rastegarnia, A.; Yang, Z. Methods for artifact detection and removal from scalp EEG: A review. Neurophysiol. Clin. Neurophysiol. 2016, 46, 287–305. [Google Scholar] [CrossRef]
  10. Lotte, F.; Guan, C. Regularizing common spatial patterns to improve BCI designs: Unified theory and new algorithms. IEEE Trans. Biomed. Eng. 2010, 58, 355–362. [Google Scholar] [CrossRef]
  11. Samuel, O.W.; Geng, Y.; Li, X.; Li, G. Towards efficient decoding of multiple classes of motor imagery limb movements based on EEG spectral and time domain descriptors. J. Med. Syst. 2017, 41, 194. [Google Scholar] [CrossRef] [PubMed]
  12. Gao, Z.; Wang, Z.; Ma, C.; Dang, W.; Zhang, K. A wavelet time-frequency representation-based complex network method for characterizing brain activities underlying motor imagery signals. IEEE Access 2018, 6, 65796–65802. [Google Scholar] [CrossRef]
  13. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain-computer interface: A review. Array 2019, 1, 100003. [Google Scholar] [CrossRef]
  14. Güçlü, U.; Güçlütürk, Y.; Loo, C.K. Evaluation of fractal dimension estimation methods for feature extraction in motor imagery-based brain-computer interface. Procedia Comput. Sci. 2011, 3, 589–594. [Google Scholar] [CrossRef]
  15. Feng, J.K.; Jin, J.; Daly, I.; Zhou, J.; Niu, Y.; Wang, X.; Cichocki, A. An optimized channel selection method based on multifrequency CSP-rank for motor imagery-based BCI system. Comput. Intell. Neurosci. 2019, 2019, 8068357. [Google Scholar] [CrossRef]
  16. Gupta, A.; Agrawal, R.K.; Kaur, B. Performance enhancement of mental task classification using EEG signal: A study of multivariate feature selection methods. Soft Comput. 2015, 19, 2799–2812. [Google Scholar] [CrossRef]
  17. Jusas, V.; Samuvel, S.G. Classification of motor imagery using a combination of user-specific band and subject-specific band for brain-computer interface. Appl. Sci. 2019, 9, 4990. [Google Scholar] [CrossRef]
  18. Ayesha, S.; Hanif, M.K.; Talib, R. Overview and comparative study of dimensionality reduction techniques for high-dimensional data. Inf. Fusion 2020, 59, 44–58. [Google Scholar] [CrossRef]
  19. Roy, S.; Rathee, D.; Chowdhury, A.; Prasad, G. Assessing impact of channel selection on decoding of motor and cognitive imagery from MEG data. J. Neural Eng. 2020, 17, 056037. [Google Scholar] [CrossRef] [PubMed]
  20. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain-computer interfaces: A 10-year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef]
  21. Thomas, E.; Dyson, M.; Clerc, M. An analysis of performance evaluation for motor-imagery based BCI. J. Neural Eng. 2013, 10, 031001. [Google Scholar] [CrossRef] [PubMed]
  22. Schlögl, A.; Kronegg, J.; Huggins, J.; Mason, S. Evaluation criteria for BCI research. In Toward Brain-Computer Interfacing; MIT Press: Cambridge, UK, 2007; Volume 1, pp. 327–342. [Google Scholar]
  23. Saha, S.; Ahmed, K.I.U.; Mostafa, R.; Hadjileontiadis, L.; Khandoker, A. Evidence of variabilities in EEG dynamics during motor imagery-based multiclass brain–computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 371–382. [Google Scholar] [CrossRef]
  24. Cachón, A.; Vázquez, R.A. Tuning the parameters of an integrate and fire neuron via a genetic algorithm for solving pattern recognition problems. Neurocomputing 2015, 148, 187–197. [Google Scholar] [CrossRef]
  25. Rodrigues, P.L.C.; Jutten, C.; Congedo, M. Riemannian Procrustes analysis: Transfer learning for brain–computer interfaces. IEEE Trans. Biomed. Eng. 2019, 66, 2390–2401. [Google Scholar] [CrossRef]
  26. Zhu, X.; Li, P.; Li, C.; Yao, D.; Zhang, R.; Xu, P. Separated channel convolutional neural network to realize the training-free motor imagery BCI systems. Biomed. Signal Process. Control 2019, 49, 396–403. [Google Scholar] [CrossRef]
  27. Joadder, M.; Siuly, S.; Kabir, E.; Wang, H.; Zhang, Y. A new design of mental state classification for subject-independent BCI systems. IRBM 2019, 40, 297–305. [Google Scholar] [CrossRef]
  28. Zhao, X.; Zhao, J.; Cai, W.; Wu, S. Transferring common spatial filters with semi-supervised learning for zero-training motor imagery brain-computer interface. IEEE Access 2019, 7, 58120–58130. [Google Scholar] [CrossRef]
  29. Kwon, O.; Lee, M.; Guan, C.; Lee, S. Subject-independent brain-computer interfaces based on deep convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3839–3852. [Google Scholar] [CrossRef] [PubMed]
  30. Lee, M.H.; Kwon, O.Y.; Kim, Y.J.; Kim, H.K.; Lee, Y.E.; Williamson, J.; Fazli, S.; Lee, S.W. EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience 2019, 8, giz002. [Google Scholar] [CrossRef]
  31. Yu, Y.; Zhou, Z.; Yin, E.; Jiang, J.; Tang, J.; Liu, Y.; Hu, D. Toward brain-actuated car applications: Self-paced control with a motor imagery-based brain-computer interface. Comput. Biol. Med. 2016, 77, 148–155. [Google Scholar] [CrossRef]
  32. Yu, Y.; Zhou, Z.; Liu, Y.; Jiang, J.; Yin, E.; Zhang, N.; Wang, Z.; Liu, Y.; Wu, X.; Hu, D. Self-paced operation of a wheelchair based on a hybrid brain-computer interface combining motor imagery and P300 potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 2516–2526. [Google Scholar] [CrossRef]
  33. Rong, H.; Li, C.; Bao, R.; Chen, B. Incremental adaptive EEG classification of motor imagery-based BCI. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7. [Google Scholar]
  34. Asensio-Cubero, J.; Gan, J.; Palaniappan, R. Multiresolution analysis over graphs for a motor imagery-based online BCI game. Comput. Biol. Med. 2015, 68, 21–26. [Google Scholar] [CrossRef]
  35. Škola, F.; Tinková, S.; Liarokapis, F. Progressive training for motor imagery brain-computer interfaces using gamification and virtual reality embodiment. Front. Hum. Neurosci. 2019, 13, 329. [Google Scholar] [CrossRef]
  36. Mohamed, A.F.; Jusas, V. Developing innovative feature extraction techniques from the emotion recognition field on motor imagery using brain–computer interface EEG signals. Appl. Sci. 2024, 14, 11323. [Google Scholar] [CrossRef]
  37. Moaveninejad, S.; D’Onofrio, V.; Tecchio, F.; Ferracuti, F.; Iarlori, S.; Monteriù, A.; Porcaro, C. Fractal dimension as a discriminative feature for high-accuracy classification in motor imagery EEG-based brain-computer interface. Comput. Methods Programs Biomed. 2024, 244, 107944. [Google Scholar] [CrossRef]
  38. Cottone, C.; Porcaro, C.; Cancelli, A.; Olejarczyk, E.; Salustri, C.; Tecchio, F. Neuronal electrical ongoing activity as a signature of cortical areas. Brain Struct. Funct. 2017, 222, 2115–2126. [Google Scholar] [CrossRef]
  39. Di Ieva, A.; Grizzi, F.; Jelinek, H.; Pellionisz, A.J.; Losa, G.A. Fractals in the neurosciences, part I: General principles and basic neurosciences. Neuroscientist 2014, 20, 403–417. [Google Scholar] [CrossRef]
  40. Marino, M.; Liu, Q.; Samogin, J.; Tecchio, F.; Cottone, C.; Mantini, D.; Porcaro, C. Neuronal dynamics enable the functional differentiation of resting state networks in the human brain. Hum. Brain Mapp. 2019, 40, 1445–1457. [Google Scholar] [CrossRef]
  41. Porcaro, C.; Di Renzo, A.; Tinelli, E.; Di Lorenzo, G.; Parisi, V.; Caramia, F.; Fiorelli, M.; Di Piero, V.; Pierelli, F.; Coppola, G. Haemodynamic activity characterization of resting state networks by fractal analysis and thalamocortical morphofunctional integrity in chronic migraine. J. Headache Pain 2020, 21, 1. [Google Scholar] [CrossRef] [PubMed]
  42. Porcaro, C.; Marino, M.; Carozzo, S.; Russo, M.; Ursino, M.; Ruggiero, V.; Ragno, C.; Proto, S.; Tonin, P. Fractal dimension feature as a signature of severity in disorders of consciousness: An EEG study. Int. J. Neural Syst. 2022, 32, 2250031. [Google Scholar] [CrossRef]
  43. Smits, F.M.; Porcaro, C.; Cottone, C.; Cancelli, A.; Rossini, P.M.; Tecchio, F. Electroencephalographic fractal dimension in healthy ageing and Alzheimer’s disease. PLoS ONE 2016, 11, e0149587. [Google Scholar] [CrossRef] [PubMed]
  44. Borri, A.; Cerasa, A.; Tonin, P.; Citrigno, L.; Porcaro, C. Characterizing fractal genetic variation in the human genome from the HapMap project. Int. J. Neural Syst. 2022, 32, 2250028. [Google Scholar] [CrossRef] [PubMed]
  45. Liu, Y.H.; Huang, S.; Huang, Y.D. Motor imagery EEG classification for patients with amyotrophic lateral sclerosis using fractal dimension and Fisher’s criterion-based channel selection. Sensors 2017, 17, 1557. [Google Scholar] [CrossRef]
  46. Phothisonothai, M.; Watanabe, K. Optimal fractal feature and neural network: EEG-based BCI applications. In Brain-Computer Interface Systems–Recent Progress and Future Prospects; IntechOpen: Rijeka, Croatia, 2013; pp. 91–113. [Google Scholar]
  47. Falconer, K. Fractal Geometry: Mathematical Foundations and Applications; Wiley: Hoboken, NJ, USA, 1990. [Google Scholar]
  48. Peng, C.-K.; Havlin, S.; Stanley, H.E.; Goldberger, A.L. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 1995, 5, 82–87. [Google Scholar] [CrossRef]
  49. Kantelhardt, J.W.; Zschiegner, S.A.; Koscielny-Bunde, E.; Havlin, S.; Bunde, A.; Stanley, H.E. Multifractal detrended fluctuation analysis of nonstationary time series. Physica A 2002, 316, 87–114. [Google Scholar] [CrossRef]
  50. Grassberger, P.; Procaccia, I. Measuring the strangeness of strange attractors. Physica D 1983, 9, 189–208. [Google Scholar] [CrossRef]
  51. Katz, M.J. Fractals and the analysis of waveforms. Comput. Biol. Med. 1988, 18, 145–156. [Google Scholar] [CrossRef]
  52. Hatamikia, S.; Nasrabadi, A.M. Recognition of emotional states induced by music videos based on nonlinear feature extraction and SOM classification. In Proceedings of the 2014 21st Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 26–28 November 2014; IEEE: New York, NY, USA; pp. 333–337. [Google Scholar]
  53. Higuchi, T. Approach to an irregular time series on the basis of the fractal theory. Phys. D Nonlinear Phenom. 1988, 31, 277–283. [Google Scholar] [CrossRef]
  54. Yang, J.; Wu, Z.; Peng, K.; Okolo, P.N.; Zhang, W.; Zhao, H.; Sun, J. Parameter selection of Gaussian kernel SVM based on local density of training set. Inverse Probl. Sci. Eng. 2021, 29, 536–548. [Google Scholar] [CrossRef]
  55. Azuaje, F.; Witten, I.H.; Frank, E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed.; Morgan Kaufmann Publishers: San Francisco, CA, USA, 2005; p. 560. ISBN 0-12-088407-0. [Google Scholar]
  56. Hsu, C.W. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taipei, Taiwan, 2003. [Google Scholar]
  57. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of the COMPSTAT’2010: 19th International Conference on Computational Statistics, Paris, France, 22–27 August 2010; Physica-Verlag HD: Heidelberg, Germany, 2010; pp. 177–186. [Google Scholar]
  58. Hirano, K.; Nishimura, S.; Mitra, S. Design of digital notch filters. IEEE Trans. Commun. 1974, 22, 964–970. [Google Scholar] [CrossRef]
  59. Hussin, S.F.; Birasamy, G.; Hamid, Z. Design of Butterworth band-pass filter. Politeknik Kolej Komuniti J. Eng. Technol. 2016, 1, 32–46. [Google Scholar]
  60. Lemos, M.S.; Fisch, B.J. The weighted average reference montage. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 361–370. [Google Scholar] [CrossRef]
  61. Diamantidis, N.A.; Karlis, D.; Giakoumakis, E.A. Unsupervised stratification of cross-validation for accuracy estimation. Artif. Intell. 2000, 116, 1–6. [Google Scholar] [CrossRef]
  62. Brunner, C.; Leeb, R.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008–Graz data set A. Inst. Knowl. Discov. (Laboratory Brain-Comput. Interfaces) Graz Univ. Technol. 2008, 16, 34. [Google Scholar]
  63. Kim, H.; Luo, J.; Chu, S.; Cannard, C.; Hoffmann, S.; Miyakoshi, M. ICA’s bug: How ghost ICs emerge from effective rank deficiency caused by EEG electrode interpolation and incorrect re-referencing. Front. Signal Process. 2023, 3, 1064138. [Google Scholar] [CrossRef]
  64. Miladinović, A.; Ajčević, M.; Jarmolowska, J.; Marusic, U.; Silveri, G.; Battaglini, P.P.; Accardo, A. Performance of EEG motor-imagery based spatial filtering methods: A BCI study on stroke patients. Procedia Comput. Sci. 2020, 176, 2840–2848. [Google Scholar] [CrossRef]
  65. Zhang, Y.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain–computer interface. J. Neurosci. Methods 2015, 255, 85–91. [Google Scholar] [CrossRef]
  66. Izzuddin, T.A.; Safri, N.M.; Othman, M.A. Compact convolutional neural network (CNN) based on SincNet for end-to-end motor imagery decoding and analysis. Biocybern. Biomed. Eng. 2021, 41, 1629–1645. [Google Scholar] [CrossRef]
  67. She, Q.; Hu, B.; Luo, Z.; Nguyen, T.; Zhang, Y. A hierarchical semi-supervised extreme learning machine method for EEG recognition. Med. Biol. Eng. Comput. 2019, 57, 147–157. [Google Scholar] [CrossRef]
  68. Wang, J.; Yao, L.; Wang, Y. IFNet: An interactive frequency convolutional neural network for enhancing motor imagery decoding from EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1900–1911. [Google Scholar] [CrossRef]
  69. Barachant, A.; Bonnet, S.; Congedo, M.; Jutten, C. Multiclass brain-computer interface classification by Riemannian geometry. IEEE Trans. Biomed. Eng. 2012, 59, 920–928. [Google Scholar] [CrossRef] [PubMed]
  70. Luo, T.-J. Parallel genetic algorithm-based common spatial patterns selection on time-frequency decomposed EEG signals for motor imagery brain-computer interface. Biomed. Signal Process. Control 2023, 80, 104397. [Google Scholar] [CrossRef]
  71. Fang, H.; Jin, J.; Daly, I.; Wang, X.Y. Feature extraction method based on filter banks and Riemannian tangent space in motor-imagery BCI. IEEE J. Biomed. Health Inf. 2022, 26, 2504–2514. [Google Scholar] [CrossRef] [PubMed]
  72. Li, Z.; Tan, X.; Li, X.; Yin, L. Multiclass motor imagery classification with Riemannian geometry and temporal-spectral selection. Med. Biol. Eng. Comput. 2024, 62, 2961–2973. [Google Scholar] [CrossRef]
  73. Hersche, M.; Rellstab, T.; Schiavone, P.D.; Cavigelli, L.; Benini, L.; Rahimi, A. Fast and accurate multiclass inference for MI-BCIs using large multiscale temporal and spectral features. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; IEEE: New York, NY, USA; pp. 1690–1694. [Google Scholar]
Figure 1. Representation of the temporal progression related to executing motor imagery tasks.
Figure 2. Left: The electrode configuration based on the international 10–20 system. Right: The setup of the three monopolar EOG channels for electrooculographic recordings.
Figure 3. The FD-based pipeline implemented.
Figure 4. Comparison of established motor imagery methods with our proposed approach.
Figure 5. Classification accuracy of various techniques across different participants, together with the related mean performance.
Table 1. The list of fractal dimension combinations.

1st FD | 2nd FD | 3rd FD
Petrosian | Box-Counting | DFA
Petrosian | MFDFA | DFA
Petrosian | Correlation Dimension | MFDFA
Higuchi | Correlation Dimension | DFA
Katz | Petrosian | Higuchi
Katz | Petrosian | Box-Counting
Katz | Correlation Dimension | MFDFA
Katz | Petrosian | Correlation Dimension
Katz | Box-Counting | Correlation Dimension
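Given per-method feature blocks, the nine combinations in Table 1 reduce to concatenating three (n_trials × n_channels) arrays; a sketch with the feature extraction stubbed out by random arrays (the `build_features` helper is ours, purely illustrative):

```python
import numpy as np

# The nine triples evaluated in Table 1.
FD_COMBINATIONS = [
    ("Petrosian", "Box-Counting", "DFA"),
    ("Petrosian", "MFDFA", "DFA"),
    ("Petrosian", "Correlation Dimension", "MFDFA"),
    ("Higuchi", "Correlation Dimension", "DFA"),
    ("Katz", "Petrosian", "Higuchi"),
    ("Katz", "Petrosian", "Box-Counting"),
    ("Katz", "Correlation Dimension", "MFDFA"),
    ("Katz", "Petrosian", "Correlation Dimension"),
    ("Katz", "Box-Counting", "Correlation Dimension"),
]

def build_features(fd_values, combo):
    """Concatenate the per-channel feature blocks of the three chosen FD methods.

    fd_values: dict mapping method name -> (n_trials, n_channels) array.
    """
    return np.hstack([fd_values[name] for name in combo])

# Stub: random per-method features for 40 trials x 22 channels.
rng = np.random.default_rng(0)
methods = {m for combo in FD_COMBINATIONS for m in combo}
fd_values = {m: rng.standard_normal((40, 22)) for m in methods}

X = build_features(fd_values, FD_COMBINATIONS[-1])  # KFD vs. BCFD vs. CDFD
```

Each combination therefore yields a 66-dimensional feature vector per trial (3 methods × 22 channels), which is what the five classifiers consume.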
Table 2. The list of optimized parameters.

Parameter | Related to | Value
notch_filter_fs | Notch Filter | 250.0
notch_filter_powerline_freq | Notch Filter | 50
notch_filter_q | Notch Filter | 30.0
high_pass_filter_fs | High-Pass Filter | 250
high_pass_cutoff | High-Pass Filter | 0.5
car_montage_filter_axis | CAR Montage Filter | 1
car_montage_filter_keep_dims | CAR Montage Filter | True
higuchi_k_max_value | Higuchi Fractal Dimension | 10
box_counting_n_scales | Box-Counting Fractal Dimension | 10
box_counting_min_box_size | Box-Counting Fractal Dimension | 1
box_counting_max_box_size | Box-Counting Fractal Dimension | 874
correlation_dimension_emb_dim | Correlation Dimension Fractal Dimension | 5
correlation_delay | Correlation Dimension Fractal Dimension | 5
correlation_dimension_r_values | Correlation Dimension Fractal Dimension | np.logspace(-3, 0, 30)
correlation_dimension_epsilon | Correlation Dimension Fractal Dimension | 1 × 10^−12
mfdfa_q_values | MFDFA Fractal Dimension | [−5, −4, …, 4, 5]
mfdfa_min_scale | MFDFA Fractal Dimension | 4
mfdfa_max_scale | MFDFA Fractal Dimension | 437
mfdfa_scale_ratio | MFDFA Fractal Dimension | 2.0
dfa_min_scale | DFA Fractal Dimension | 4
dfa_max_scale | DFA Fractal Dimension | 437
dfa_scale_ratio | DFA Fractal Dimension | 2.0
filter_bank_type | Filters Bank Coefficients | butter
filter_bank_order | Filters Bank Coefficients | 2
filter_bank_max_freq | Filters Bank Coefficients | 40
time_windows_flt | Filters Bank Coefficients | [2.5, 3.5] … [2.5, 6]
bw | Bandwidth of Filtered Signals | [2, 4, 8, 16, 32]
no_csp | No. of CSP Features | 24
cart_max_depth | CART | 10
cart_random_state | CART | 1
cart_criterion | CART | Gini
cart_splitter | CART | best
cart_min_samples_split | CART | 2
cart_min_samples_leaf | CART | 1
linear_svc_c | LinearSVM | 0.1
linear_svc_intercept_scaling | LinearSVM | 1
linear_svc_loss | LinearSVM | hinge
linear_svc_max_iter | LinearSVM | 1000
linear_multi_class | LinearSVM | ovr
linear_svc_penalty | LinearSVM | l2
linear_svc_random_state | LinearSVM | 1
linear_svc_tol | LinearSVM | 0.00001
svc_w_poly_kernel_c | SVM Polynomial Kernel | 0.1
svc_w_poly_kernel_type | SVM Polynomial Kernel | poly
svc_w_poly_kernel_degree | SVM Polynomial Kernel | 10
svc_w_poly_kernel_gamma | SVM Polynomial Kernel | auto
svc_w_poly_kernel_coef0 | SVM Polynomial Kernel | 0.0
svc_w_poly_kernel_tol | SVM Polynomial Kernel | 0.001
svc_w_poly_kernel_cache_size | SVM Polynomial Kernel | 10,000
svc_w_poly_kernel_max_iter | SVM Polynomial Kernel | −1
svc_w_poly_kernel_decision_fx | SVM Polynomial Kernel | ovr
gsvm_c | GSVM | 20
gsvm_kernel_type | GSVM | rbf
gsvm_degree | GSVM | 10
gsvm_gamma | GSVM | auto
gsvm_coef0 | GSVM | 0.0
gsvm_tol | GSVM | 0.001
gsvm_cache_size | GSVM | 10,000
gsvm_max_iter | GSVM | −1
gsvm_decision_fx | GSVM | ovr
sgd_loss | SGD | hinge
sgd_penalty | SGD | l2
sgd_max_iter | SGD | 1000
sgd_tol | SGD | 0.001
sgd_alpha | SGD | 0.1
sgd_random_state | SGD | 1
fs | BCI IV 2a Set Sampling Frequency | 250.0
no_channels | No. of EEG Channels | 22
no_subjects | No. of Subjects | 9
no_classes | No. of Classes | 4
no_splits | No. of Folds in Cross-Validation | 5
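The preprocessing parameters in Table 2 map directly onto standard SciPy filters. The sketch below applies a 50 Hz notch (Q = 30), a 0.5 Hz high-pass Butterworth filter (its order is our assumption; Table 2 lists order 2 only for the filter bank), and a common average reference, assuming a (channels × samples) layout (the table's axis value suggests the authors' arrays may be oriented differently):

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 250.0  # BCI IV 2a sampling frequency (Table 2)

def preprocess(eeg):
    """Notch -> high-pass -> CAR, for eeg shaped (n_channels, n_samples)."""
    # Notch filter at the 50 Hz powerline frequency with Q = 30.
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=FS)
    eeg = filtfilt(b_notch, a_notch, eeg, axis=1)

    # High-pass Butterworth filter with a 0.5 Hz cutoff to remove slow drift.
    b_hp, a_hp = butter(N=2, Wn=0.5, btype="highpass", fs=FS)
    eeg = filtfilt(b_hp, a_hp, eeg, axis=1)

    # Common average reference: subtract the cross-channel mean at each sample.
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
clean = preprocess(rng.standard_normal((22, 1000)))
```

After CAR, the mean across the 22 channels is exactly zero at every sample, which is the defining property of the montage.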
Table 3. The results of all the fractal dimension combinations.

Fractal Dimensions | Classifier | Sub. 1 | Sub. 2 | Sub. 3 | Sub. 4 | Sub. 5 | Sub. 6 | Sub. 7 | Sub. 8 | Sub. 9 | Mean
PFD vs. BCFD vs. DFA | LinearSVM | 84.95 | 70 | 90 | 62.58 | 81.69 | 47.88 | 92.59 | 90.13 | 87.31 | 78.57
PFD vs. BCFD vs. DFA | GSVM | 84.61 | 69.25 | 90.74 | 63.33 | 79.78 | 48.35 | 91.11 | 89.37 | 86.87 | 78.16
PFD vs. BCFD vs. DFA | CART | 64.90 | 45.55 | 74.44 | 39.29 | 53.05 | 31.03 | 84.16 | 62.90 | 77.20 | 59.17
PFD vs. BCFD vs. DFA | SVM Poly | 83.14 | 62.59 | 84.81 | 56.40 | 79.76 | 51.07 | 86.70 | 89.37 | 84.76 | 75.40
PFD vs. BCFD vs. DFA | SGD | 82.09 | 63.70 | 84.07 | 53.73 | 59.55 | 36.51 | 78.20 | 87.08 | 78.44 | 69.26
PFD vs. MFDFA vs. DFA | LinearSVM | 84.95 | 70 | 90 | 62.58 | 82.07 | 47.88 | 92.59 | 90.13 | 87.31 | 78.61
PFD vs. MFDFA vs. DFA | GSVM | 84.61 | 69.25 | 90.74 | 63.33 | 79.78 | 48.35 | 91.11 | 89.37 | 86.87 | 78.16
PFD vs. MFDFA vs. DFA | CART | 64.90 | 45.55 | 74.44 | 39.29 | 52.67 | 31.03 | 84.16 | 62.52 | 77.20 | 59.08
PFD vs. MFDFA vs. DFA | SVM Poly | 83.14 | 62.59 | 84.81 | 56.40 | 79.76 | 51.07 | 86.33 | 89.37 | 84.76 | 75.36
PFD vs. MFDFA vs. DFA | SGD | 81.73 | 63.70 | 84.07 | 49.51 | 56.90 | 34.21 | 80.80 | 73.12 | 80.55 | 67.18
PFD vs. CDFD vs. MFDFA | LinearSVM | 84.95 | 69.62 | 90 | 62.96 | 82.46 | 47.88 | 92.59 | 90.13 | 87.31 | 78.66
PFD vs. CDFD vs. MFDFA | GSVM | 84.61 | 70 | 90.74 | 64.84 | 80.18 | 48.35 | 90.74 | 89.37 | 86.87 | 78.41
PFD vs. CDFD vs. MFDFA | CART | 65.62 | 42.22 | 74.07 | 35.10 | 54.97 | 31.03 | 84.16 | 59.48 | 78.45 | 58.34
PFD vs. CDFD vs. MFDFA | SVM Poly | 83.51 | 62.59 | 84.81 | 56.79 | 79.76 | 51.07 | 85.96 | 89.37 | 84.76 | 75.40
PFD vs. CDFD vs. MFDFA | SGD | 80.63 | 57.03 | 85.18 | 49.89 | 58.38 | 37.42 | 79.67 | 87.46 | 79.75 | 68.38
HFD vs. CDFD vs. DFA | LinearSVM | 84.95 | 69.62 | 90 | 63.73 | 82.08 | 47.88 | 92.59 | 90.13 | 87.31 | 78.70
HFD vs. CDFD vs. DFA | GSVM | 84.61 | 70 | 90.74 | 64.84 | 80.18 | 48.35 | 90.74 | 89.37 | 86.87 | 78.41
HFD vs. CDFD vs. DFA | CART | 64.89 | 42.22 | 73.70 | 35.10 | 54.97 | 31.03 | 84.16 | 59.48 | 78.45 | 58.22
HFD vs. CDFD vs. DFA | SVM Poly | 83.51 | 62.22 | 84.81 | 56.79 | 79.76 | 51.07 | 85.96 | 89.37 | 84.76 | 75.36
HFD vs. CDFD vs. DFA | SGD | 83.91 | 59.25 | 83.70 | 51.43 | 58.39 | 36.05 | 82.26 | 87.83 | 80.59 | 69.27
KFD vs. PFD vs. HFD | LinearSVM | 84.96 | 69.25 | 90.37 | 64.88 | 82.07 | 47.88 | 92.96 | 90.88 | 87.31 | 78.95
KFD vs. PFD vs. HFD | GSVM | 84.24 | 69.25 | 90.37 | 63.32 | 79.41 | 47.89 | 91.48 | 89.75 | 86.87 | 78.07
KFD vs. PFD vs. HFD | CART | 64.90 | 45.55 | 74.44 | 38.91 | 53.05 | 31.03 | 84.16 | 62.52 | 77.20 | 59.08
KFD vs. PFD vs. HFD | SVM Poly | 83.50 | 62.59 | 85.18 | 56.04 | 79.38 | 50.62 | 86.70 | 88.99 | 84.76 | 75.31
KFD vs. PFD vs. HFD | SGD | 78.43 | 58.88 | 87.40 | 46.93 | 62.24 | 35.13 | 76.34 | 83.33 | 83.08 | 67.97
KFD vs. PFD vs. BCFD | LinearSVM | 84.96 | 69.62 | 90.37 | 65.26 | 82.07 | 47.88 | 92.96 | 90.88 | 87.31 | 79.04
KFD vs. PFD vs. BCFD | GSVM | 84.24 | 69.62 | 90.37 | 63.32 | 79.41 | 47.89 | 91.48 | 89.75 | 86.87 | 78.11
KFD vs. PFD vs. BCFD | CART | 64.90 | 45.55 | 74.44 | 38.52 | 53.05 | 31.03 | 84.16 | 62.52 | 77.20 | 59.04
KFD vs. PFD vs. BCFD | SVM Poly | 83.50 | 62.59 | 85.18 | 56.04 | 79.38 | 50.62 | 86.70 | 88.99 | 84.76 | 75.31
KFD vs. PFD vs. BCFD | SGD | 79.17 | 60.74 | 85.92 | 45.02 | 63.76 | 32.41 | 83.00 | 81.05 | 82.23 | 68.14
KFD vs. CDFD vs. MFDFA | LinearSVM | 84.96 | 70 | 90 | 65.65 | 81.70 | 47.88 | 92.96 | 91.26 | 87.31 | 79.08
KFD vs. CDFD vs. MFDFA | GSVM | 84.24 | 69.62 | 90 | 64.47 | 79.80 | 47.89 | 91.48 | 89.75 | 86.87 | 78.24
KFD vs. CDFD vs. MFDFA | CART | 64.89 | 42.22 | 74.07 | 35.10 | 54.97 | 31.03 | 84.16 | 59.48 | 78.45 | 58.26
KFD vs. CDFD vs. MFDFA | SVM Poly | 83.51 | 62.96 | 85.18 | 56.03 | 79.76 | 50.62 | 86.33 | 89.37 | 84.76 | 75.39
KFD vs. CDFD vs. MFDFA | SGD | 80.97 | 64.07 | 85.18 | 43.82 | 58.84 | 32.86 | 76.75 | 84.84 | 83.51 | 67.87
KFD vs. PFD vs. CDFD | LinearSVM | 84.96 | 70 | 90.37 | 65.65 | 82.08 | 47.88 | 92.96 | 90.88 | 87.31 | 79.12
KFD vs. PFD vs. CDFD | GSVM | 84.24 | 69.62 | 90 | 64.47 | 79.80 | 47.89 | 91.48 | 89.75 | 86.87 | 78.24
KFD vs. PFD vs. CDFD | CART | 66.72 | 42.96 | 70.37 | 38.55 | 54.95 | 32.38 | 84.16 | 64.02 | 77.61 | 59.08
KFD vs. PFD vs. CDFD | SVM Poly | 83.51 | 62.96 | 85.18 | 56.03 | 79.76 | 50.62 | 86.33 | 89.37 | 84.76 | 75.39
KFD vs. PFD vs. CDFD | SGD | 80.98 | 58.51 | 83.70 | 48.84 | 57.33 | 32.86 | 78.20 | 82.20 | 80.53 | 67.02
KFD vs. BCFD vs. CDFD | LinearSVM | 84.96 | 70 | 90.37 | 65.65 | 82.08 | 47.88 | 92.96 | 91.26 | 87.31 | 79.16
KFD vs. BCFD vs. CDFD | GSVM | 84.24 | 69.62 | 90 | 64.47 | 79.80 | 47.89 | 91.48 | 89.75 | 86.87 | 78.24
KFD vs. BCFD vs. CDFD | CART | 66.72 | 42.96 | 70.37 | 38.17 | 54.95 | 33.78 | 84.16 | 64.39 | 77.61 | 59.23
KFD vs. BCFD vs. CDFD | SVM Poly | 83.51 | 62.96 | 85.18 | 56.03 | 79.76 | 50.62 | 86.33 | 89.37 | 84.76 | 75.39
KFD vs. BCFD vs. CDFD | SGD | 80.61 | 58.51 | 82.96 | 45.38 | 57.33 | 32.41 | 78.94 | 81.42 | 80.53 | 66.46
Table 4. The mean of all classifiers’ performance, depending on each FD combination.

Fractal Dimensions | LinearSVM | GSVM | CART | SVM “Poly” | SGD
PFD vs. BCFD vs. DFA | 78.57 | 78.16 | 59.17 | 75.40 | 69.26
PFD vs. MFDFA vs. DFA | 78.61 | 78.16 | 59.08 | 75.36 | 67.18
PFD vs. CDFD vs. MFDFA | 78.66 | 78.41 | 58.34 | 75.40 | 68.38
HFD vs. CDFD vs. DFA | 78.70 | 78.41 | 58.22 | 75.36 | 69.27
KFD vs. PFD vs. HFD | 78.95 | 78.07 | 59.08 | 75.31 | 67.97
KFD vs. PFD vs. BCFD | 79.04 | 78.11 | 59.04 | 75.31 | 68.14
KFD vs. CDFD vs. MFDFA | 79.08 | 78.24 | 58.26 | 75.39 | 67.87
KFD vs. PFD vs. CDFD | 79.12 | 78.24 | 59.08 | 75.39 | 67.02
KFD vs. BCFD vs. CDFD | 79.16 | 78.24 | 59.23 | 75.39 | 66.46
Mean | 78.88 | 78.23 | 58.83 | 75.37 | 67.95
Table 5. Comparative classification accuracy on Dataset 2a: our method vs. other approaches.

Method | Year | A01 | A02 | A03 | A04 | A05 | A06 | A07 | A08 | A09 | Mean
TSLDA | 2012 | 80.5 | 51.3 | 87.5 | 59.3 | 45.0 | 55.3 | 82.1 | 84.8 | 86.1 | 70.2
Multi-Scale CSP | 2018 | 86.8 | 57.2 | 86.5 | 61.4 | 61.2 | 50.7 | 92.4 | 87.8 | 79.1 | 73.7
Multi-Scale Riemannian | 2018 | 90.0 | 55.4 | 81.3 | 71.9 | 69.6 | 56.7 | 85.6 | 83.8 | 84.9 | 75.5
HSS-ELM | 2019 | 81.1 | 49.9 | 78.0 | 63.3 | 44.0 | 49.4 | 81.1 | 81.5 | 81.4 | 67.8
SincNet | 2021 | 75.2 | 39.5 | 79.4 | 49.1 | 62.7 | 39.3 | 64.4 | 74.9 | 64.1 | 63.1
FBRTS | 2022 | 86.1 | 65.2 | 90.0 | 63.8 | 75.6 | 52.4 | 91.1 | 89.0 | 86.5 | 77.7
TSFBCSP-GA | 2023 | 86.5 | 59.0 | 89.2 | 69.4 | 63.2 | 54.5 | 87.2 | 80.2 | 81.6 | 74.5
IFNet | 2023 | 88.5 | 56.4 | 91.8 | 73.8 | 69.7 | 60.4 | 89.2 | 85.4 | 88.7 | 78.2
TWSB | 2024 | 89.3 | 66.9 | 89.3 | 69.3 | 74.1 | 60.1 | 89.4 | 88.0 | 85.6 | 79.1
KFD vs. PFD vs. HFD (Updated) | 2024 | 85.0 | 69.3 | 90.4 | 64.9 | 82.1 | 47.9 | 93.0 | 90.9 | 87.3 | 79.0
KFD vs. BCFD vs. CDFD | | 85.0 | 70.0 | 90.4 | 65.7 | 82.1 | 47.9 | 93.0 | 91.3 | 87.3 | 79.2

