Article

Multimodal Mutual Information Extraction and Source Detection with Application in Focal Seizure Localization

1 Department of Electrical, Computer and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
2 Department of Electrical Engineering, University of Isfahan, Isfahan 81746-73461, Iran
3 Department of Electrical Engineering, University of Guilan, Rasht 41996-13776, Iran
* Author to whom correspondence should be addressed.
Electronics 2025, 14(24), 4897; https://doi.org/10.3390/electronics14244897
Submission received: 28 October 2025 / Revised: 6 December 2025 / Accepted: 9 December 2025 / Published: 12 December 2025

Abstract

Current multimodal imaging–based source localization (SoL) methods often rely on synchronously recorded data, and many neural network–driven approaches require large training datasets, conditions rarely met in clinical neuroimaging. To address these limitations, we introduce MieSoL (Multimodal Mutual Information Extraction and Source Localization), a unified framework that fuses EEG and MRI, whether acquired synchronously or asynchronously, to achieve robust cross-modal information extraction and high-accuracy SoL. Targeting neuroimaging applications, MieSoL combines Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG), leveraging their complementary strengths—MRI’s high spatial resolution and EEG’s superior temporal resolution. MieSoL addresses key limitations of existing SoL methods, including poor localization accuracy and an unreliable estimation of the true source number. The framework combines two existing components—Unified Left Eigenvectors (ULeV) and Efficient High-Resolution sLORETA (EHR-sLORETA)—but integrates them in a novel way: ULeV is adapted to extract a noise-resistant shared latent representation across modalities, enabling cross-modal denoising and an improved estimation of the true source number (TSN), while EHR-sLORETA subsequently performs anatomically constrained high-resolution inverse mapping on the purified subspace. While EHR-sLORETA already demonstrates superior localization precision relative to sLORETA, replacing conventional PCA/ICA preprocessing with ULeV provides substantial advantages, particularly when data are scarce or asynchronously recorded. Unlike PCA/ICA approaches, which perform denoising and source selection separately and are limited in capturing shared information, ULeV jointly processes EEG and MRI to perform denoising, dimension reduction, and mutual-information-based feature extraction in a unified step. 
This coupling directly addresses longstanding challenges in multimodal SoL, including inconsistent noise levels, temporal misalignment, and the inefficiency of traditional PCA-based preprocessing. Consequently, on synthetic datasets, MieSoL achieves a 40% improvement in Average Correlation Coefficient (ACC) and a 56% reduction in Average Error Estimation (AEE) compared with conventional techniques. Clinical validation involving 26 epilepsy patients further demonstrates the method’s robustness, with automated results aligning closely with expert epileptologist assessments. Overall, MieSoL offers a principled and interpretable multimodal fusion paradigm that enhances the fidelity of EEG source localization, holding significant promise for both clinical and cognitive neuroscience applications.

1. Introduction

Neuroimaging is vital in advancing our understanding and treatment of brain disorders. Various imaging techniques each contribute unique insights into brain function. High-spatial-resolution imaging methods, such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT) scans, provide detailed anatomical views of the brain. These techniques are invaluable in identifying structural abnormalities or pathologies [1,2]. However, their utility is constrained by a low temporal resolution, making them less effective in capturing rapid neural dynamics. In contrast, high-temporal-resolution neuroimaging methods, such as Electroencephalography (EEG), excel in monitoring real-time brain activity in a non-invasive and cost-efficient manner [3]. However, these methods have a significant limitation in their low spatial resolution, which restricts their ability to accurately pinpoint the exact locations of neural activity within the brain. Many neurological conditions, such as schizophrenia [4], Alzheimer’s disease [5], and epilepsy [6,7], necessitate an imaging approach that encompasses both high temporal and high spatial resolution for accurate diagnosis and treatment. In these conditions, it is essential to first identify and distinguish the brain activities related to the neurological condition, such as an epileptic episode, from normal activities using high-temporal-resolution data [8,9]. Subsequently, determining the spatial location of these activities within the brain is crucial. In epilepsy, for example, an epileptologist must be able to differentiate characteristic seizure waves from normal brain activity. Interpreting EEG data visually to localize epileptic sources is time-consuming and demands substantial expertise. To enhance the quality of EEG data, several preprocessing methods have been proposed [10,11,12,13,14,15,16,17,18,19,20,21].
For spatial resolution, EEG’s low spatial resolution is addressed through EEG inverse problem solutions. These methods calculate the source location, orientation, and magnitude within the brain using electric potential measurements from the scalp and information about head geometry. This forward modeling process considers the conductivity properties of different brain layers and the head’s shape. The skull and other layers between the brain cortex and sensors reduce spatial resolution and complicate accurate source detection in noisy environments. The EEG measurements are spatially smoothed representations of neural activities due to the volume conduction properties of the skull [22]. Advanced data processing approaches aim to provide potential seizure zones, but these typically only identify larger areas such as the hemispheres or lobes, which lack precision [22,23,24,25,26]. Existing zoning algorithms struggle to precisely locate seizure sources, which is the focus of this paper. These methods often rely on either EEG or MRI data, limiting their ability to precisely locate seizure zones. Alternatively, invasive EEG approaches try to compensate for these disadvantages [27,28].
Existing multimodal fusion strategies fall broadly into early (input-level), intermediate (feature- or latent-level), and late (decision-level) approaches; however, all exhibit critical limitations when applied to epileptic source localization [29,30,31]. Early fusion methods, which concatenate raw or low-level features from EEG and MRI, assume matched noise characteristics and temporal compatibility between modalities and are therefore brittle in real clinical settings where EEG and MRI differ substantially in SNR and acquisition conditions. Intermediate fusion techniques—including recent EEG–MRI feature-fusion models—have shown promise for classification tasks such as epilepsy detection [32], but such methods do not guarantee anatomically precise or physiologically meaningful source localization. More clinically oriented multimodal pipelines that combine scalp or high-density EEG, intracranial EEG, structural MRI, PET/SPECT, or other modalities remain heavily dependent on expert manual integration; their diagnostic performance varies considerably across patients and lacks a standardized analytic fusion framework [33]. Even in concurrent EEG–fMRI studies, although functional network alterations can be captured, recent work shows that these approaches still fall short of delivering an automated, anatomically specific localization of the epileptogenic zone [34]. Consequently, despite the increased use of multimodal neuroimaging in epilepsy evaluation, a robust, noise-tolerant, and physiologically coherent multimodal fusion method tailored specifically for epileptogenic-source localization remains lacking. This gap motivates the proposed framework.
Current imaging methodologies, which focus on a single modality [35,36,37,38,39,40,41,42], fail to provide a comprehensive understanding of conditions requiring both spatial and temporal resolution. While machine learning approaches using neural networks (NNs) have been proposed for seizure patient classification and are successful in seizure pattern prediction, training an NN for this type of source localization is impractical due to the amount of data required for convergence and precision [43,44,45,46,47,48].
Multimodal imaging and multimodal biomedical data processing have attracted great attention for more accurate diagnosis and decision making [49,50,51,52,53,54,55]. Many of these approaches either require a large amount of data and machine learning algorithms with huge computational complexity, or are sensitive to the synchronicity of the multimodal data, i.e., they require the simultaneous recording of these multimodal data. However, there are practical scenarios in which the multimodal data, such as EEG and MRI, cannot be recorded simultaneously. In this case, the only option for source localization is to use standardized low-resolution brain electromagnetic tomography (sLORETA). It has been shown that Efficient High-Resolution sLORETA (EHR-sLORETA) enhances spatial precision while preserving the mathematical guarantees and interpretability of sLORETA by efficiently reducing the noise effect in the MRI data [56]. On the other hand, the precision is also sensitive to the noise in the available EEG. Conventionally, PCA and ICA can be used for EEG denoising and source separation. Several multimodal fusion methods for brain imaging utilize combinations of PCA and ICA [57]. These approaches usually impose strong statistical assumptions that may not align with the biological reality of the data and lose precision when limited to a small number of subjects. Still, combining PCA and ICA in a traditional manner or in more recently developed multimodal forms is possible [49,58]. Note that source separation methods require an estimate of the number of sources. We propose to use the Mean Square Eigenvalue Error (MSEE), which has been shown to outperform competing methods, most of which are thresholding approaches [59]. In all these approaches, PCA and ICA are utilized separately and consecutively in the procedure.
The goal of the proposed method is to extract the mutual (common) information of the modalities during the denoising and source selection procedure. The notion of a mutual information extraction strategy has been proposed for this form of joint or common feature extraction [60]. Multimodal feature extraction here is not aimed at correlation detection but at combining the modalities to extract the common information in their structure for source separation, dimension reduction, and denoising. This work proposes a localization approach denoted Multimodal Information Extraction and Source Localization (MieSoL), which utilizes high-temporal data and high-spatial measures simultaneously. The method incorporates the optimal source-number selection method MSEE and the optimal source localization method EHR-sLORETA. However, conventional PCA/ICA denoising approaches are usually not effective enough, and the remaining EEG noise can propagate into the source localization step. Therefore, the proposed method combines EEG and MRI to simultaneously denoise and extract the best basis. The method then employs source localization to accurately estimate the location and orientation of the foci sources, which are then mapped onto the spatial data. The multimodal denoising step combines the basis extraction method, Unified Left Eigenvectors (ULeV), with the MSEE denoising approach [59,61]. This mutual information extraction, as opposed to conventional PCA/ICA approaches, enables MieSoL to incorporate EHR-sLORETA to estimate the precise location and orientation of sources from the denoised EEG dataset, and the corresponding locations are mapped onto the MRI data [56].
Together, these elements constitute a unified and analytic multimodal fusion paradigm that (i) does not rely on large training datasets or deep models, (ii) is inherently robust to asynchronous and noise-mismatched modalities, and (iii) directly enhances the fidelity of EEG source localization by exploiting cross-modal evidence rather than treating modalities independently.
MieSoL’s automatic hyperparameter selection is founded on an information-theoretic approach to data-driven modeling inspired by developments in statistical learning theory [62]. This approach has shown impressive results in various applications such as LTI system modeling and time delay estimation [63], and hyperparameter selection in clustering, ARMA modeling, and subspace identification [64,65,66]. MieSoL is compared with existing source localization approaches on both synthetic and real datasets for the case of epilepsy, using EEG as the temporal data and MRI as the spatial data. The results on synthetic EEG and MRI data show great advantages of the proposed method in localizing sources in terms of precision and robustness to the difference between the EEG SNR and the MRI SNR, which is a practical issue in many applications. The proposed automated method’s results are fully consistent with the epileptologist-detected zones in 26 epileptic patients, highlighting its potential to assist in pre-surgical planning and in reducing surgical risk. It is important to note that the available EEG and MRI data were not recorded simultaneously, yet the proposed method performs efficiently.

2. Multimodal Information Extraction and Source Localization (MieSoL)

The work considers utilizing two types of data, denoted as Modal 1 for high-temporal-resolution data and Modal 2 for high-spatial-resolution data, for the purpose of source localization. Examples of Modal 1 include EEG and EMG data, while examples of Modal 2 include structural MRI (sMRI) and CT scan data. High-temporal modalities, such as EEG, provide detailed information about the dynamic characteristics of neural activity, whereas high spatial modalities (e.g., sMRI) assist in precisely locating the source.
However, existing source localization methods often process each modality separately or fuse them at a later stage, which can lead to the loss of valuable cross-modal information and the introduction of noise artifacts. This limits the accuracy and robustness of source localization. Furthermore, the available temporal data is inherently noisy with an unknown signal-to-noise ratio (SNR), which poses additional challenges for accurate source estimation.
To address these issues, the proposed method consists of two main components. The first part is Multimodal Information Extraction (Mie), which integrates Unified Left Eigenvectors (ULeV) [61] and Mean Square Eigenvalue Error (MSEE) [59] to determine the optimal number of bases. This step effectively separates common foci source bases from noise and artifacts in the datasets. MSEE, an efficient SVD-based denoising approach rooted in information-theoretic principles [62], plays a critical role in this process. The novelty of MieSoL lies in its ability to utilize both available temporal and spatial data (EEG and MRI) to extract an optimal number of bases that best represent the underlying noiseless temporal data.
Figure 1 illustrates our proposed framework. Unlike existing methods, this approach denoises the two modalities simultaneously, enabling the extraction of the most informative and relevant features across domains. This integration enhances robustness to noise and improves the quality of the source signals.
The second component is Efficient Source Localization (ES), which applies our recently proposed Efficient High-Resolution sLORETA (EHR-sLORETA) [56] algorithm. This step uses the denoised data to accurately estimate the location of focal sources and map them onto the high-spatial-resolution data, achieving high precision in source localization while preserving the temporal dynamics.

2.1. Multimodal Information Extraction (Mie)

MieSoL involves two key parts: Basis Extraction and Number of Bases Selection. In the Basis Extraction step, Unified Left Eigenvectors (ULeV) processes the high-temporal datasets (such as EEG and MEG) to extract common bases that are significant for event-based source identification (such as epilepsy). Next, the Number of Bases Selection step, utilizing Mean Squared Eigenvalue Error (MSEE), estimates the optimal number of common bases. This estimation is crucial for the accurate representation of the underlying noiseless EEG data. Both steps are described below.

2.1.1. Basis Extraction: Unified Left Eigenvectors (ULeV)

ULeV is a novel basis extraction method that extracts the common bases among multimodal datasets [61]. The approach draws on the theories of PCA and ICA to provide a new method of best-basis selection; however, it combines the two in an entirely new way, simultaneously denoising and finding the basis instead of applying the two approaches in cascade. In a multimodal setting, ULeV allows the modalities to have their own right eigenvectors, which preserves their stochastic properties, while the left eigenvectors are optimized over all modalities to extract the common basis between them. Here, ULeV uses the high-temporal data (Modal 1) along with the high-spatial data (Modal 2) to extract the common left bases related to the event-based sources. Letting each dataset have its own right eigenvectors while finding the common left eigenvectors has a twofold benefit: the right eigenvectors in each dataset optimize the best basis representation of that dataset while preserving its statistical properties, whereas the common left eigenvectors combine the datasets, extracting much of their information while eliminating redundancy.
The procedure is detailed as follows. Scalar values are denoted by lowercase non-bold letters, vectors by lowercase bold letters, and matrices by uppercase bold letters. Assume that there are $J$ datasets ($1 \le j \le J$) across Modal 1 and Modal 2, each with different noise characteristics. The first $j_b$ datasets, indexed by $j_1$ ($1 \le j_1 \le j_b$), are high-temporal datasets (Modal 1) and the rest, indexed by $j_2$ ($j_b < j_2 \le J$), are high-spatial datasets (Modal 2). Each dataset is represented by a matrix $\mathbf{Y}_j = [\mathbf{y}_{j1}, \mathbf{y}_{j2}, \dots, \mathbf{y}_{jE}] \in \mathbb{R}^{E \times N}$, where $E$ is the number of observations and $N$ is the length of each observation. The $i$th observation ($1 \le i \le E$) in the $j$th dataset can be represented as follows:
$$\mathbf{y}_{ji} = \bar{\mathbf{y}}_{ji} + \boldsymbol{\omega}_{ji} \tag{1}$$
where $\boldsymbol{\omega}_{ji}$ is the additive noise and the noise-free data $\bar{\mathbf{y}}_{ji}$ is linearly represented using $M$ bases with a coefficient vector $\bar{\mathbf{w}}_{ji}$:
$$\bar{\mathbf{y}}_{ji} = \bar{\mathbf{w}}_{ji} \mathbf{B} \tag{2}$$
where $\mathbf{B} \in \mathbb{R}^{M \times N}$ is the unknown basis matrix and $\bar{\mathbf{w}}_{ji} \in \mathbb{R}^{1 \times M}$ denotes the coefficient vector. ULeV focuses on extracting a set of common bases, denoted $\hat{\mathbf{B}}$, from the unknown basis matrix by estimating the subspaces spanned by the datasets. In contrast with conventional ICA, which typically assumes identical additive-noise characteristics across datasets and thus struggles with multimodal data, ULeV distinguishes datasets based on their unique noise characteristics. This distinction allows ULeV to excel in extracting bases with significant influence across all datasets, as detailed in [61]. The left eigenvectors, $\hat{\mathbf{v}}$, are computed over all datasets, with the number of eigenvectors, $m$, determined afterwards.
The procedure starts by initiating a set of right eigenvectors and optimizing for common left eigenvectors. The method iterates by changing the values of the initial right eigenvector and continuing this iterative algorithm to reach the optimal common left eigenvectors as well as the resulting adaptive right eigenvectors. To find the first common left eigenvector, $\mathbf{v}_1$, the following optimization is solved:
$$\max_{\mathbf{v}_1, \mathbf{w}_j} \sum_{j=1}^{J} \left( \mathbf{v}_1^T \mathbf{Y}_j \mathbf{w}_j \right)^p \quad \text{s.t.} \quad \mathbf{v}_1^T \mathbf{v}_1 = 1, \quad \mathbf{w}_j^T \mathbf{w}_j = 1, \; 1 \le j \le J \tag{3}$$
The power $p$, chosen to be a large number (e.g., as large as 15 in [61]), aids in discarding datasets whose spanned vectors weakly correlate with $\mathbf{v}$. Starting with a random initial unmixing vector $\mathbf{w}_j^{(0)}$, functioning as the adaptive right eigenvector, (3) is solved for the initial value of the left eigenvector $\mathbf{v}_1^{(0)}$. In each $k$th iteration, the estimated left eigenvector is used to update the individual right eigenvectors:
$$\mathbf{w}_j^{(k+1)} = \frac{\mathbf{v}^{(k)T} \mathbf{Y}_j}{\left\| \mathbf{v}^{(k)T} \mathbf{Y}_j \right\|_2^2} \tag{4}$$
The unmixing vectors and the left eigenvector are iteratively updated until two consecutive iterates lie within an $\epsilon$-distance of each other (e.g., $\epsilon = 0.001$) [61]. Once the desired $\mathbf{v}_1$ is achieved, the part of the data generated by this left eigenvector is subtracted from the data, and (3) is repeated on the residuals to obtain $\mathbf{v}_2$. The procedure continues, yielding the optimal common left eigenvectors $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_m] \in \mathbb{R}^{m \times N}$. Finally, a conventional ICA is applied to the matrix $\mathbf{V}$ to separate $m$ bases, each a $1 \times N$ vector, $\hat{\mathbf{B}}_m = [\hat{\mathbf{b}}_1, \hat{\mathbf{b}}_2, \dots, \hat{\mathbf{b}}_m] \in \mathbb{R}^{m \times N}$, with the corresponding unmixing matrix $\mathbf{W}_{jm}$:
$$\hat{\mathbf{B}}_m = \mathbf{W}_{jm}^T \mathbf{Y}_j \tag{5}$$
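The deflation procedure above can be sketched numerically. The following is a simplified illustration, not the reference implementation of [61]: it assumes the common left eigenvectors live in the observation space (length $E$), uses a plain gradient-ascent update for the power-$p$ objective in (3), and omits the final ICA unmixing step.

```python
import numpy as np

def ulev_common_left_vectors(datasets, m, p=15, eps=1e-3, max_iter=200, seed=0):
    """Simplified ULeV-style deflation: extract m common left eigenvectors.

    datasets: list of (E, N) arrays sharing the same number of rows E.
    Each common vector v_c is a unit vector in the observation space;
    the temporal profile it extracts from dataset j is v_c @ Y_j.
    """
    rng = np.random.default_rng(seed)
    residuals = [np.array(Y, dtype=float) for Y in datasets]
    E = residuals[0].shape[0]
    V = np.zeros((m, E))
    for c in range(m):
        v = rng.standard_normal(E)
        v /= np.linalg.norm(v)
        for _ in range(max_iter):
            # One adaptive right eigenvector per dataset: the normalized
            # projection of the current common vector onto that dataset.
            ws = []
            for Y in residuals:
                w = Y.T @ v
                nrm = np.linalg.norm(w)
                ws.append(w / nrm if nrm > 0 else w)
            # Gradient-ascent update of the common left eigenvector for
            # the power-p objective of (3); the large odd power strongly
            # down-weights datasets weakly correlated with v.
            v_new = np.zeros(E)
            for Y, w in zip(residuals, ws):
                a = Y @ w
                v_new += p * (v @ a) ** (p - 1) * a
            nrm = np.linalg.norm(v_new)
            if nrm == 0:
                break
            v_new /= nrm
            done = min(np.linalg.norm(v_new - v),
                       np.linalg.norm(v_new + v)) < eps
            v = v_new
            if done:
                break
        V[c] = v
        # Deflation: remove the component each dataset shares with v.
        residuals = [Y - np.outer(v, v @ Y) for Y in residuals]
    return V
```

With, say, three synthetic datasets sharing a single sinusoidal basis, the temporal profile of the first extracted vector correlates strongly with that basis.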

2.1.2. Number of Basis Selection: Mean Squared Eigenvalue Error (MSEE)

The Mean Squared Eigenvalue Error (MSEE) method is utilized to determine the optimal number of bases, m, for the ULeV algorithm, specifically for analyzing high-temporal data (Modal 1), such as epileptic EEG. The effectiveness of MSEE and its superiority over conventional basis selection methods are highlighted in [59,61]. In this context, each observation in the high-temporal data (Modal 1) is a combination of m bases superimposed with White Gaussian Noise (WGN) of zero mean and unknown variance, as outlined in (2). MSEE’s objective is to minimize the mean square error between the eigenvalues of an ideal, noiseless dataset (which is typically not directly accessible) and the eigenvalues derived from the available noisy dataset. This process involves computing the singular value decomposition (SVD) of the high-temporal dataset:
$$\mathbf{Y}_j = \mathbf{U} \boldsymbol{\Lambda}_{Y_j} \mathbf{U}^T, \quad j = 1, \dots, j_b \tag{6}$$
Here, $\mathbf{U}$ represents the eigenvector matrix, and $\boldsymbol{\Lambda}_{Y_j}$ is the diagonal matrix of eigenvalues. The mean squared error between the eigenvalues of the hypothetical noise-free dataset and the estimated eigenvalues from the actual noisy data is quantified as follows:
$$z_m = \left\| \boldsymbol{\Lambda}_{\bar{Y}_j} - \hat{\boldsymbol{\Lambda}}_{Y_j m} \right\|_2^2, \quad j = 1, \dots, j_b \tag{7}$$
Here, $\|\cdot\|_2^2$ denotes the squared 2-norm, $z_m$ is the mean squared error, $\boldsymbol{\Lambda}_{\bar{Y}_j}$ indicates the sorted eigenvalues of the theoretical noise-free dataset, and $\hat{\boldsymbol{\Lambda}}_{Y_j m}$ refers to the first $m$ sorted estimated eigenvalues from the noisy dataset. MSEE effectively identifies the number of bases that correspond to the hidden, underlying bases within the noisy dataset. A key strength of MSEE is its capability to automatically estimate the noise variance by utilizing the estimated eigenvalues and their statistics. It adopts a probabilistic approach to estimate the cost function $\hat{z}_m$ for candidate values of $m$ (ranging from 1 to $M$) and selects the value that minimizes this cost:
$$m^* = \arg\min_{1 \le m \le M} \hat{z}_m \tag{8}$$
The optimal number of bases, $m^*$, ascertained through this process is then fed into the ULeV algorithm to extract a representative set of bases for the high-temporal data. The final extracted bases are denoted by $\hat{\mathbf{B}}_{m^*}$:
$$\hat{\mathbf{B}}_{m^*} = \left[ \hat{\mathbf{b}}_1, \hat{\mathbf{b}}_2, \dots, \hat{\mathbf{b}}_{m^*} \right] \tag{9}$$
This process ensures that the basis selection for ULeV is optimal and specifically tailored to the characteristics of the event-based high-temporal data, such as epileptic EEG, enhancing the accuracy and reliability of the analysis. The importance of $z_m$ in (7) as a new form of loss function has recently been introduced in deep-learning-based medical image denoising [67].
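The exact probabilistic construction of $\hat{z}_m$ is given in [59]. As an illustrative stand-in, the sketch below selects the model order with the classical Wax–Kailath MDL criterion, which likewise operates on the sorted eigenvalue spectrum and balances signal eigenvalues against an estimated flat noise floor; it is not the MSEE estimator itself.

```python
import numpy as np

def estimate_num_bases(Y):
    """Eigenvalue-based model-order selection for an (E, N) data matrix.

    Stand-in for MSEE: the Wax-Kailath MDL criterion, trading off the
    flatness of the trailing (noise) eigenvalues against a complexity
    penalty. Returns the estimated number of underlying bases.
    """
    E, N = Y.shape
    # Sorted eigenvalues of the sample covariance, descending.
    lam = np.linalg.eigvalsh(Y @ Y.T / N)[::-1]
    lam = np.clip(lam, 1e-12, None)
    costs = []
    for k in range(E):                      # candidate orders 0 .. E-1
        tail = lam[k:]
        g = np.exp(np.mean(np.log(tail)))   # geometric mean of the tail
        a = np.mean(tail)                   # arithmetic mean of the tail
        loglik = N * (E - k) * np.log(a / g)
        penalty = 0.5 * k * (2 * E - k) * np.log(N)
        costs.append(loglik + penalty)
    return int(np.argmin(costs))
```

For data generated from two latent sources plus white noise, the criterion recovers the order two, because only then are the trailing eigenvalues statistically flat.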

2.2. Efficient Source Localization (ES)

Efficient Source Localization (ES) is a methodological approach encompassing both Forward Modeling and EHR-sLORETA techniques to accurately identify the sources of neural activity, particularly for events like epilepsy, using high-temporal and high-spatial data modalities.

2.2.1. Forward Model

The Forward Model in ES utilizes the optimized denoised common basis (9) from high-temporal data (Modal 1), such as event-based data, and high-spatial data (Modal 2) to generate a dataset tuned to the specific neural event of interest. The estimate of the desired noise-free data is
$$\hat{\bar{\mathbf{Y}}}_j = \mathbf{W}_{jm^*} \hat{\mathbf{B}}_{m^*}, \quad j = 1, \dots, j_b \tag{10}$$
Here, $\hat{\bar{\mathbf{Y}}}_j \in \mathbb{R}^{E \times N}$ represents the event-tuned high-temporal data, which now contains clearer information about the source of the neural activity compared to the original dataset. This refined dataset is crucial for the subsequent source localization process.

2.2.2. Source Localization: Efficient High-Resolution sLORETA (EHR-sLORETA)

EHR-sLORETA is an advanced source localization method designed to pinpoint the location and magnitude of neural sources within the brain, particularly effective in the context of additive noise [56]. Its capabilities surpass those of traditional source localization methods in terms of accuracy and robustness, particularly regarding spatial dispersion and mean square error [56]. In applying EHR-sLORETA, a detailed mesh-grid model of the brain is constructed from the high-spatial data (Modal 2), typically derived from techniques like MRI. This model, composed of a large number of nodes, represents the neuron activity within the brain over time. Each node in this model correlates to a specific point of neural activity:
$$\hat{\bar{\mathbf{y}}}_{jn} = \mathbf{L} \mathbf{q}_n + \boldsymbol{\epsilon}_{jn}, \quad n = 1, \dots, N, \; j = 1, \dots, j_b \tag{11}$$
In this equation, $\hat{\bar{\mathbf{y}}}_{jn}$ denotes the vector of denoised observations at each sample time, $\mathbf{q}_n \in \mathbb{R}^T$ represents the vector of neural activity at time $n$, and $\mathbf{L} \in \mathbb{R}^{E \times T}$ is the lead-field matrix that describes how neural sources inside the brain project onto the external sensors (mapping the source space to the measurement space, encoded by the Forward Model explained in the previous section through the unmixing matrix $\mathbf{W}$). Lastly, $\boldsymbol{\epsilon}_{jn}$ is the additive WGN. It is important to note that the noise level in this denoised dataset is significantly reduced compared to the original dataset. EHR-sLORETA estimates the neural activity $\mathbf{q}_n$ from the denoised observations $\hat{\bar{\mathbf{y}}}_{jn}$, assuming a linear relationship via a projection matrix $\mathbf{H}$:
$$\hat{\mathbf{q}}_n = \mathbf{H} \hat{\bar{\mathbf{y}}}_{jn}, \quad n = 1, \dots, N, \; j = 1, \dots, j_b \tag{12}$$
Following this, EHR-sLORETA standardizes the estimated neural activities $\hat{\mathbf{q}}_n$ with respect to a normalization matrix $\mathbf{D}_n^{tt}$, determined from the maximum and minimum magnitudes of the neural activity at each time point. This standardized activity level serves as an indicator of each neuron’s involvement in the event generation, allowing for the differentiation between neurons actively engaged in the event (e.g., epileptic sources) and those that are not:
$$\hat{\mathbf{q}}^{\,t}_{\mathrm{sLOR},n} = \hat{\mathbf{q}}^{\,t}_n \, \mathbf{D}^{tt}_n, \quad n = 1, \dots, N \tag{13}$$
EHR-sLORETA then employs an automated thresholding model to select neurons with high activity levels, minimizing the mean squared error between the actual neural activity and the estimated activity. The optimal number of neural sources is determined, and their locations are marked on the high-spatial data (Modal 2), providing a precise map of the event sources within the brain.
$$z_t = \left\| \mathbf{q}_n - \hat{\mathbf{q}}^{\,t}_{\mathrm{sLOR},n} \right\|_2^2 \tag{14}$$
$$t^* = \arg\min_t z_t \tag{15}$$
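A minimal numerical sketch of the standardized inverse step follows, assuming a regularized minimum-norm projection matrix $\mathbf{H}$ and the classical sLORETA normalization by the diagonal of the resolution matrix; the efficiency and automated-thresholding refinements that distinguish EHR-sLORETA [56] are omitted.

```python
import numpy as np

def sloreta_standardized(L, y, alpha=1e-2):
    """Standardized minimum-norm inverse at one time sample.

    L: (E, T) lead-field matrix, y: (E,) denoised sensor vector.
    Classical sLORETA pipeline: a regularized minimum-norm projection
    matrix H, then normalization of each source estimate by the
    corresponding diagonal entry of the resolution matrix H @ L.
    """
    E = L.shape[0]
    gram = L @ L.T
    G = gram + alpha * np.trace(gram) / E * np.eye(E)
    H = L.T @ np.linalg.inv(G)          # projection matrix
    q = H @ y                           # minimum-norm source estimate
    R = H @ L                           # resolution matrix
    d = np.sqrt(np.clip(np.diag(R), 1e-12, None))
    return q / d                        # standardized activity
```

For a single active source and negligible noise, the standardized map peaks exactly at the true source index — the well-known zero-localization-error property of sLORETA for one source.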
In summary, the ES approach, encompassing both Forward Modeling and EHR-sLORETA, provides a comprehensive and efficient framework for localizing the sources of neural events, utilizing the strengths of both high-temporal and high-spatial data modalities.

3. Simulation and Results

This section presents the evaluation of the Multimodal Information Extraction and Source Localization (MieSoL) method. The performance is assessed through synthetic datasets simulating both high-temporal and high-spatial resolution data and real-world patient data.

3.1. Multimodal Information Extraction Testing

In this subsection, we focus on assessing the denoising efficiency of the MieSoL method. Synthetic datasets representing high-temporal (similar to EEG data) and high-spatial (akin to MRI data) modalities were generated, each incorporating additive White Gaussian Noise at varying SNR levels. Three datasets ($J = 3$) were generated as follows:
$$\mathbf{Y}_j = \mathbf{W}_j \mathbf{B}_j + \boldsymbol{\Omega}_j \tag{16}$$
where $\mathbf{Y}_j$ is the generated dataset, $\mathbf{B}_j$ is the basis matrix, $\mathbf{W}_j$ is the mixing matrix, generated randomly from an independent identically distributed (IID) zero-mean uniform distribution, and $\boldsymbol{\Omega}_j \in \mathbb{R}^{E \times N}$ is the additive White Gaussian Noise. The three datasets were designed with different SNRs (5 dB, 10 dB, and 15 dB). Each dataset has three observations ($E = 3$) with a sample length of 5000 ($N = 5000$), as shown in Figure 2b. The mixing matrices are generated randomly, and each basis matrix contains four bases ($M = 4$).
In total, we have ten bases that are defined as follows:
$$\begin{aligned}
\mathbf{b}_1 &= 2\cos(4t) + \sin(3t) \\
\mathbf{b}_2 &= \sin(3.1t + 120) + 0.25\cos(1.9t - 23) \\
\mathbf{b}_3 &= 1.5\cos(4.1t)\sin(2.3t) \\
\mathbf{b}_4 &= 5\cos(3.2t)\sin(5.1t) \\
\mathbf{b}_5 &= 3\sin(2.1t) + 2.14\cos(6.1t) \\
\mathbf{b}_6 &= \sin(2.4t) + \cos(3.3t) \\
\mathbf{b}_7 &= 2\cos(2.2t) + \sin(2.9t) \\
\mathbf{b}_8 &= 1.3\cos(7.9t) + \sin(6.3t) \\
\mathbf{b}_9 &= 1.8\cos(8.1t)\sin(5.9t) \\
\mathbf{b}_{10} &= 0.5\sin[(5t + 25)\sin(7t)]
\end{aligned} \tag{17}$$
where $t \in \mathbb{R}^{N \times 1}$ is the time vector. To evaluate the performance of MieSoL in terms of basis extraction, the Average Correlation Coefficient (ACC) is selected as the indicator. ACC measures the average degree to which the estimated common bases and the true bases are correlated. It takes values between negative one and positive one, where an ACC of positive one indicates a perfect positive correlation between two variables.
We consider three individual cases for basis matrices. In the first case, the datasets are generated from one common basis and three uncommon bases; in the second case, the datasets are generated from two common bases and two uncommon bases; and, in the third case, the datasets are generated from three common bases and one uncommon basis, as follows:
Case 1:
$$\mathbf{B}_1 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \mathbf{b}_4], \quad \mathbf{B}_2 = [\mathbf{b}_1, \mathbf{b}_5, \mathbf{b}_6, \mathbf{b}_7], \quad \mathbf{B}_3 = [\mathbf{b}_1, \mathbf{b}_8, \mathbf{b}_9, \mathbf{b}_{10}] \tag{18}$$
Case 2:
$$\mathbf{B}_1 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \mathbf{b}_4], \quad \mathbf{B}_2 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_5, \mathbf{b}_6], \quad \mathbf{B}_3 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_7, \mathbf{b}_8] \tag{19}$$
Case 3:
$$\mathbf{B}_1 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \mathbf{b}_4], \quad \mathbf{B}_2 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \mathbf{b}_5], \quad \mathbf{B}_3 = [\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \mathbf{b}_6] \tag{20}$$
An example of the second case is shown in Figure 2. According to (19), there is a basis space of eight bases: two common bases ($m^* = 2$) and six uncommon bases (Figure 2a), two of which appear in each basis matrix $\mathbf{B}_i$. Each dataset, shown in Figure 2b, is generated from two common bases and two uncommon bases.
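For concreteness, the data-generation model $\mathbf{Y}_j = \mathbf{W}_j \mathbf{B}_j + \boldsymbol{\Omega}_j$ and the ACC metric can be sketched as below. The time axis, the uniform mixing range, and the greedy basis matching are illustrative assumptions (the phase constants are taken in radians).

```python
import numpy as np

def add_noise_snr(X, snr_db, rng):
    """Add white Gaussian noise to X at a target SNR (in dB)."""
    noise = rng.standard_normal(X.shape)
    p_sig = np.mean(X ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    return X + np.sqrt(p_noise) * noise

def average_correlation_coefficient(B_true, B_est):
    """ACC: mean absolute correlation between each true basis and its
    best-matching estimated basis (sign and order of separated sources
    are arbitrary, so matching is done greedily by |correlation|)."""
    k = len(B_true)
    corr = np.abs(np.corrcoef(B_true, B_est)[:k, k:])
    acc = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        acc.append(corr[i, j])
        corr[i, :] = -1.0          # remove matched true basis
        corr[:, j] = -1.0          # remove matched estimated basis
    return float(np.mean(acc))

# Example: build the second dataset of Case 2 in (19) at 10 dB.
rng = np.random.default_rng(0)
N = 5000
t = np.linspace(0, 20, N)
b1 = 2 * np.cos(4 * t) + np.sin(3 * t)
b2 = np.sin(3.1 * t + 120) + 0.25 * np.cos(1.9 * t - 23)
b5 = 3 * np.sin(2.1 * t) + 2.14 * np.cos(6.1 * t)
b6 = np.sin(2.4 * t) + np.cos(3.3 * t)
B2 = np.vstack([b1, b2, b5, b6])          # basis matrix (M = 4)
W2 = rng.uniform(-1, 1, size=(3, 4))      # random IID mixing (E = 3)
Y2 = add_noise_snr(W2 @ B2, snr_db=10, rng=rng)
```

A perfect recovery yields an ACC of exactly one, since every basis then matches itself with unit correlation.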
The Average Correlation Coefficient (ACC) was employed as the metric of denoising capability, with values closer to +1 indicating better performance. As shown in Table 1, the performance of MieSoL is evaluated using ACC on the synthetic datasets as the number of common bases varies. Table 1 reports the performance of three source separation methods, including the proposed method, in extracting the common bases for the three cases generated in (18), (19), and (20). The signal-to-noise ratios (SNRs) of the datasets are 5 dB, 10 dB, and 15 dB, respectively, and each example uses three samples of the case for source separation. SNR measures the strength of the desired signal relative to the strength of the background noise; a higher SNR indicates a less noisy signal. MieSoL is compared with the well-known PCA/ICA source separation pipeline, which cascades PCA and ICA, as well as with a combination of PCA and Multiview ICA (MICA) [49], a more recent popular multimodal source separator. Note that both of these methods require the number of sources; for that, we used MSEE, which outperforms other existing source-number estimators, which are mostly thresholding approaches [59]. As the table shows, MieSoL outperforms the existing approaches by a noticeable margin and provides very high accuracy, with ACCs close to one.
In the second scenario, the considered data contain six samples: three generated from Case 1 and three generated from Case 2. To mimic a real-life multimodal setting, each case has its own SNR. The results in Table 2 are shown as the SNR gap between the two cases increases. In these simulations, the SNR of the first case is fixed at 20 dB while the SNR of the three samples generated from the second case varies. For example, the first row is for when the SNRs of both cases are 20 dB. In the second row, the SNR of the second case is reduced to 15 dB, so the SNR difference is 5 dB. The SNR of the second case is then reduced to 10 and 5 dB for the subsequent rows. As the table shows, the existing methods degrade noticeably with the SNR mismatch, whereas MieSoL is extremely robust to the difference in the statistical properties of the two cases and outperforms the existing approaches by an even larger margin. Note that, in both Table 1 and Table 2, the standard deviation of the proposed method is in a range of 0.002 to 0.006 of its mean, while those of the existing methods are in a range of 0.0006 to 0.003 of their means. Although this value is lower for the existing methods, all methods have negligible standard deviations.

3.2. Synthetic EEG and MRI Data

To examine the performance of MieSoL, synthetic EEG and MRI datasets are generated using the BESA software (BESA®|Brain Electrical Source Analysis—https://www.besa.de), version 7.0. This software can co-register EEG data with individual MRI data and visualize dipole solutions developed by BESA Research in the individual anatomy. To generate the data, 10 EEG sources are used. Each EEG datapoint is generated with a resolution of 12 bits at a sampling frequency of 256 Hz, and the MRI is generated at a field strength of 1.5 Tesla. The SNR of the EEG is 5 dB, and the SNR of the MRI is 12 dB. For the 12 generated cases, the locations of the EEG sources are changed randomly.
To generate the required matrices in the form of Y_j in Section 2.2.1, that is, the input of ULeV, the multimodal data must be preprocessed. Both EEG and MRI data are first normalized to have zero mean and unit variance so that all modalities operate on a comparable numerical scale. Because ULeV requires all inputs to share the same matrix size, the EEG matrix and the MRI matrices are reshaped to a common fixed dimension, facilitating multimodal integration. After this size alignment, the matrices are concatenated to form a single unified data matrix, which is then provided as input to the ULeV module. This procedure ensures a consistent input structure across the modalities.
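A minimal sketch of this preparation step follows. The target size and the index-sampling resize are illustrative assumptions on our part; the paper does not specify the common dimension or the exact resizing scheme:

```python
import numpy as np

def preprocess(eeg, mri, target_shape=(64, 256)):
    """Sketch of the ULeV input preparation described in the text:
    z-score each modality, resize both to a common matrix size, and
    concatenate them into one unified input matrix."""
    def zscore(a):
        # Zero mean, unit variance across the whole modality.
        return (a - a.mean()) / a.std()

    def resize(a, shape):
        # Interpolation-free resize via index sampling (illustrative only;
        # a real pipeline would use a proper image/signal resampler).
        r = np.linspace(0, a.shape[0] - 1, shape[0]).astype(int)
        c = np.linspace(0, a.shape[1] - 1, shape[1]).astype(int)
        return a[np.ix_(r, c)]

    eeg_p = resize(zscore(eeg), target_shape)
    mri_p = resize(zscore(mri), target_shape)
    return np.vstack([eeg_p, mri_p])       # unified matrix fed to ULeV

rng = np.random.default_rng(0)
# Hypothetical sizes: a 19-channel EEG segment and a 1024 x 1024 MRI slice.
Y = preprocess(rng.standard_normal((19, 3750)), rng.standard_normal((1024, 1024)))
print(Y.shape)  # (128, 256)
```

The key point the sketch illustrates is that both modalities end up as equally sized blocks of one matrix, so ULeV can operate on them jointly rather than per modality.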
Average Error Estimation (AEE), which is commonly used for comparing source localizers, is shown in Table 3. AEE measures the average error between the estimated locations of the sources and their true locations. A low AEE value indicates that the coordinates of the estimated location are close to those of the true location, whereas a high AEE value indicates a remarkable difference between the estimated and true locations. The compared methods are the source localizer Efficient High-Resolution sLORETA (EHR-sLORETA) on its own, and EHR-sLORETA with either PCA/ICA or PCA/MICA preprocessing for source separation and denoising. The proposed method is also compared with a variant that pairs the same common-source extractor (ULeV) with conventional sLORETA.
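A sketch of the AEE computation as the mean Euclidean distance between true and estimated source coordinates is given below; the pairing of estimated to true sources is assumed given here, whereas in practice a matching step would precede it:

```python
import numpy as np

def aee(true_locs, est_locs):
    """Average Estimation Error: mean Euclidean distance (e.g., in mm)
    between matched true and estimated source locations."""
    true_locs = np.asarray(true_locs, float)
    est_locs = np.asarray(est_locs, float)
    return float(np.mean(np.linalg.norm(true_locs - est_locs, axis=1)))

# Hypothetical coordinates for two sources, in mm.
true = [[10.0, 20.0, 30.0], [-5.0, 0.0, 15.0]]
est  = [[13.0, 24.0, 30.0], [-5.0, 0.0, 15.0]]
print(aee(true, est))  # 2.5, i.e., (5 + 0) / 2
```

A perfectly localized source contributes zero to the average, so the metric directly rewards both accurate coordinates and a correct number of detected sources.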

3.3. Real-World Patient Data

In this experiment, the performance of MieSoL is validated on real-world patient datasets, and the results are verified by physicians. As we explain in Section 4, the results of this experiment are close to the medical diagnoses. EEG datasets and brain MRI scans of 26 patients are collected at the Epilepsy Clinic, Isfahan, Iran. Note that, due to the severity of their disease, these patients are candidates for epilepsy surgery according to the opinion of a specialist. EEG signals are recorded for 15 min through 19 electrodes at a sampling rate of 250 Hz with 12 bits of resolution, and MRI datasets are recorded at a field strength of 1.5 Tesla.
The MRI datasets of the patients, each of size 1024 × 1024, are used to construct each patient's brain mesh model with 15,000 vertices ( T = 15,000 ), and the electrode placement of the patient's EEG dataset is incorporated into the brain model as well. In this experiment, MieSoL uses the EEG and MRI datasets of the subject to localize the epileptic sources in the subject's brain, as shown in Figure 3, Figure 4 and Figure 5. In Figure 3, the green circle marks the estimated location of the epileptic source (Figure 3a–c), and the topographic maps of the corresponding sources are shown as well (Figure 3d–f). In Figure 4, the red circles illustrate the estimated locations of the epileptic sources, and the red condensed region represents the estimated location of the epileptic source in Figure 5. While the evaluation and performance results of MieSoL for the synthetic datasets are shown in Table 3, a similar evaluation against the existing methods on the patient data, measuring the AEE between the estimated and true source locations, is presented in Table 4. The locations of the true sources are obtained from the medical diagnoses of expert physicians.

4. Discussion

An automated approach was proposed to localize the epileptic sources in the brain by using both the EEG signals and the MRI scans of epileptic patients. The performance of MieSoL was evaluated using both synthesized and real-world datasets. The experimental results validated that MieSoL was able to localize epileptic sources in the brain with high accuracy. The recorded neural activities were denoised to remove noise and motion artifacts and to extract the mutual information between the modalities (EEG and MRI). The multimodal seizure-tuned EEG signals were then analyzed to detect excessive electrical impulses and estimate the location of the seizure discharges.
In this study, MieSoL was examined for its ability to remove the noisy part of the EEG signals without losing critical information and to extract the mutual information between EEG and MRI. The method showed a remarkable denoising performance compared to the other denoisers, such as MSEE+PCA+ICA and MSEE+PCA+MICA, as shown in Table 2. Moreover, in terms of extracting the mutual information, MieSoL performed significantly more accurately than the other methods, as shown in Table 1. The inverse solver, EHR-sLORETA, with a head model of 15,000 vertices ( T = 15,000 ) constructed from the MRI dataset, was used to localize the epileptic sources.
The performance of MieSoL was evaluated by computing the ACC between the estimated and desired common bases, as shown in Table 1. Because, in real applications, different modalities have different SNR values, MieSoL was also evaluated for a varying SNR difference between the two datasets, as shown in Table 2. As the table shows, MieSoL performed remarkably robustly both when the number of common bases was low and when there was a large SNR gap between the two datasets, while the existing methods degraded drastically with the SNR difference. As illustrated in Table 1, three different methods, MSEE+PCA+ICA, MSEE+PCA+MICA, and MieSoL, were used to extract the common bases from the synthetic datasets. As observed, the ACC value and the number of common bases were directly related. However, the decrease in ACC for MieSoL was small compared to the other two methods. In addition, the ACC values of MieSoL were higher than those of the compared methods for all numbers of common bases. According to these observations, MieSoL acted as a remarkable basis extractor, successfully extracting common bases that were highly correlated with the true bases across multiple datasets.
The same three methods used in Table 1 are used in Table 2 as well. As observed in Table 2, the ACC between the extracted common bases and the true common bases decreased as the SNR difference between the two datasets increased. However, the ACC value changed only slightly for MieSoL compared to the other two methods. This indicated that MieSoL demonstrated robust performance even under a large SNR difference between the two datasets. Consistent with the findings of Table 1 and Table 2, MieSoL could be used as an exceptional simultaneous multimodal denoiser and best-basis selector for estimating the noiseless dataset and extracting the mutual information between modalities. In the next part of the study, the performance of MieSoL was tested using the synthesized EEG and MRI datasets as the location and orientation of the sources varied. As shown in Table 3, five different methods, EHR-sLORETA, MSEE+PCA+ICA+EHR-sLORETA, MSEE+PCA+MICA+EHR-sLORETA, ULeV+sLORETA, and MieSoL, were used to localize the sources. Their performance was evaluated by measuring the AEE between the estimated and true source locations. According to Table 3, MieSoL exhibited a remarkable performance compared to the existing methods in terms of both source-number identification and AEE, as evidenced by MieSoL having the lowest AEE score across the different sources. In addition, the minimal AEE values of MieSoL presented convincing evidence that this method was able to estimate the source locations close to the true locations with high confidence. It is worth mentioning that the four other methods performed inadequately when the sources were located in the deeper layers of the brain, as in case numbers 3, 6, and 12. As the table shows, another important advantage of MieSoL over the existing methods was its reliability in finding the true number of sources.
As was shown, when the number of sources changed, MSEE+PCA+ICA+EHR-sLORETA and MSEE+PCA+MICA+EHR-sLORETA over- or underestimated the number of true sources, while our proposed method consistently chose the correct number of true sources in all scenarios. This concluded the evaluation of MieSoL on the synthetic datasets.
Next, the performance of the proposed method was validated using the EEG and MRI of 26 epileptic patients. MieSoL was used to localize the epileptic sources in each patient based on the patient's EEG signals and MRI scans. As shown in Figure 3a, MieSoL was able to detect a focal seizure in the left parietal lobe of Subject 1's brain. Based on the corresponding topographic map (Figure 3d), the seizure originated in the inferior parietal lobe and was oriented toward the right central region. For Subject 2, a focal seizure was discovered in the superior parietal lobe of the brain (Figure 3b). For this subject, the seizure started in the superior parietal lobe and extended to the superior occipital lobe. For Subject 3, MieSoL localized a focal seizure in the right frontal lobe of the brain (Figure 3c). According to the corresponding topographic map (Figure 3f), the seizure was generated in the right frontal lobe and spread toward the right parietal lobe. Moreover, MieSoL detected two seizure sources in the right central and right frontal regions of a subject's brain, indicated by red circles (Figure 4). Analyzing the density of the epileptic sources obtained through MieSoL showed that the density of the seizure source in the right central region was higher than that in the right frontal region; we could therefore conclude that the seizure originated in the right central region of the brain and extended to the right frontal region. In Figure 5, a similar situation was observed for another subject, where MieSoL localized the origin of the seizure source in the left lateral region and followed its expansion to the frontal region of the brain, indicated by a red condensed region. In addition, the power graph beneath represents the power of the source.
The obtained findings for all of the subjects (including the above observations) were numerically compared with the outcomes of the compared methods by calculating the AEE value (Table 4). As the table shows, the AEE value of the proposed method was minimal compared to the other approaches. It is interesting to note not only that the proposed method outperforms the other methods in terms of AEE, but also that the provided minimum AEE is achieved with a lower number of source locations for the seizure. In other words, the proposed method detects fewer seizure sources, which is expected to be more realistic. Note that, in all the above applications, the value of p in ULeV in (3) is set to 15, as suggested in [61]. For all cases, varying p over the range 1 to 20 confirmed that a value of 15 performs well, with values up to 20 yielding negligible improvement.
We discussed our results with physicians and found them remarkably close to the medical diagnoses. The detected seizure locations were consistently within the zones identified by the physicians, and the detected behavior of the seizures was considered informative. As a result, MieSoL was able to automatically localize the epileptic sources with high accuracy as the seizure location and size varied in the brain. In addition, by analyzing the density of the estimated epileptic sources, it was shown that MieSoL was able to estimate the orientation of the epileptic sources as well. According to these findings, MieSoL is a reliable technique for estimating the location and orientation of epileptic sources with high accuracy. These results stem from the efficient extraction of the mutual information of both the EEG and MRI datasets of an epileptic patient, which effectively localizes the epileptic sources in the patient's brain.

5. Conclusions

This paper proposed an automated epileptic source localization method based on jointly processing and analyzing epileptic EEG signals and brain MRI scans. Common seizure bases were extracted from the epileptic EEG and MRI datasets using ULeV, a newly proposed multimodal basis extraction method. The results for both synthetic and real data were promising. The approach achieves higher precision in locating the seizure sources compared to state-of-the-art source localization methods.
The proposed method has three main components: the first performs dimension reduction, the second concentrates on source separation and selection of the number of sources, and the last performs the source localization. The first and second components outperform (in the sense of ACC) the existing approaches. Also, due to the multimodal signal processing and the extraction of the joint information of EEG and MRI, the source localization approach notably outperforms the available methods in the sense of AEE. MieSoL has shown full accuracy in detecting the correct number of sources for the synthetic data. It is worth mentioning that, even though the method achieves a lower AEE than the existing approaches, the average number of seizure sources chosen by the proposed approach is lower than those proposed by the existing methods. In addition, the method demonstrated robust performance in terms of reliability and accuracy for both the synthetic and real data, even when the multimodal data are not recorded simultaneously. MieSoL shows great potential for future applications in the diagnosis of other diseases by extracting and utilizing the mutual information of any combination of available signals, such as EEG, MRI, ECG, and fMRI. By supporting both limited-data scenarios and asynchronous acquisitions, MieSoL is expected to provide an automated EEG–MRI multimodal analysis solution for cases where traditional approaches often fail.

Author Contributions

Conceptualization, S.B., E.N. and Y.S.-N.; Methodology, S.B., E.N. and Y.S.-N.; Software, E.N. and Y.S.-N.; Validation, S.B., E.N., Y.S.-N. and Y.N.; Formal analysis, S.B., E.N. and Y.N.; Investigation, E.N., Y.S.-N. and Y.N.; Resources, S.B., E.N. and Y.S.-N.; Data curation, E.N.; Writing—original draft, E.N.; Writing—review & editing, S.B., E.N., Y.S.-N. and Y.N.; Visualization, E.N. and Y.S.-N.; Supervision, S.B.; Project administration, S.B. and Y.N.; Funding acquisition, S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by NSERC grant number RGPIN-2024-05366.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. The data provided to the authors were fully anonymized by the Epilepsy Clinic prior to analysis. A blank copy of the clinic’s consent form has been provided to the journal editor.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Arabi, H.; Ahmadi, A.; Akhavanallaf, M.; Arab, A.; Zaidi, S.; Ghaffarian, A. Comparative study of algorithms for synthetic CT generation from MRI: Consequences for MRI-guided radiation planning in the pelvic region. Med. Phys. 2018, 45, 5218–5233. [Google Scholar] [CrossRef]
  2. Chen, Y.; Yang, Q.; Ma, X. Brain MRI super resolution using 3D deep densely connected neural networks. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Washington, DC, USA, 4–7 April 2018. [Google Scholar] [CrossRef]
  3. Shen, F.; Peng, Y.; Dai, G.; Lu, B.; Kong, W. Coupled Projection Transfer Metric Learning for Cross-Session Emotion Recognition from EEG. Systems 2022, 10, 47. [Google Scholar] [CrossRef]
  4. Canuet, L.; Ishii, R.; Pascual-Marqui, R.D.; Iwase, M.; Kurimoto, R.; Aoki, Y.; Ikeda, S.; Takahashi, H.; Nakahachi, T.; Takeda, M. Resting-state EEG source localization and functional connectivity in schizophrenia-like psychosis of epilepsy. PLoS ONE 2011, 6, e27863. [Google Scholar] [CrossRef]
  5. Reisberg, B.; Prichep, L.; Mosconi, L.; John, E.R.; Glodzik-Sobanska, L.; Boksay, I.; Monteiro, I.; Torossian, C.; Vedvyas, A.; Ashraf, N.; et al. The pre–mild cognitive impairment, subjective cognitive impairment stage of Alzheimer’s disease. Alzheimer’s Dement. 2008, 4, S98–S108. [Google Scholar] [CrossRef]
  6. Jiang, Z.; Chung, F.; Wang, S. Recognition of multiclass epileptic EEG signals based on knowledge and label space inductive transfer. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 630–642. [Google Scholar] [CrossRef]
  7. Stergiadis, C.; Halliday, D.M.; Kazis, D.; Klados, M.A. Functional connectivity of interictal iEEG and the connectivity of high-frequency components in epilepsy. Brain Organoid Syst. Neurosci. J. 2023, 1, 3–12. [Google Scholar] [CrossRef]
  8. Al-Salman, W.; Li, Y.; Wen, P.; Miften, F.S.; Oudah, A.Y.; Ghayab, H.R.A. Extracting epileptic features in EEGs using a dual-tree complex wavelet transform coupled with a classification algorithm. Brain Res. 2022, 1779, 147777. [Google Scholar] [CrossRef]
  9. Karasmanoglou, A.; Giannakakis, G.; Vorgia, P.; Antonakakis, M.; Zervakis, M. Semi-Supervised anomaly detection for the prediction and detection of pediatric focal epileptic seizures on fused EEG and ECG data. Biomed. Signal Process. Control 2025, 101, 107083. [Google Scholar] [CrossRef]
  10. Yazid, M.; Ibrahim, A.; Aziz, S. Simple detection of epilepsy from EEG signal using local binary pattern transition histogram. IEEE Access 2021, 9, 150252–150267. [Google Scholar] [CrossRef]
  11. Abugabah, A.; Abouhawwash, M.; Alsaade, A. Brain epilepsy seizure detection using bio-inspired krill herd and artificial alga optimized neural network approaches. J. Ambient Intell. Humaniz. Comput. 2021, 12, 3317–3328. [Google Scholar] [CrossRef]
  12. Sharaf, A.I.; El-Soud, M.A.; El-Henawy, I.M. An automated approach for epilepsy detection based on tunable Q-wavelet and firefly feature selection algorithm. Int. J. Biomed. Imaging 2018, 2018, 5812872. [Google Scholar] [CrossRef] [PubMed]
  13. Omidvar, M.; Zahedi, A.; Bakhshi, H. EEG signal processing for epilepsy seizure detection using 5-level Db4 discrete wavelet transform, GA-based feature selection and ANN/SVM classifiers. J. Ambient Intell. Humaniz. Comput. 2021, 12, 10395–10403. [Google Scholar] [CrossRef]
  14. Sukriti; Chakraborty, M.; Mitra, D. Epilepsy seizure detection using kurtosis based VMD’s parameters selection and bandwidth features. Biomed. Signal Process. Control 2021, 64, 102255. [Google Scholar] [CrossRef]
  15. Stamoulis, C.; Chang, J.; Johnson, R. Noninvasive seizure localization with single-photon emission computed tomography is impacted by preictal/early ictal network dynamics. IEEE Trans. Biomed. Eng. 2019, 66, 1863–1871. [Google Scholar] [CrossRef]
  16. Mannan, M.M.N.; Rahman, A.H.M.; Ali, M.A. Hybrid EEG–eye tracker: Automatic identification and removal of eye movement and blink artifacts from electroencephalographic signal. Sensors 2016, 16, 241. [Google Scholar] [CrossRef]
  17. Yedurkar, D.P.; Metkar, S.P. Multiresolution approach for artifacts removal and localization of seizure onset zone in epileptic EEG signal. Biomed. Signal Process. Control 2020, 57, 101794. [Google Scholar] [CrossRef]
  18. Keshava, M.G.N.; Ahmed, K.Z. Correction of ocular artifacts in EEG signal using empirical mode decomposition and cross-correlation. Res. J. Biotechnol. 2014, 9, 24–29. [Google Scholar]
  19. Shoji, T.; Yoshida, N.; Tanaka, T. Automated detection of abnormalities from an EEG recording of epilepsy patients with a compact convolutional neural network. Biomed. Signal Process. Control 2021, 70, 103013. [Google Scholar] [CrossRef]
  20. Jafadideh, A.T.; Asl, B.M. A new data covariance matrix estimation for improving minimum variance brain source localization. Comput. Biol. Med. 2022, 143, 105324. [Google Scholar] [CrossRef] [PubMed]
  21. Cherian, R.; Kanaga, E.G. Theoretical and methodological analysis of EEG based seizure detection and prediction: An exhaustive review. J. Neurosci. Methods 2022, 369, 109483. [Google Scholar] [CrossRef] [PubMed]
  22. Wu, M.; Zhang, X.; Li, Y. A new localization method for epileptic seizure onset zones based on time-frequency and clustering analysis. Pattern Recognit. 2021, 111, 107675. [Google Scholar] [CrossRef]
  23. Mierlo, P.V.; Carrette, S.; Vonck, K. Ictal EEG source localization in focal epilepsy: Review and future perspectives. Clin. Neurophysiol. 2020, 131, 2600–2616. [Google Scholar] [CrossRef]
  24. Staljanssens, W.; Carrette, B.; Meurs, A. Seizure onset zone localization from ictal high-density EEG in refractory focal epilepsy. Brain Topogr. 2016, 30, 257–271. [Google Scholar] [CrossRef] [PubMed]
  25. Fu, X.; Yang, Y.; Wang, T. Integrating optimized multiscale entropy model with machine learning for the localization of epileptogenic hemisphere in temporal lobe epilepsy using resting-state fMRI. J. Healthc. Eng. 2021, 2021, 1834123. [Google Scholar] [CrossRef]
  26. Tsiakiri, A.; Panagiotopoulos, S.; Tzallas, D. Mapping brain networks and cognitive functioning after stroke: A systematic review. Brain Organoid Syst. Neurosci. J. 2024, 2, 43–52. [Google Scholar] [CrossRef]
  27. Bacher, D.; Amini, A.; Friedman, D.; Doyle, W.; Pacia, S.; Kuzniecky, R. Validation of an EEG seizure detection paradigm optimized for clinical use in a chronically implanted subcutaneous device. J. Neurosci. Methods 2021, 358, 109220. [Google Scholar] [CrossRef] [PubMed]
  28. Chen, M.; Guo, K.; Lu, K.; Meng, K.; Lu, J.; Pang, Y.; Zhang, L.; Hu, Y.; Yu, R.; Zhang, R. Localizing the seizure onset zone and predicting the surgery outcomes in patients with drug-resistant epilepsy: A new approach based on the causal network. Comput. Methods Programs Biomed. 2025, 258, 108483. [Google Scholar] [CrossRef] [PubMed]
  29. Jiao, M.; Yang, S.; Xian, X.; Fotedar, N.; Liu, F. Multi-Modal Electrophysiological Source Imaging With Attention Neural Networks Based on Deep Fusion of EEG and MEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 2492–2502. [Google Scholar] [CrossRef]
  30. Tajmirriahi, M.; Rabbani, H. A review of EEG-based localization of epileptic seizure foci: Common points with multimodal fusion of brain data. J. Med. Signals Sens. 2024, 14, 19. [Google Scholar] [CrossRef]
  31. Sadjadi, S.M.; Ebrahimzadeh, E.; Shams, M.; Seraji, M.; Soltanian-Zadeh, H. Localization of epileptic foci based on simultaneous EEG–fMRI data. Front. Neurol. 2021, 12, 645594. [Google Scholar] [CrossRef]
  32. Yağmur, F.D.; Sertbaş, A. A High-Performance Method Based on Features Fusion of EEG Brain Signal and MRI-Imaging Data for Epilepsy Classification. Meas. Sci. Rev. 2024, 24, 1–8. [Google Scholar] [CrossRef]
  33. Karimi-Rouzbahani, H.; Vogrin, S.; Cao, M.; Plummer, C.; McGonigal, A. Multimodal and quantitative analysis of the epileptogenic zone network in the pre-surgical evaluation of drug-resistant focal epilepsy. Neurophysiol. Clin. 2024, 54, 103021. [Google Scholar] [CrossRef]
  34. Wirsich, J.; Iannotti, G.R.; Ridley, B.; Shamshiri, E.A.; Sheybani, L.; Grouiller, F.; Bartolomei, F.; Seeck, M.; Lazeyras, F.; Ranjeva, J.P.; et al. Altered correlation of concurrently recorded EEG-fMRI connectomes in temporal lobe epilepsy. Netw. Neurosci. 2024, 8, 466–485. [Google Scholar] [CrossRef] [PubMed]
  35. Hekmati, R.; Azencott, R.; Zhang, W.; Chu, Z.D.; Paldino, M.J. Localization of epileptic seizure focus by computerized analysis of fMRI recordings. Brain Inform. 2020, 7, 13. [Google Scholar] [CrossRef] [PubMed]
  36. Liu, D.; Pang, Z.; Wang, Z. Epileptic seizure prediction by a system of particle filter associated with a neural network. EURASIP J. Adv. Signal Process. 2009, 2009, 638534. [Google Scholar] [CrossRef]
  37. Kabir, E.; Siuly; Zhang, Y. Epileptic seizure detection from EEG signals using logistic model trees. Brain Inform. 2016, 3, 93–100. [Google Scholar] [CrossRef] [PubMed]
  38. Kunekar, P.; Gupta, M.K.; Gaur, P. Detection of epileptic seizure in EEG signals using machine learning and deep learning techniques. J. Eng. Appl. Sci. 2024, 71, 21. [Google Scholar] [CrossRef]
  39. Kode, H.; Elleithy, K.; Almazaydeh, L. Epileptic seizure detection in EEG signals using machine learning and deep learning techniques. IEEE Access 2024, 12, 80657–80668. [Google Scholar] [CrossRef]
  40. Zhou, W.; Zheng, W.; Feng, Y.; Li, X. LMA-EEGNet: A Lightweight Multi-Attention Network for Neonatal Seizure Detection Using EEG signals. Electronics 2024, 13, 2354. [Google Scholar] [CrossRef]
  41. Ouichka, O.; Echtioui, A.; Hamam, H. Deep Learning Models for Predicting Epileptic Seizures Using iEEG Signals. Electronics 2022, 11, 605. [Google Scholar] [CrossRef]
  42. Liu, S.; Zhou, Y.; Yang, X.; Wang, X.; Yin, J. A Robust Automatic Epilepsy Seizure Detection Algorithm Based on Interpretable Features and Machine Learning. Electronics 2024, 13, 2727. [Google Scholar] [CrossRef]
  43. Ghazali, S.M.; Alizadeh, M.; Mazloum, J.; Baleghi, Y. Modified binary salp swarm algorithm in EEG signal classification for epilepsy seizure detection. Biomed. Signal Process. Control 2022, 78, 103858. [Google Scholar] [CrossRef]
  44. Qaisar, S.M.; Subasi, A. Effective epileptic seizure detection based on the event-driven processing and machine learning for mobile healthcare. J. Ambient. Intell. Humaniz. Comput. 2020, 13, 3619–3631. [Google Scholar] [CrossRef]
  45. Subasi, A.; Kevric, J.; Canbaz, M.A. Epileptic seizure detection using hybrid machine learning methods. Neural Comput. Appl. 2019, 31, 317–325. [Google Scholar] [CrossRef]
  46. Oliva, J.T.; Rosa, J.L.G. Binary and multiclass classifiers based on multitaper spectral features for epilepsy detection. Biomed. Signal Process. Control 2021, 66, 102469. [Google Scholar] [CrossRef]
  47. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [Google Scholar] [CrossRef] [PubMed]
  48. Guo, L.; Rivero, D.; Dorado, J.; Rabuñal, J.R.; Pazos, A. Automatic epileptic seizure detection in EEGs based on line length feature and artificial neural networks. J. Neurosci. Methods 2010, 191, 101–109. [Google Scholar] [CrossRef] [PubMed]
  49. Callan, D.E.; Callan, A.M.; Kroos, C.; Vatikiotis-Bateson, E. Multimodal contribution to speech perception revealed by independent component analysis: A single-sweep EEG case study. Cogn. Brain Res. 2001, 10, 349–353. [Google Scholar] [CrossRef] [PubMed]
  50. Ebrahimzadeh, E.; Shams, M.; Fayaz, F.; Rajabion, L.; Mirbagheri, M.; Araabi, B.N.; Soltanian-Zadeh, H. Quantitative determination of concordance in localizing epileptic focus by component-based EEG-fMRI. Comput. Methods Programs Biomed. 2019, 177, 231–241. [Google Scholar] [CrossRef]
  51. Chang, C.; Chen, J.E. Multimodal EEG-fMRI: Advancing insight into large-scale human brain dynamics. Curr. Opin. Biomed. Eng. 2021, 18, 100279. [Google Scholar] [CrossRef]
  52. Henson, R.N.; Abdulrahman, H.; Flandin, G.; Litvak, V. Multimodal integration of M/EEG and f/MRI data in SPM12. Front. Neurosci. 2019, 13, 300. [Google Scholar] [CrossRef]
  53. ElSayed, N.E.; Tolba, A.S.; Rashad, M.Z.; Belal, T.; Sarhan, S. Multimodal analysis of electroencephalographic and electrooculographic signals. Comput. Biol. Med. 2021, 137, 104809. [Google Scholar] [CrossRef]
  54. Conradsen, I.; Beniczky, S.; Wolf, P.; Kjaer, T.W.; Sams, T.; Sorensen, H.B. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data. Comput. Methods Programs Biomed. 2012, 107, 97–110. [Google Scholar] [CrossRef]
  55. Ham, S.M.; Lee, H.M.; Lim, J.H.; Seo, J. A Negative Emotion Recognition System with Internet of Things-Based Multimodal Biosignal Data. Electronics 2023, 12, 4321. [Google Scholar] [CrossRef]
  56. Sadat-Nejad, Y.; Beheshti, S. Efficient high resolution sLORETA in brain source localization. J. Neural Eng. 2021, 18, 016013. [Google Scholar] [CrossRef]
  57. Sui, J.; Adali, T.; Yu, Q.; Chen, J.; Calhoun, V.D. A review of multivariate methods for multimodal fusion of brain imaging data. J. Neurosci. Methods 2012, 204, 68–81. [Google Scholar] [CrossRef]
  58. Akaho, S.; Kiuchi, Y.; Umeyama, S. MICA: Multimodal independent component analysis. In Proceedings of the IJCNN’99: International Joint Conference on Neural Networks, Washington, DC, USA, 10–16 July 1999; Proceedings (Cat. No.99CH36339). Volume 2, pp. 927–932. [Google Scholar] [CrossRef]
  59. Beheshti, S.; Sedghizadeh, S. Number of source signal estimation by the mean squared eigenvalue error. IEEE Trans. Signal Process. 2018, 66, 5694–5704. [Google Scholar] [CrossRef]
  60. Ge, Z.; Song, Z. Online monitoring of nonlinear multiple mode processes based on adaptive local model approach. Control Eng. Pract. 2008, 16, 1427–1437. [Google Scholar] [CrossRef]
  61. Naghsh, E.; Danesh, M.; Beheshti, S. Unified left eigenvector (ULEV) for blind source separation. Electron. Lett. 2022, 58, 41–43. [Google Scholar] [CrossRef]
  62. Beheshti, S.; Shamsi, M. ϵ-confidence approximately correct (ϵ-coac) learnability and hyperparameter selection in linear regression modeling. IEEE Access 2025, 13, 14273–14289. [Google Scholar] [CrossRef]
  63. Shamsi, M.; Beheshti, S. Relative entropy (RE)-based LTI system modeling equipped with simultaneous time delay estimation and online modeling. IEEE Access 2023, 11, 113885–113899. [Google Scholar] [CrossRef]
  64. Beheshti, S.; Nidoy, E.; Rahman, F. K-mace and kernel k-mace clustering. IEEE Access 2020, 8, 17390–17403. [Google Scholar] [CrossRef]
  65. Beheshti, S.; Bommanahally, V. Minimum Mismatch Modeling (3M) Hyperparameter Selection in Autoregressive Moving Average (ARMA) Modeling. IEEE Access 2025, 13, 133681–133693. [Google Scholar] [CrossRef]
  66. Bayati, K.; Umapathy, K.; Beheshti, S. Reliable truncation parameter selection and model order estimation for stochastic subspace identification. J. Frankl. Inst. 2025, 362, 107766. [Google Scholar] [CrossRef]
  67. Dong, G.; Basu, A. Medical Image Denoising via Explainable AI Feature Preserving Loss. In Proceedings of the Smart Multimedia, Paris, France, 19–21 December 2025; pp. 17–32. [Google Scholar]
Figure 1. Block diagram of MieSoL.
Figure 2. (a) Basis space of Case 2 with two common bases b1 and b2 (shown in red) and two uncommon bases from the other six bases b3–b8 (shown in blue). (b) Three samples of the synthesized datasets for the three cases (Y1, Y2, Y3). Each dataset Yi has three observations in three colors.
Figure 3. (a–c) Estimated focal seizure representations in the brain for Subjects 1 to 3, respectively. The green arrow indicates the fitted dipole location and orientation for each subject. (d–f) Corresponding scalp topographic maps of the estimated focal seizure activity for Subjects 1 to 3, respectively. The black arrow depicts the orientation of the estimated dipole current flow.
Figure 4. Epileptic source localization representation for Subject 2. The six colored pins represent six distinct estimated neural sources. The two red circles highlight the subset of these sources identified as seizure-generating regions.
Figure 5. Epileptic topographic map of a subject.
Table 1. ACC between the common bases and the estimated common bases as the number of common bases varies for Case 1, Case 2, and Case 3. (Averaged over 100 runs).
| Number of Common Bases | MSEE+PCA+ICA | MSEE+PCA+MICA | MSEE+ULeV |
|---|---|---|---|
| 1 | 0.65 | 0.74 | 0.96 |
| 2 | 0.78 | 0.89 | 0.97 |
| 3 | 0.83 | 0.91 | 0.98 |
Table 2. ACC between the common bases and the estimated common bases as the SNR difference between the two datasets varies (the first dataset has an SNR of 5 dB).
| SNR Difference (dB) | MSEE+PCA+ICA | MSEE+PCA+MICA | MSEE+ULeV |
|---|---|---|---|
| 0 | 0.67 | 0.77 | 0.98 |
| 5 | 0.62 | 0.64 | 0.97 |
| 10 | 0.52 | 0.60 | 0.93 |
| 15 | 0.35 | 0.42 | 0.88 |
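The ACC scores in Tables 1 and 2 compare the estimated common bases against the true ones. As one plausible reading of this metric (the paper's exact definition is not restated in this excerpt), each true common basis can be paired with its best-matching estimated basis and the matched absolute correlations averaged; the sketch below illustrates that idea and is hypothetical, not the authors' implementation.

```python
import numpy as np

def acc(true_bases, est_bases):
    # Greedily pair each true common basis with the unused estimated basis
    # that has the highest absolute Pearson correlation, then average the
    # matched correlations. Illustrative stand-in for the ACC metric; the
    # paper's exact matching rule may differ.
    used, scores = set(), []
    for b in true_bases:
        best_r, best_j = 0.0, None
        for j, e in enumerate(est_bases):
            if j in used:
                continue
            r = abs(np.corrcoef(b, e)[0, 1])
            if r > best_r:
                best_r, best_j = r, j
        if best_j is not None:
            used.add(best_j)
        scores.append(best_r)
    return float(np.mean(scores))
```

A perfect estimate yields a score of 1; consistent with Table 2, heavier noise on one dataset pulls the matched correlations down.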
Table 3. Estimated number of sources and Average Error Estimation (AEE) between the locations of the 10 true sources and the estimated sources. Results are shown as pairs: (estimated number of sources, AEE).
| # | EHR-sLORETA | MSEE+PCA+ICA+EHR-sLORETA | MSEE+PCA+MICA+EHR-sLORETA | MSEE+ULeV+sLORETA | MieSoL |
|---|---|---|---|---|---|
| 1 | (7, 6.32) | (7, 5.30) | (10, 3.34) | (9, 2.08) | (10, 1.22) |
| 2 | (7, 7.68) | (7, 5.88) | (11, 4.29) | (9, 2.42) | (10, 1.44) |
| 3 | (4, 9.42) | (3, 8.62) | (13, 6.12) | (11, 3.65) | (10, 2.23) |
| 4 | (9, 7.12) | (7, 5.52) | (10, 4.03) | (9, 2.19) | (10, 1.21) |
| 5 | (8, 8.24) | (6, 6.07) | (11, 4.54) | (9, 2.72) | (10, 1.80) |
| 6 | (6, 10.14) | (3, 8.14) | (12, 5.67) | (11, 3.34) | (10, 2.26) |
| 7 | (8, 7.02) | (7, 5.16) | (10, 3.61) | (10, 1.89) | (10, 1.01) |
| 8 | (7, 11.23) | (5, 7.97) | (12, 5.44) | (11, 3.55) | (10, 2.10) |
| 9 | (8, 7.78) | (7, 5.26) | (13, 6.62) | (10, 1.99) | (10, 1.22) |
| 10 | (8, 8.09) | (7, 5.93) | (11, 4.26) | (9, 2.55) | (10, 1.25) |
| 11 | (7, 7.23) | (6, 6.06) | (11, 4.48) | (9, 2.67) | (10, 1.22) |
| 12 | (5, 10.76) | (3, 8.45) | (10, 5.93) | (11, 3.28) | (10, 2.03) |
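The AEE values reported in Tables 3 and 4 measure how far estimated source locations fall from the true ones. One natural reading of such a metric, shown as a hypothetical sketch below (the paper's exact matching rule is not restated in this excerpt), is the average Euclidean distance from each true source to its nearest estimated source.

```python
import numpy as np

def aee(true_locs, est_locs):
    # Average Euclidean distance from each true source location to its
    # nearest estimated source. Illustrative reading of AEE; the paper may
    # use a different true-to-estimate assignment.
    true_locs = np.asarray(true_locs, dtype=float)
    est_locs = np.asarray(est_locs, dtype=float)
    # Pairwise distance matrix: true sources along rows, estimates along columns.
    d = np.linalg.norm(true_locs[:, None, :] - est_locs[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```

For example, with true sources at (0, 0, 0) and (10, 0, 0) and estimates at (0, 0, 1) and (10, 0, 0), the per-source errors are 1 and 0, giving an AEE of 0.5.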
Table 4. Estimated number of sources and Average Error Estimation (AEE) between the locations of the true sources and the estimated sources. Results are shown as pairs: (estimated number of sources, AEE).
| # | MSEE+PCA+ICA+EHR-sLORETA | MSEE+PCA+MICA+EHR-sLORETA | MSEE+ULeV+sLORETA | MieSoL |
|---|---|---|---|---|
| 1 | (1, 5.65) | (3, 4.02) | (3, 2.23) | (2, 1.45) |
| 2 | (2, 6.61) | (3, 4.53) | (2, 2.75) | (2, 1.82) |
| 3 | (2, 8.74) | (4, 6.45) | (2, 3.98) | (1, 2.49) |
| 4 | (3, 5.85) | (3, 4.26) | (4, 2.42) | (2, 1.70) |
| 5 | (1, 6.40) | (3, 4.87) | (2, 3.05) | (1, 1.09) |
| 6 | (1, 8.47) | (4, 6.33) | (2, 3.63) | (2, 2.68) |
| 7 | (2, 5.35) | (2, 3.94) | (3, 2.21) | (3, 1.27) |
| 8 | (4, 8.30) | (4, 5.68) | (3, 3.50) | (3, 2.33) |
| 9 | (1, 5.59) | (3, 4.48) | (4, 2.32) | (3, 1.45) |
| 10 | (4, 6.26) | (4, 4.59) | (4, 2.92) | (3, 1.98) |
| 11 | (2, 6.39) | (4, 4.81) | (2, 3.03) | (2, 2.05) |
| 12 | (3, 8.78) | (3, 6.24) | (2, 3.61) | (1, 2.97) |
| 13 | (3, 5.57) | (5, 3.95) | (4, 2.27) | (2, 1.42) |
| 14 | (1, 6.15) | (3, 4.51) | (3, 2.70) | (1, 1.83) |
| 15 | (1, 8.43) | (2, 5.89) | (3, 3.24) | (1, 2.52) |
| 16 | (1, 6.19) | (3, 4.61) | (3, 2.82) | (1, 1.89) |
| 17 | (2, 8.44) | (2, 5.58) | (4, 3.04) | (2, 1.85) |
| 18 | (1, 8.15) | (2, 5.62) | (4, 3.39) | (3, 2.37) |
| 19 | (4, 5.49) | (2, 3.81) | (4, 2.16) | (3, 1.41) |
| 20 | (2, 7.86) | (3, 5.38) | (4, 3.10) | (2, 2.03) |
| 21 | (3, 5.82) | (4, 3.65) | (3, 1.93) | (3, 1.08) |
| 22 | (3, 5.36) | (3, 3.88) | (3, 2.01) | (3, 1.27) |
| 23 | (2, 5.61) | (4, 4.12) | (2, 2.43) | (3, 1.35) |
| 24 | (1, 6.11) | (5, 4.67) | (3, 2.99) | (2, 1.72) |
| 25 | (3, 7.56) | (5, 5.47) | (4, 3.20) | (2, 1.98) |
| 26 | (1, 8.56) | (3, 5.84) | (4, 3.23) | (3, 1.82) |

Share and Cite

MDPI and ACS Style

Beheshti, S.; Naghsh, E.; Sadat-Nejad, Y.; Naderahmadian, Y. Multimodal Mutual Information Extraction and Source Detection with Application in Focal Seizure Localization. Electronics 2025, 14, 4897. https://doi.org/10.3390/electronics14244897
