Systematic Review

Brain-Computer Interfaces and AI Segmentation in Neurosurgery: A Systematic Review of Integrated Precision Approaches

by Sayantan Ghosh 1,†, Padmanabhan Sindhujaa 2,†, Dinesh Kumar Kesavan 3, Balázs Gulyás 4,5 and Domokos Máthé 6,*

1 Department of Integrative Biology, Vellore Institute of Technology, Vellore 632014, India
2 PSG Institute of Medical Sciences & Research, Coimbatore 641004, India
3 School of Material Science and Engineering, Nanyang Technological University, Singapore 308232, Singapore
4 Centre for Neuroimaging Research, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308232, Singapore
5 Department of Clinical Neuroscience, Karolinska Institute, 17176 Stockholm, Sweden
6 Department of Biophysics and Radiation Biology, Semmelweis University, 1085 Budapest, Hungary
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Surgeries 2025, 6(3), 50; https://doi.org/10.3390/surgeries6030050
Submission received: 30 March 2025 / Revised: 13 June 2025 / Accepted: 19 June 2025 / Published: 26 June 2025

Abstract

Background: Brain–computer interfaces (BCIs) and artificial intelligence (AI)-driven image segmentation are revolutionizing precision neurosurgery by enhancing surgical accuracy, reducing human error, and improving patient outcomes. Methods: This systematic review explores the integration of AI techniques—particularly deep learning (DL) and convolutional neural networks (CNNs)—with neuroimaging modalities such as MRI, CT, EEG, and ECoG for automated brain mapping and tissue classification. Eligible clinical and computational studies, primarily published between 2015 and 2025, were identified via PubMed, Scopus, and IEEE Xplore. The review follows PRISMA guidelines and is registered with the OSF (registration number: J59CY). Results: AI-based segmentation methods have demonstrated Dice similarity coefficients exceeding 0.91 in glioma boundary delineation and tumor segmentation tasks. Concurrently, BCI systems leveraging EEG and SSVEP paradigms have achieved information transfer rates surpassing 22.5 bits/min, enabling high-speed neural decoding with sub-second latency. We critically evaluate real-time neural signal processing pipelines and AI-guided surgical robotics, emphasizing clinical performance and architectural constraints. Integrated systems improve targeting precision and postoperative recovery across select neurosurgical applications. Conclusions: This review consolidates recent advancements in BCI and AI-driven medical imaging, identifies barriers to clinical adoption—including signal reliability, latency bottlenecks, and ethical uncertainties—and outlines research pathways essential for realizing closed-loop, intelligent neurosurgical platforms.

1. Introduction

The intersection of neuroscience, artificial intelligence (AI), and clinical engineering has led to transformative innovations in surgical precision. As the complexity of neurosurgical procedures increases, there is a pressing need for intelligent, adaptive technologies that can process neural and imaging data in real time. Brain–computer interfaces (BCIs) and AI-driven image segmentation stand at the forefront of this shift, offering unprecedented possibilities in real-time decision support, neurorehabilitation, and intraoperative assistance.

1.1. Background and Motivation

BCIs have emerged as a transformative technology, enabling direct communication between the brain and external devices without reliance on traditional neuromuscular pathways [1]. Initially conceptualized by Vidal as a means of direct brain–computer communication [2], BCIs have since evolved into a sophisticated interdisciplinary domain encompassing neuroscience, AI, and bioengineering [3]. The fundamental goal of BCI systems is to translate neural activity into actionable commands, thus facilitating applications in healthcare, neurorehabilitation, and human–computer interaction.
BCI technology has witnessed significant advancements, particularly in signal acquisition, processing, and real-time decoding of cognitive states. One of the earliest practical implementations utilized steady-state visual evoked potentials (SSVEPs) to achieve high information transfer rates (ITRs) in real-world settings [4]. These developments highlight the potential of BCIs not only in assisting individuals with neuromuscular impairments but also in enhancing cognitive augmentation in healthy users.
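The ITR figures commonly reported for SSVEP systems follow Wolpaw's formula, which combines the number of selectable targets, classification accuracy, and selection speed. A minimal sketch (function names are illustrative, not from any cited study):

```python
import math

def itr_bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer rate, in bits per selection."""
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance: no information transferred
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits

def itr_bits_per_minute(n_targets: int, accuracy: float,
                        selection_time_s: float) -> float:
    """Scale per-selection ITR by selections made per minute."""
    return itr_bits_per_selection(n_targets, accuracy) * 60.0 / selection_time_s
```

For instance, a hypothetical 40-target SSVEP speller at 90% accuracy with 1.25 s per selection yields roughly 200 bits/min, consistent with the order of magnitude reported for high-speed spellers.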
Despite these advancements, challenges persist in the commercialization and widespread adoption of BCIs. Ethical concerns regarding user privacy, data security, and the potential for cognitive manipulation remain critical barriers [3,5,6]. Additionally, variability in user adaptability and signal reliability necessitates further research into robust machine learning algorithms for improved accuracy and usability.
Given these considerations, this review explores the intersection of BCIs and AI-driven image segmentation for precision neurosurgery. By integrating state-of-the-art deep learning (DL) techniques with neural decoding methodologies, the aim is to enhance surgical precision, optimize real-time decision-making, and mitigate intraoperative risks [7]. This synergy represents a crucial step toward the future of intelligent neuro-interventions, where BCIs play a pivotal role in augmenting surgical outcomes and patient safety.

1.2. Advances in AI-Driven Medical Image Segmentation

Medical image segmentation has witnessed remarkable advancements with the integration of AI, particularly through DL techniques. These AI-driven methods have significantly enhanced the accuracy, efficiency, and automation of analyzing complex medical images. This section explores recent developments in AI-based segmentation across various imaging modalities, including electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), magnetoencephalography (MEG), electromyography (EMG), computed tomography (CT), electrocorticography (ECoG), magnetic resonance imaging (MRI), functional MRI (fMRI), fluorescence-guided surgery (FGS), diffusion tensor imaging (DTI), and high-density electrode arrays.
Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants have become the cornerstone of medical image segmentation. Notable architectures include the following:
  • U-Net and Variants: Initially introduced for biomedical image segmentation, U-Net has been widely adopted due to its encoder–decoder structure, which effectively captures spatial and contextual information. Variants like Attention U-Net and 3D U-Net have further improved segmentation accuracy for volumetric imaging [8].
  • Transformers in Segmentation: Vision transformers (ViTs) and Swin transformers have recently demonstrated superior performance in segmenting medical images by leveraging self-attention mechanisms to capture long-range dependencies [9].
  • Generative Adversarial Networks (GANs): GAN-based segmentation models enhance the precision of medical image delineation by generating realistic synthetic data and refining segmentation boundaries [10].
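To make the self-attention mechanism underlying ViT- and Swin-based segmentation models concrete, the following is a minimal single-head scaled dot-product attention in NumPy. Shapes and names are illustrative only (e.g., 16 image patches embedded in 32 dimensions), not taken from any cited architecture:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output token is a weighted mix of value vectors; the weights
    come from a softmax over query-key similarity (long-range dependencies)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_tokens, n_tokens)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d = 16, 32                # e.g., 16 patch embeddings of size 32
X = rng.standard_normal((n_tokens, d))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
```

Real transformer segmenters add learned projections, multiple heads, and positional information, but the core mixing operation is exactly this.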
AI-driven segmentation methods have been applied to various imaging techniques, each posing unique challenges and requiring specialized approaches:
  • EEG and MEG: While traditionally used for functional brain mapping, AI-assisted segmentation techniques now improve spatial resolution by segmenting source-localized brain activity. DL enhances artifact removal and signal interpretation [11].
  • fNIRS: AI models segment hemodynamic responses from fNIRS data, distinguishing oxygenated and deoxygenated hemoglobin concentrations to map cortical activity with higher precision.
  • EMG: AI-driven segmentation aids in the precise identification of muscle activity patterns, improving applications in neuromuscular disorder diagnosis and prosthetic control.
  • CT and MRI: CNNs and transformers play a crucial role in segmenting anatomical structures, tumors, and lesions from CT and MRI scans. Multi-modal approaches integrating PET, CT, and MRI enhance diagnostic accuracy [12,13].
  • ECoG and High-Density Arrays: AI models segment cortical activity recorded from ECoG and high-density electrode arrays, enabling more refined brain mapping for epilepsy monitoring and BCI applications [14].

1.3. Challenges and Future Directions

Despite the rapid progress, AI-driven medical image segmentation faces several challenges, including data scarcity and quality, model interpretability, and computational complexity. The limited availability of annotated medical datasets hinders model generalization. Data augmentation and self-supervised learning offer potential solutions: deep generative models have been proposed to generate realistic and diverse data that conform to the true distribution of medical images, addressing the scarcity of annotated datasets [15,16], and self-supervised learning approaches have been explored to improve model performance in few-shot medical image segmentation scenarios [17].
The black-box nature of DL models raises concerns in clinical applications. Explainable AI (XAI) methods are being explored to improve transparency; recent studies have focused on developing interpretable DL models for medical image analysis to enhance trust and facilitate clinical adoption [18]. Moreover, human-centered design guidelines have been proposed to create explainable medical imaging AI systems that align with user needs [19].
Advanced deep learning architectures require substantial computational resources, highlighting the need for optimization strategies and efficient hardware implementation. Although traditional segmentation methods offer computational efficiency and interpretability, their performance often deteriorates on complex, noisy, or highly variable medical imaging data [20]. Hybrid approaches that integrate traditional techniques with deep learning aim to strike a balance between computational efficiency and segmentation accuracy.
Future research should prioritize enhancing generalizability across diverse datasets, incorporating multi-modal imaging modalities, and developing robust, clinically interpretable AI models to advance precision neurosurgery and other medical applications.

1.4. Importance of Precision Neurosurgery

Precision neurosurgery represents a paradigm shift in the field of neurosurgical interventions, leveraging advanced imaging techniques, AI, and BCI to enhance surgical accuracy, minimize risks, and improve patient outcomes. The evolution of precision neurosurgery has been driven by the need for targeted, minimally invasive procedures that preserve neurological function while effectively treating complex conditions such as brain tumors, epilepsy, Parkinson’s disease, and neurovascular disorders [21].

1.5. Methodology and Literature Selection

The methodology for selecting the literature reviewed in this article was designed to ensure the inclusion of peer-reviewed, high-impact contributions addressing both BCI and AI-driven image segmentation within neurosurgical contexts. A systematic narrative approach was adopted, spanning publications from January 2013 to December 2024. Databases including PubMed, IEEE Xplore, Scopus, and Web of Science were queried using combinations of the keywords: “Brain-Computer Interface”, “BCI”, “AI in Neurosurgery”, “Medical Image Segmentation”, “EEG”, “Deep Learning”, “Hybrid BCI”, “Neuroimaging” and “Precision Neurosurgery”.
Priority was given to works that (i) presented novel methodologies, architectures, or hybrid systems involving BCI and AI; (ii) provided quantitative performance metrics such as DSC, ITR, latency, or accuracy; and (iii) contributed clinically actionable insights or translational frameworks relevant to neurosurgical workflows. Only articles published in English were considered.
In total, over 300 records were initially identified. After removing duplicates and screening abstracts, 129 articles were shortlisted for full-text assessment. Following a quality and relevance review, 89 primary sources were ultimately incorporated into the core synthesis. Additional sources, including recent preprints and technical documentation, were selectively included to reflect emerging trends and state-of-the-art innovations.
This methodological rigor ensures that the resulting review is both representative of current knowledge and sufficiently critical to inform future directions in intelligent neurosurgical systems.

1.6. Contributions of This Review

This review makes four distinct contributions to the emerging confluence of BCI technologies and AI-driven medical image segmentation in neurosurgical contexts. First, it offers a critical synthesis of segmentation architectures, comparing the clinical performance, computational efficiency, and implementation challenges of convolutional neural networks, transformer-based models, and hybrid architectures such as Swin UNETR and TransUNet. Second, it surveys core BCI paradigms—including motor imagery, SSVEP, and P300—with emphasis on signal acquisition techniques (EEG, fNIRS, ECoG), neural decoding algorithms, and application domains ranging from intraoperative monitoring to postoperative neurorehabilitation. Third, the paper proposes a hybrid integration model wherein real-time BCI inputs are harmonized with deep learning-based segmentation outputs, enabling closed-loop decision support systems with adaptive feedback control for enhanced surgical precision. Finally, the review identifies key translational bottlenecks including poor generalizability, data scarcity, latency limitations, and ethical concerns around autonomy and privacy, while outlining research pathways toward clinically validated, regulatory-compliant, and ethically robust neurotechnological systems. Collectively, these contributions not only consolidate current advancements but also chart a trajectory for future interdisciplinary innovation in intelligent neurosurgical interventions. This review uniquely focuses on the synergistic integration of BCI-driven cognitive decoding with AI-based medical image segmentation in neurosurgical workflows. It critically examines signal synchronization, modular alignment, and decision-feedback loops to demonstrate the clinical and architectural feasibility of closed-loop systems.

1.7. Structure of the Paper

The paper is structured in accordance with the IMRaD format to methodically present the rationale, components, and implications of integrating brain–computer interface (BCI) technologies with AI-driven image segmentation for precision neurosurgery. Section 1 introduces the context, motivation, and literature landscape, including the methodology adopted for study selection. Section 2 consolidates the core methodological foundations, encompassing five interlinked domains: neuroimaging modalities used in surgical planning and intraoperative guidance; BCI system architectures and signal acquisition pipelines; deep learning–based segmentation frameworks; hybrid integration models combining BCI and image analytics; and quantitative evaluation protocols including performance metrics and statistical validation. Section 3 synthesizes key findings and interprets them in relation to clinical translation, highlighting unresolved challenges, ethical considerations, and practical barriers to real-world deployment. Section 4 concludes the review by outlining unresolved research gaps and proposing future directions for clinically scalable, intelligent neurosurgical systems. In line with contemporary standards for systematic reviews, this paper adheres to the PRISMA 2020 framework for study selection transparency.
The methodological rigor of the review process—including search strategy, screening, and eligibility determination—is visually summarized in the PRISMA 2020 Flow Diagram presented below. Rather than applying conventional clinical bias-assessment frameworks (e.g., ROBINS-I, Cochrane RoB), which are designed for therapeutic trials, we evaluated methodological rigor through reproducibility markers—such as open dataset usage (e.g., BraTS, TCGA, OpenNeuro), availability of implementation details, and cross-validated performance metrics. Meta-analysis was not feasible due to high heterogeneity in imaging pipelines, neural decoding strategies, and evaluation criteria. GRADE-style certainty ratings were similarly inapplicable, given the engineering-oriented nature of the included studies. A tabulated exclusions list was not maintained, as screening was conducted iteratively using eligibility flags embedded in a reference management system. Records excluded during full-text review primarily failed automated filters for modality mismatch, absence of quantitative metrics, or lack of neurosurgical relevance, and were not archived as standalone entries. This systematic review was preregistered on the OSF and is publicly accessible at https://doi.org/10.17605/OSF.IO/J59CY.
As shown in Figure 1, the identification process began with a comprehensive screening of published literature across indexed databases and preprints. Following duplicate removal and abstract screening, full-text eligibility checks were performed. The diagram illustrates each stage of exclusion, including automated filtering, manual pruning, and inclusion decisions, resulting in 184 studies retained for detailed synthesis. This diagram captures the refinement of included studies from initial database identification to final eligibility after deduplication and full-text appraisal. Total records (n = 1022) were identified across PubMed, IEEE Xplore, Scopus, and Web of Science. After removing duplicates (n = 322), 700 records were screened at abstract level. Of these, 280 full-text articles were sought for retrieval; 13 could not be retrieved due to access restrictions. The remaining 267 full texts were assessed for eligibility, and 83 were excluded based on methodological irrelevance, domain mismatch, or missing quantitative metrics. A total of 184 studies were included in the final review. It may be noted that these numbers include estimates for unretrievable and excluded full texts derived from post hoc reconciliation of the reference database and documented screening logs.
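The flow counts reported above can be cross-checked with simple arithmetic; the figures at each PRISMA stage are internally consistent:

```python
# PRISMA 2020 flow counts as reported in the text
identified = 1022
duplicates = 322
screened = identified - duplicates        # abstracts screened
sought = 280                              # full texts sought for retrieval
not_retrieved = 13                        # lost to access restrictions
assessed = sought - not_retrieved         # full texts assessed for eligibility
excluded_full_text = 83                   # methodological/domain exclusions
included = assessed - excluded_full_text  # studies in the final synthesis
```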

2. Methods

This section establishes the methodological foundation of the systematic review by structuring the evidence around five core axes of technological and clinical relevance. First, we explore advanced neuroimaging modalities employed in neurosurgical contexts, focusing on their spatial and functional resolution characteristics. This is followed by a detailed examination of BCI architectures, signal acquisition pipelines, and control mechanisms, with an emphasis on clinically viable paradigms such as motor imagery, P300, and SSVEPs. The third component of the review analyzes the application of AI-driven image segmentation algorithms, including CNNs and their mathematical formulations relevant to brain image analysis. The fourth segment introduces integrative frameworks that combine BCI systems with automated image segmentation modules, offering a systems-level perspective on hybrid architectures designed for precision neurosurgery. Finally, we assess the performance evaluation techniques used across studies, including benchmarking metrics for classification accuracy, segmentation fidelity, and statistical significance testing. Throughout this section, peer-reviewed literature was selectively curated based on relevance to BCI-AI integration in neurosurgical applications, clinical feasibility, and methodological transparency.

2.1. Advanced Neuroimaging Modalities for Precision Neurosurgery

Advances in medical imaging have profoundly impacted the planning, execution, and evaluation of neurosurgical procedures. This section surveys key neuroimaging modalities that enable high-resolution anatomical and functional visualization, facilitating precise targeting and minimizing surgical risks. The techniques discussed include structural imaging tools such as MRI and CT, as well as functional methods like fMRI and PET, and optical modalities such as fNIRS.

2.1.1. Role of Neuroimaging in Precision Neurosurgery

State-of-the-art neuroimaging modalities provide critical preoperative and intraoperative insights, allowing for accurate localization of pathological regions and functional mapping of the brain. These imaging techniques include the following:
  • MRI and CT: MRI and CT scans serve as foundational tools for visualizing anatomical structures, aiding in tumor resection, and identifying vascular abnormalities [22].
  • fMRI and DTI: These modalities provide functional and connectivity-based insights, crucial for preserving eloquent brain regions during surgery [23,24].
  • ECoG and MEG: These electrophysiological imaging techniques assist in preoperative planning by identifying seizure foci and functionally significant cortical areas [25,26,27].
  • FGS: The use of fluorescence agents such as 5-ALA enhances real-time intraoperative tumor visualization, thereby improving the accuracy of surgical resection [28].
AI-powered tools and robotic-assisted systems have revolutionized neurosurgical precision by augmenting clinical decision-making, minimizing human error, and enhancing surgical dexterity. Machine learning models analyze multimodal neuroimaging data to predict optimal surgical pathways and assess risks, supporting AI-driven surgical planning [29]. Robotic systems, such as the ROSA Brain and NeuroMate, enhance stereotactic procedures by providing millimeter-level precision in electrode placement and tumor excision [30]. AI-driven segmentation models, integrated with augmented reality (AR) and virtual reality (VR), further facilitate intraoperative visualization [31]. BCIs are playing an increasingly vital role in precision neurosurgery by offering real-time feedback on neural activity and enabling direct brain–machine interactions: they assist in real-time electrophysiological monitoring, ensuring critical functional regions are preserved during resection [32], while implantable BCIs, such as Utah arrays and intracortical microelectrodes, can restore motor function in patients with spinal cord injuries or stroke [33].
While each of the above modalities contributes uniquely to surgical planning and execution, a comparative understanding of their capabilities is essential for selecting the most appropriate imaging workflow. Table 1 outlines key performance characteristics such as spatial resolution, temporal sensitivity, invasiveness, and clinical application domain of the most widely adopted modalities in precision neurosurgery.
The classification of neuroimaging techniques based on invasiveness often hinges not only on anatomical intrusion but also on physiological perturbation induced during the procedure. Modalities involving the administration of intravenous contrast agents, especially those with radiopharmaceutical or metabolic impact, occupy a nuanced position between non-invasive and fully invasive methods. In this context, the term semi-invasive is applied to procedures that, while not involving direct tissue penetration, entail systemic administration of agents that interact with biological processes beyond mere perfusion or imaging enhancement. This distinction becomes particularly relevant in modalities where the administered agents are radioactive, metabolically active, or designed to modify neural signal characteristics. Conversely, not all imaging procedures involving intravenous contrast warrant a semi-invasive classification. Techniques employing inert contrast for vascular visualization—such as standard CT or MRI with gadolinium-based agents—are often described as minimally invasive or even non-invasive in certain regulatory frameworks, provided they do not result in systemic biological alterations. Thus, the terminology reflects a spectrum rather than a binary, influenced by the route of administration, pharmacological profile, radiological impact, and procedural context. Clear delineation of these categories is essential for risk communication, regulatory compliance, and clinical decision-making.
Despite remarkable advancements, precision neurosurgery faces several challenges. Differences in brain anatomy and pathology demand highly personalized surgical approaches [34]. Also, the integration of AI and BCIs raises concerns regarding patient privacy, informed consent, and surgical liability [35]. The high cost of AI-driven surgical technologies limits widespread adoption, particularly in resource-constrained settings [36]. Precision neurosurgery, driven by advancements in neuroimaging, AI, robotics, and BCIs, is redefining the landscape of neurosurgical interventions. While significant challenges remain, ongoing research and technological innovations continue to enhance surgical accuracy, patient safety, and postoperative outcomes.

2.1.2. Clinical Relevance in Neurosurgical Practice

In neurosurgical practice, the integration of AI into medical imaging has significantly improved the precision and efficiency of clinical interventions. Neurosurgeons frequently rely on complex imaging modalities such as MRI and CT, as well as functional imaging techniques like fMRI and DTI, for preoperative planning and intraoperative guidance [37]. However, manual segmentation of these images is time-consuming and prone to inter-operator variability, which can impact surgical decision-making and patient outcomes [38]. AI-based segmentation models, particularly DL-driven approaches, have demonstrated superior accuracy in delineating pathological and functional regions while minimizing human intervention [39].
Applications of AI in Neurosurgery:
  • Brain Tumor Resection: AI-enhanced segmentation assists in accurately distinguishing tumor margins from healthy tissue, thereby reducing the risk of postoperative neurological deficits [40]. Studies have demonstrated that DL models such as CNNs and transformers outperform traditional segmentation methods in identifying tumor boundaries, leading to improved surgical planning [41].
Figure 2 demonstrates how deep learning models, such as CNNs with attention mechanisms, process different MRI sequences to achieve precise tumor boundary delineation. The Z-score normalization technique is used to enhance contrast and improve segmentation accuracy, as shown in the comparison across multiple MRI channels.
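Z-score normalization, referenced above as a contrast-harmonization step, rescales each MRI channel to zero mean and unit variance before it enters the network. A minimal NumPy sketch (per-channel handling and foreground masking, which production pipelines typically add, are simplified away here):

```python
import numpy as np

def z_score_normalize(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale one MRI volume/channel to zero mean and unit variance.

    eps guards against division by zero on constant (e.g., all-background)
    inputs.
    """
    mean = volume.mean()
    std = volume.std()
    return (volume - mean) / (std + eps)
```

In multi-sequence pipelines (T1, T1c, T2, FLAIR), this is usually applied independently per sequence so that no single channel dominates the network's input range.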
  • Deep Brain Stimulation (DBS) Planning: Accurate segmentation of subcortical structures is crucial for optimal electrode placement in DBS procedures used to treat movement disorders such as Parkinson’s disease [43]. AI-based volumetric segmentation has been shown to enhance the precision of target selection in DBS, thereby improving therapeutic outcomes [44].
Figure 3 depicts directional stimulation mapping using the distal row of segmented electrodes in anterior (A–A″), lateral (B–B″), and posterior (C–C″) orientations. The figure illustrates the current thresholds (in mA) required to elicit transient or sustained sensory side effects in the face (via the ventral posteromedial nucleus, VPM), hand (via the ventral posterolateral nucleus, VPL), and speech-related functions (via the internal capsule), alongside the stimulation levels needed for effective tremor reduction. Software-based modeling visualizes the anatomical distribution of facial (A–C), hand (A′–C′), and capsular (A″–C″) responses.
  • Epilepsy Surgery: AI-based identification of seizure foci enhances the precision of both resective and neuromodulatory treatments for epilepsy [46]. Machine learning algorithms, particularly support vector machines (SVMs) and recurrent neural networks (RNNs), have been employed to analyze intracranial EEG (iEEG) signals and detect epileptogenic zones with high accuracy [47].
Figure 4 presents segmentation outputs from multiple deep learning models, overlaid on postoperative T1-weighted MPRAGE scans for different types of epilepsy surgeries. The resection types include the following: (1) right anterior temporal lobectomy, (2) right temporal polectomy with encephalocoele disconnection, (3) left frontal corticectomy, and (4) left frontal lesionectomy. DSCs are provided for each model, demonstrating the accuracy of segmentation across various resection types. Green-highlighted regions indicate high segmentation accuracy, while lower DSC scores reflect areas where automated methods struggle with precision. This visualization underscores the potential of AI-assisted techniques in enhancing post-surgical evaluation and guiding future interventions.
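The DSC values quoted above quantify overlap between a predicted mask and the reference resection mask; the metric itself is straightforward to compute over binary NumPy arrays (a sketch, with the empty-vs-empty convention stated explicitly since it varies between toolkits):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A DSC of 1.0 indicates exact overlap; values above roughly 0.9, as reported for the best-performing models, indicate near-complete agreement with the manual reference.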
Challenges and Considerations:
Despite these advancements, several challenges must be addressed before AI can be fully integrated into neurosurgical workflows:
  • Interpretability: The “black box” nature of many AI models remains a significant barrier to clinical adoption. To improve transparency, XAI approaches such as attention mechanisms and saliency maps are being explored to provide visual interpretability of AI-generated segmentations [49]. These techniques enhance clinician trust and facilitate regulatory approval [50].
  • Regulatory Approvals: AI-driven medical imaging tools require rigorous validation and approval from regulatory bodies, such as clearance by the U.S. Food and Drug Administration (FDA) or CE (Conformité Européenne) marking in the European Union, before they can be deployed in clinical settings [51]. Regulatory frameworks are continually evolving to address concerns related to data privacy, bias, and reliability.
  • Intraoperative Validation: Real-time validation of AI-generated segmentations during surgery remains a challenge. AI must seamlessly integrate with intraoperative imaging systems, such as neuronavigation platforms, to ensure reliable guidance during neurosurgical procedures [52,53]. Additionally, AR and AI-assisted robotics are emerging as potential solutions for improving intraoperative accuracy [54].
The collaboration between medical practitioners and AI researchers is essential for refining these technologies and ensuring their practical deployment in neurosurgical practice. By bridging the gap between computational advancements and real-world clinical applications, AI-driven segmentation and BCI-based neurosurgical interventions have the potential to significantly improve patient outcomes [55].

2.2. Brain–Computer Interfaces: Principles and Applications

BCIs enable direct communication pathways between neural activity and external devices, bypassing conventional motor output channels. Their integration in neurosurgical workflows offers real-time insights into cortical function, enabling both intraoperative monitoring and postoperative rehabilitation. This section outlines the foundational principles of BCI systems and examines their application across various acquisition paradigms, including EEG, fNIRS, and ECoG.

2.2.1. Fundamentals of BCIs

BCIs are systems that enable direct communication between the brain and external devices by bypassing conventional neuromuscular pathways. BCIs operate by detecting, processing, and translating neural signals into commands that can control computers, prosthetic limbs, communication devices, and other assistive technologies [56]. The fundamental components of a BCI system include the following:
Signal Acquisition
Signal acquisition in BCIs relies on a range of neuroimaging and electrophysiological methods to accurately capture brain activity and translate it into actionable commands. EEG remains the most widely used non-invasive approach due to its high temporal resolution, portability, and affordability, making it suitable for real-time BCI applications [57]. However, EEG suffers from limited spatial resolution due to signal attenuation caused by scalp and skull interference. MEG overcomes some of these limitations by measuring the magnetic fields produced by neuronal activity, providing better spatial resolution than EEG, although its high cost and sensitivity to environmental noise restrict its practical implementation [58]. fNIRS, a non-invasive optical imaging technique, detects cerebral hemodynamic responses and has proven useful for monitoring cognitive states in BCI applications, particularly in scenarios where traditional electrophysiological methods are impractical [59]. For applications requiring higher spatial resolution, ECoG presents a viable semi-invasive alternative, wherein electrodes are placed directly on the cortical surface, providing a balance between high-resolution recordings and reduced signal attenuation compared to non-invasive approaches [60]. For the highest level of precision and direct neuronal activity capture, invasive methods such as single-unit and multi-unit recordings involve implanting microelectrodes to detect action potentials from individual neurons. These approaches offer unparalleled accuracy and control, making them ideal for high-performance BCIs, but they also pose significant surgical risks and biocompatibility challenges [61]. The continuous evolution of these signal acquisition technologies is essential for improving the efficiency and reliability of BCIs in clinical and assistive applications.
Signal Processing and Feature Extraction
Signal processing and feature extraction play a crucial role in BCIs by refining raw neural signals and isolating meaningful patterns from noise and redundant data. The initial stage, preprocessing, involves noise filtering, artifact removal, and baseline correction to enhance the clarity and reliability of neural signals. Common techniques include adaptive filtering, independent component analysis (ICA), and wavelet transforms, which help mitigate interference from muscle movements, eye blinks, and environmental electrical noise [62]. Each preprocessing and feature extraction technique comes with its own set of trade-offs. For example, ICA is highly effective in isolating ocular and muscle artifacts but assumes linear mixing and statistical independence, which may not always hold in complex BCI scenarios. Wavelet transforms provide excellent time-frequency resolution and are well-suited for non-stationary EEG signals, yet they can be computationally intensive and sensitive to the choice of the mother wavelet. In contrast, adaptive filtering is lightweight and effective for suppressing line noise but is limited in handling non-linear artifacts. On the feature learning front, CNNs excel at extracting spatial features and hierarchical patterns from EEG signals, especially in motor imagery and ERP tasks, but may require large datasets and struggle with temporal dependencies. RNNs, particularly LSTMs, are designed to capture sequential dynamics and perform well in time-sensitive decoding tasks, though they are prone to vanishing gradient issues and are less interpretable than traditional models. Recent studies have shown that hybrid architectures, such as wavelet-transformed EEG fed into CNN-RNN pipelines, often outperform single-model setups, offering both temporal sensitivity and robust classification accuracy.
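The band-isolation step described above can be sketched compactly. The following is a minimal, illustrative NumPy example of frequency-domain band-pass filtering on a synthetic signal; all parameters (sampling rate, band edges, noise amplitude) are invented for demonstration, and a production pipeline would instead combine properly designed FIR/IIR filters with ICA-based artifact rejection.

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Crude frequency-domain band-pass: zero all spectral bins
    outside [f_lo, f_hi] and transform back."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spectrum * mask, n=len(x))

# Synthetic "EEG": a 10 Hz alpha rhythm contaminated by 50 Hz line noise.
fs = 250.0
t = np.arange(250) / fs
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 50 * t)

# Keep only the alpha band (8-13 Hz); the line-noise component vanishes.
filtered = fft_bandpass(noisy, fs, 8.0, 13.0)
```

Masking FFT bins is a blunt instrument (it can ring at segment edges), but it makes the principle of isolating a frequency band of interest concrete.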
Once the signals are cleaned, feature extraction methods identify key neural patterns such as event-related potentials (ERPs), spectral power changes, and phase synchronization, which serve as distinguishing characteristics for BCI applications. These features provide insight into cognitive states and motor intentions, enabling accurate interpretation of brain activity [63]. Finally, classification and machine learning algorithms transform extracted features into actionable commands. Advances in artificial intelligence, particularly DL models, have significantly improved the accuracy and adaptability of BCI systems by leveraging CNNs, RNNs, and hybrid architectures that dynamically adapt to individual users. These methods enhance the decoding of neural signals in real time, improving system performance across various applications, from neurorehabilitation to assistive technologies [64]. Typical BCI pipelines rely on the extraction of robust neural features to enable effective classification and control. These features include time-domain metrics such as amplitude variance, root mean square, and Hjorth parameters; frequency-domain characteristics such as power spectral density (PSD) across canonical EEG bands (delta, theta, alpha, beta, and gamma); and statistical descriptors including entropy, skewness, and kurtosis. In more advanced systems, connectivity-based measures like coherence and phase-locking value (PLV), as well as temporal markers such as event-related potentials (e.g., P300, MRCPs), are also employed. The selection of feature types is generally determined by the intended application (e.g., motor decoding, attention monitoring), latency constraints, and compatibility with downstream classification algorithms. The integration of robust signal processing pipelines with advanced AI models continues to drive the evolution of BCIs, enhancing their usability, reliability, and responsiveness.
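A few of the features listed above, namely Hjorth parameters and band-specific spectral power, can be computed in pure NumPy. The sketch below does so for a synthetic 10 Hz alpha oscillation; the signal and sampling rate are arbitrary choices for illustration, not values from any cited study.

```python
import numpy as np

def hjorth(x):
    """Hjorth activity (variance), mobility, and complexity of a signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_power(x, fs, f_lo, f_hi):
    """Summed periodogram power inside a frequency band."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return psd[(freqs >= f_lo) & (freqs <= f_hi)].sum()

fs = 250.0
t = np.arange(1000) / fs
alpha_wave = np.sin(2 * np.pi * 10 * t)        # pure 10 Hz oscillation

act, mob, comp = hjorth(alpha_wave)
alpha_p = band_power(alpha_wave, fs, 8, 13)    # should dominate
beta_p = band_power(alpha_wave, fs, 14, 30)    # should be near zero
```

For a pure sinusoid, mobility approximates the angular frequency per sample and complexity approaches 1, which is one reason these descriptors are cheap yet informative for real-time pipelines.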
Control and Feedback Mechanisms
BCI systems rely on decoded neural signals to control external devices, creating a closed-loop interaction between the user and the system. This control loop consists of three key components: translation algorithms, output devices, and feedback mechanisms.
The first step in this process is the application of translation algorithms, which map extracted neural features into specific commands. These algorithms employ machine learning techniques to classify neural patterns and associate them with intended actions. Various classification methods, such as linear classifiers, SVMs, and DL approaches, are used to optimize translation accuracy and minimize false activations [65]. Once the neural intent is decoded, output devices execute the corresponding commands. These devices range from robotic arms and exoskeletons to VR environments and communication interfaces. For instance, AR-integrated BCIs enable users to interact with virtual objects through brain signals, paving the way for enhanced human–computer interaction [66,67]. Similarly, P300-based spellers have been developed to facilitate communication for individuals with severe motor disabilities by allowing them to select letters using brain activity alone [68]. Other implementations include drone control via BCI systems, demonstrating the adaptability of neural signal translation across multiple domains.
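A translation algorithm of the linear-classifier family mentioned above can be illustrated with a minimal two-class Fisher discriminant on synthetic features. This is a didactic sketch under invented feature distributions, not the implementation of any system cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lda(X0, X1):
    """Two-class Fisher discriminant: w = pooled_cov^-1 (mu1 - mu0),
    with the decision threshold placed midway between class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = (np.cov(X0.T) + np.cov(X1.T)) / 2.0
    w = np.linalg.solve(pooled, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def predict(w, b, X):
    """Map feature vectors to binary commands (0 or 1)."""
    return (X @ w + b > 0).astype(int)

# Invented 2-D band-power features for two imagined-movement classes.
X0 = rng.normal(loc=[1.0, 2.0], scale=0.4, size=(200, 2))
X1 = rng.normal(loc=[2.0, 1.0], scale=0.4, size=(200, 2))

w, b = fit_lda(X0, X1)
acc = np.concatenate([predict(w, b, X0) == 0,
                      predict(w, b, X1) == 1]).mean()
```

Linear discriminants remain a common baseline in BCI translation because they are fast, need little training data, and produce few false activations when class distributions are well separated.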
A crucial aspect of BCIs is the incorporation of feedback mechanisms, which provide real-time sensory feedback to users. This feedback enhances system accuracy and adaptability by leveraging neuroplasticity-driven learning. In EEG-based BCIs, visual, auditory, or haptic feedback is commonly used to guide users in adjusting their neural activity for improved control [69]. For example, when users receive real-time feedback on their brain signal modulation, they can refine their cognitive strategies to enhance performance. This iterative learning process is particularly valuable in neurorehabilitation applications, where BCI systems aid motor recovery by reinforcing brain–muscle coordination [70].

2.2.2. BCI Paradigms

BCIs operate through various paradigms that define how users generate neural signals to control external devices. These paradigms leverage distinct neural responses, enabling BCI applications in communication, rehabilitation, and assistive technologies.
Motor Imagery (MI)
MI-based BCIs rely on the user’s ability to imagine specific movements, such as hand or foot movements, without executing them physically. This mental rehearsal induces characteristic changes in brain activity, particularly sensorimotor rhythm desynchronization in the primary motor and somatosensory cortices [71]. When a user imagines moving their right hand, for instance, neural activity in the corresponding sensorimotor area exhibits a reduction in oscillatory power, known as event-related desynchronization (ERD), while contralateral areas may show an increase in oscillatory activity, termed event-related synchronization (ERS). These distinct patterns allow BCIs to classify MI tasks and translate them into control commands for prosthetic limbs, wheelchairs, or virtual avatars. Advances in wearable EEG technology have further enhanced MI-based BCIs, improving their usability and accessibility in real-world applications [72].
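The ERD measure described above is typically expressed as a percentage drop in band power relative to a resting baseline. The hypothetical NumPy sketch below illustrates the computation on synthetic signals; the 50% amplitude suppression is an invented value chosen to make the arithmetic transparent.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return psd[(freqs >= f_lo) & (freqs <= f_hi)].sum()

def erd_percent(baseline, task, fs, f_lo=8.0, f_hi=13.0):
    """ERD% = (P_baseline - P_task) / P_baseline * 100 in the mu band."""
    p_base = band_power(baseline, fs, f_lo, f_hi)
    return 100.0 * (p_base - band_power(task, fs, f_lo, f_hi)) / p_base

fs = 250.0
t = np.arange(500) / fs
baseline = np.sin(2 * np.pi * 10 * t)      # resting mu rhythm
task = 0.5 * np.sin(2 * np.pi * 10 * t)    # rhythm suppressed during imagery

erd = erd_percent(baseline, task, fs)      # halving amplitude quarters power
```

Because power scales with the square of amplitude, halving the mu-rhythm amplitude yields a 75% ERD, which is why even modest desynchronization produces a robust classification feature.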
P300 Event-Related Potential (ERP)
The P300-based BCI paradigm capitalizes on the brain’s involuntary response to salient stimuli. When a user perceives a rare or meaningful stimulus within a series of non-target stimuli, a positive deflection in EEG activity occurs approximately 300 milliseconds after stimulus onset. This response, known as the P300 component, is widely used in speller interfaces, where users focus on desired letters while the system detects P300 responses to identify their intended selections [73]. P300-based BCIs are particularly beneficial for individuals with severe motor impairments, such as amyotrophic lateral sclerosis (ALS), providing them with an alternative means of communication through brain activity alone.
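Because the P300 is small relative to background EEG, spellers typically average many stimulus-locked epochs before comparing target and non-target responses. The sketch below demonstrates this averaging principle on synthetic data; the Gaussian "P300 template", noise level, and trial counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 250.0
t = np.arange(150) / fs                      # 600 ms epochs

# Hypothetical P300 template: positive deflection peaking near 300 ms.
template = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

targets = template + rng.normal(0, 2.0, size=(40, t.size))  # 40 target epochs
nontargets = rng.normal(0, 2.0, size=(40, t.size))          # 40 non-targets

def window_mean(avg_epoch, t, lo=0.25, hi=0.40):
    """Mean amplitude of an averaged epoch in the 250-400 ms window."""
    sel = (t >= lo) & (t <= hi)
    return avg_epoch[sel].mean()

# Averaging across trials suppresses noise and exposes the P300.
target_amp = window_mean(targets.mean(axis=0), t)
nontarget_amp = window_mean(nontargets.mean(axis=0), t)
```

Averaging N trials reduces the noise standard deviation by roughly a factor of sqrt(N), which is the core trade-off between speller accuracy and communication speed.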
Steady-State Visual Evoked Potentials (SSVEPs)
SSVEP-based BCIs exploit the brain’s response to periodic visual stimulation. When a user fixates on a flickering light source with a specific frequency, their occipital lobe generates EEG signals that oscillate at the same frequency or its harmonics. This frequency-locked neural response allows for highly efficient BCI control, as the system can rapidly identify the frequency the user is attending to and infer their intended command [74]. Due to their high signal-to-noise ratio and minimal training requirements, SSVEP-based BCIs are widely used for high-speed communication, gaming, and environmental control systems. Each BCI paradigm offers unique advantages and is suited for different applications, from assistive communication devices to neurorehabilitation and augmented reality control. The selection of a specific paradigm depends on factors such as signal reliability, user comfort, and system robustness in real-world environments.
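The frequency-locked nature of the SSVEP response makes target identification amenable to simple spectral analysis. Practical systems often use canonical correlation analysis, but the hedged sketch below conveys the idea with a plain spectral-peak picker over hypothetical candidate frequencies, scoring the fundamental plus its second harmonic.

```python
import numpy as np

def detect_ssvep(x, fs, candidates):
    """Return the candidate flicker frequency with the most spectral
    power, summing the fundamental and its second harmonic."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def power_near(f, tol=0.3):
        return psd[np.abs(freqs - f) <= tol].sum()

    scores = [power_near(f) + power_near(2.0 * f) for f in candidates]
    return candidates[int(np.argmax(scores))]

fs = 250.0
t = np.arange(1000) / fs
# Simulated occipital response locked to a 12 Hz flicker, weak harmonic.
x = np.sin(2 * np.pi * 12 * t) + 0.3 * np.sin(2 * np.pi * 24 * t)

chosen = detect_ssvep(x, fs, [8.0, 10.0, 12.0, 15.0])
```

Including harmonics in the score improves robustness, since SSVEP energy is distributed across the fundamental and its multiples.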

2.2.3. Neuroimaging Modalities for BCI

BCIs rely on diverse neuroimaging techniques to capture neural activity and translate it into actionable commands. These modalities vary in invasiveness, spatial and temporal resolution, signal type, and application scope [75,76]. This section outlines electrophysiological, hemodynamic, metabolic, and hybrid neuroimaging techniques used in BCI systems.
Electrophysiological Modalities
Electrophysiological modalities measure electrical brain activity with high temporal precision, making them essential for real-time BCI applications. These techniques facilitate rapid detection of neural signals, enabling efficient communication and control systems.
EEG
EEG is one of the most widely used electrophysiological modalities in BCI research due to its non-invasiveness, affordability, and high temporal resolution. It operates by recording voltage fluctuations on the scalp that result from synchronized neuronal activity, enabling real-time neural signal acquisition. EEG offers a temporal resolution in the millisecond range, making it highly suitable for applications requiring rapid signal processing, such as MI-based BCIs, P300 spellers, and SSVEP-driven systems [77]. Despite these advantages, EEG suffers from certain limitations, particularly its poor spatial resolution, which ranges between 10 and 50 mm, making precise source localization challenging. Moreover, EEG signals are highly susceptible to noise and artifacts caused by muscle activity, eye movements, and external electrical interference, necessitating advanced denoising and artifact rejection techniques. In the context of BCI applications, EEG-based MI systems leverage neural desynchronization patterns to facilitate control of prosthetic devices and communication interfaces [78], while P300-based BCIs exploit event-related potentials elicited in response to target stimuli, commonly used in speller systems [79]. Additionally, SSVEP-based BCIs rely on periodic visual stimulation to generate robust and high-speed control commands. Despite these challenges, EEG remains a fundamental tool in BCI research, with ongoing efforts focused on improving signal acquisition methods, refining feature extraction algorithms, and integrating multimodal approaches to enhance its reliability and usability in both clinical and non-clinical settings.
ECoG
ECoG is an invasive electrophysiological technique that records electrical activity directly from the cortical surface using subdural electrode grids. Unlike EEG, which measures neural activity through the scalp, ECoG electrodes are implanted beneath the dura mater, providing significantly improved spatial resolution, often reaching approximately 1 mm [80]. This enhanced spatial precision allows for more accurate localization of neural signals, leading to higher signal stability and reduced susceptibility to artifacts caused by muscle movement or environmental interference. Due to these advantages, ECoG has been extensively explored for high-performance neuroprosthetic control, enabling individuals with motor impairments to execute precise motor commands through direct cortical signal processing. Furthermore, its application in speech BCIs has shown promising results in assisting individuals with locked-in syndrome by decoding cortical activity associated with speech production and translating it into real-time communication outputs [79]. Despite these benefits, the necessity for surgical implantation presents a significant drawback, as it inherently carries risks of infection, inflammation, and long-term biocompatibility concerns. Nevertheless, ongoing research aims to refine ECoG-based BCIs by developing minimally invasive electrode implantation techniques and optimizing signal processing algorithms to enhance both safety and efficacy in clinical and assistive applications.
LFPs
LFPs are electrophysiological signals that measure synaptic activity from neuronal populations using intracortical electrodes. Unlike EEG, which records electrical activity from the scalp, or ECoG, which captures signals from the cortical surface, LFPs are obtained directly from within the brain, allowing for the detection of lower-frequency oscillations generated by local neuronal assemblies [80]. Due to their proximity to neural sources, LFPs offer significantly higher signal fidelity and decoding accuracy compared to non-invasive and subdural methods, making them highly suitable for applications requiring fine-grained neural control. This high-quality signal acquisition has led to their extensive use in neuroprosthetic limb control, where LFP-based BCIs enable individuals with motor impairments to achieve precise movement execution through direct neural interfacing. Additionally, LFPs play a crucial role in adaptive neurorehabilitation, as their detailed capture of neural dynamics allows for real-time feedback systems that can enhance neuroplasticity and promote motor recovery in patients with neurological disorders [79]. However, despite these advantages, the invasive nature of LFP acquisition necessitates chronic electrode implantation, which carries inherent surgical risks, including infection and long-term stability issues. Current research focuses on refining electrode materials and implantation techniques to improve biocompatibility, reduce immune responses, and extend the longevity of LFP-based interfaces for both clinical and assistive applications.
Single-Unit and Multi-Unit Recordings
Single-unit and multi-unit recordings involve the capture of action potentials from individual neurons using microelectrodes, offering the highest spatial resolution among all electrophysiological modalities (~10 µm). By directly measuring the electrical activity of neurons, this technique provides precise neural decoding, enabling detailed analysis of neural circuit dynamics [81]. Due to their exceptional accuracy, single-unit and multi-unit recordings are particularly valuable in high-precision robotic arm control, allowing individuals with severe motor impairments to perform complex, fine-motor tasks with neuroprosthetic devices [82,83]. Additionally, these recordings play a crucial role in sensory feedback integration, where neural signals are used to restore tactile perception in brain-controlled prosthetic limbs. This advancement significantly enhances the user’s ability to interact with their environment by providing real-time sensory input alongside motor control [84]. However, despite their advantages, single-unit and multi-unit recordings remain highly invasive, requiring intracortical microelectrode implantation. This introduces challenges such as long-term stability issues, neuronal loss around electrode sites, and immune responses that can degrade signal quality over time. Ongoing research aims to develop biocompatible materials and advanced neural interface technologies to extend the longevity and reliability of these systems, making them more viable for long-term clinical applications.
Hemodynamic and Metabolic Modalities
Hemodynamic and metabolic modalities are neuroimaging techniques that assess neural activity by measuring changes in blood flow, oxygenation, or metabolic processes within the brain. These methods are based on the principle that active brain regions require increased oxygen and energy, leading to localized vascular and metabolic changes.
fNIRS
fNIRS is a non-invasive neuroimaging technique that measures changes in oxygenated and deoxygenated hemoglobin using infrared light, providing insights into cortical activity. The technique relies on the absorption properties of near-infrared light to detect hemodynamic responses associated with neural activation, making it particularly valuable for studying cognitive and affective states [85]. One of the primary advantages of fNIRS is its portability, allowing for real-world applications such as cognitive workload monitoring and mental state classification. Additionally, its spatial resolution (approximately 1–3 cm) surpasses that of EEG, making it a viable alternative for certain BCI applications [86]. However, fNIRS has inherent limitations, including low temporal resolution (~1–2 s) and susceptibility to artifacts caused by scalp and skull thickness, which can affect signal reliability. Recent advancements have integrated fNIRS with EEG to create hybrid systems that enhance classification accuracy and robustness in mental state detection. This multimodal approach leverages the complementary strengths of both techniques, combining fNIRS’s hemodynamic insights with EEG’s high temporal resolution, resulting in improved BCI performance for applications such as neurorehabilitation and cognitive workload assessment.
fMRI
fMRI is a powerful neuroimaging technique that measures BOLD signals to infer underlying neuronal activity. This technique capitalizes on changes in cerebral blood flow and oxygenation, offering unparalleled spatial resolution (~1 mm) and whole-brain coverage [87]. Due to its ability to capture deep cortical and subcortical structures, fMRI has become a critical tool for studying cognitive functions, neural disorders, and BCI applications.
One of the most promising applications of fMRI-BCI is neurofeedback training, where individuals learn to modulate their own brain activity through real-time feedback. This approach has been explored for rehabilitative purposes, including restoring brain function in substance use disorders and psychiatric conditions. Additionally, fMRI-based brain-state decoding has shown potential in psychiatric and neurological applications, enabling the identification of altered brain states in conditions such as disorders of consciousness and schizophrenia [88]. However, fMRI’s poor temporal resolution (~2–6 s), high costs, and requirement for participant immobility limit its practical use in real-time BCI systems.
Despite these challenges, ongoing research aims to refine fMRI-BCI paradigms by integrating machine learning techniques for more accurate and rapid brain-state classification. The combination of fMRI with other modalities, such as EEG or fNIRS, may further enhance its applicability in real-world neurotechnology.
MEG
MEG is an advanced neuroimaging modality that detects the weak magnetic fields generated by neuronal electrical activity. Unlike EEG, which measures voltage fluctuations on the scalp, MEG captures direct neuronal activity with high temporal resolution (~1 ms) and improved spatial localization (~3–5 mm) due to the minimal distortion of magnetic fields by the skull and scalp [89].
One of the primary applications of MEG in BCIs is MI-based control systems. MEG-BCIs leverage motor-related cortical activity to facilitate hands-free interaction with external devices, showing promise in neurorehabilitation and assistive technology. Additionally, real-time neurofeedback applications using MEG enable users to modulate their brain activity for cognitive and clinical interventions, including attention training and psychiatric therapy [90].
However, the widespread adoption of MEG-BCIs is constrained by its high cost, the necessity of a magnetically shielded environment, and the limited portability of current MEG systems. Advances in optically pumped magnetometers (OPMs) are being explored to overcome these challenges, potentially enabling wearable MEG technology that preserves its high spatial and temporal resolution. Integrating MEG with EEG or fNIRS may further enhance the robustness and accessibility of BCI applications.
Emerging and Hybrid Modalities
To improve accuracy and usability, hybrid BCI systems integrate multiple neuroimaging modalities, leveraging the strengths of each while mitigating their respective limitations. By combining different signal acquisition techniques, these systems provide a more comprehensive understanding of neural activity, enhancing both spatial and temporal resolution.
EEG-fNIRS Hybrid Systems
EEG-fNIRS hybrid systems integrate the high temporal resolution of EEG with the improved spatial resolution of fNIRS. EEG detects rapid neural oscillations, while fNIRS provides information on hemodynamic responses associated with neuronal activity. The complementary nature of these modalities makes EEG-fNIRS particularly useful for applications requiring both fine-grained temporal insights and spatially localized brain activity measurements [91]. EEG-fNIRS multi-modality has been employed for real-time emotion classification by capturing both electrophysiological and hemodynamic changes linked to affective states. This has significant implications for affective computing and assistive technologies. The combination of EEG’s rapid detection of cognitive fluctuations and fNIRS’s ability to measure sustained metabolic responses enhances mental workload monitoring. This is particularly useful in fields such as human–computer interaction, aviation, and neuroergonomics [92].
EEG-fMRI Hybrid Systems
EEG-fMRI hybrid systems merge the high spatial precision of fMRI with the millisecond-level temporal resolution of EEG, allowing researchers to simultaneously track both fast neural dynamics and precise anatomical localization of brain activity. This synergy is particularly beneficial for investigating brain-state transitions and neurological disorders [93]. EEG-fMRI is widely used in studying consciousness, epilepsy, schizophrenia, and other neuropsychiatric conditions. The integration of neural oscillations from EEG with fMRI’s whole-brain activity maps helps in identifying abnormal brain networks underlying these disorders [94].
Invasive Hybrid BCIs
Invasive BCIs utilize electrode arrays implanted directly on or within the brain tissue, offering unparalleled signal fidelity and precision. Hybrid configurations combine different invasive recording techniques to optimize motor control and sensory feedback. ECoG and LFP recordings enhance BCI performance by capturing both cortical surface activity and deeper subcortical signals. This dual-layer approach has shown promise in motor rehabilitation and assistive device control [95]. Hybrid single-unit recordings and ECoG enable high-precision decoding of fine motor movements and sensory feedback integration, improving prosthetic control and neurostimulation therapies. These systems are being explored for applications in spinal cord injury and locked-in syndrome patients [96,97,98].
The paradigms summarized in Table 2 reflect the spectrum of BCI systems currently applied or explored in neurosurgical contexts. While exogenous paradigms such as P300 and SSVEP offer high classification accuracy and minimal training overhead, their dependence on external stimuli can limit intraoperative flexibility. In contrast, endogenous paradigms like motor imagery enable more intuitive, self-driven control but require extensive training and are subject to inter-individual variability. Hybrid approaches, including EEG-fNIRS and EEG-fMRI systems, offer enhanced signal robustness and classification performance by combining complementary spatial and temporal characteristics, though at the cost of increased system complexity. Invasive modalities such as ECoG and LFP provide superior signal fidelity and resolution but remain constrained by ethical, surgical, and regulatory considerations. The clinical viability of each paradigm depends on factors such as intended control precision, procedural context, patient condition, and system latency. Consequently, integration strategies in hybrid BCI-AI systems must be tailored to match both the operational demands of neurosurgical workflows and the physiological realities of neural signal acquisition.

2.2.4. Latest Developments

Advancements in multimodal BCI integration, wireless brain implants, and AI-enhanced neuroimaging are expected to improve BCI performance, accessibility, and real-world applications [99,100,101]. The integration of hybrid mathematical models, self-supervised learning, and neuro-symbolic AI is expected to further advance BCI performance and generalizability [102,103]. BCIs have emerged as a transformative technology in neurosurgery, offering real-time brain activity monitoring, neurofeedback for intraoperative precision, and assistive solutions for motor rehabilitation. This section examines the diverse applications of BCI in neurosurgical settings, focusing on preoperative planning, intraoperative monitoring, and postoperative rehabilitation. BCI-based neuroimaging techniques assist neurosurgeons in mapping functional areas of the brain, identifying eloquent cortical regions, and predicting surgical outcomes. AI algorithms facilitate precise delineation of brain tumors and critical anatomical structures, aiding in meticulous preoperative planning. For instance, fully automatic brain tumor segmentation techniques have been developed for 3D evaluation in augmented reality environments, enhancing the accuracy of surgical interventions [104]. AI-driven AR systems overlay critical information onto the surgeon’s field of view, improving real-time decision-making during procedures. AR technology was adopted early in neurosurgery, amplifying user perception by integrating virtual content into the tangible world and displaying it simultaneously and in real time [105]. ECoG-guided functional mapping enables real-time localization of motor and language areas during neurosurgical procedures. In contrast, EEG-based source localization offers a non-invasive approach for mapping epileptic foci and assessing cortical function [106]. fMRI-BCI integration enhances pre-surgical brain mapping by combining hemodynamic and electrophysiological data.
BCI assists in distinguishing between functional and pathological brain tissue, optimizing resection margins while preserving critical cortical areas [107]. Emerging systems link real-time EEG or ECoG decoding to segmentation adjustments during surgery, allowing modulation of resection zones based on intraoperative neural feedback.
BCI-driven intraoperative neuromonitoring (IONM) plays a crucial role in ensuring surgical precision and minimizing neurological deficits. BCI-controlled somatosensory evoked potentials (SSEPs) provide continuous feedback on spinal cord integrity [108]. Motor-evoked potentials (MEPs) track motor pathway function, aiding in tumor resection and deep brain stimulation (DBS) procedures [109]. Adaptive BCI-guided cortical stimulation modulates brain activity to prevent functional deterioration [110]. AI-enhanced BCI systems detect and correct intraoperative neural disturbances in real time [111]. BCI technology extends beyond surgery, enabling motor recovery, cognitive training, and neuro-prosthetic control for patients with neurological deficits. BCI-based neurorehabilitation utilizes motor imagery and real-time feedback to enhance neuroplasticity [112]. Exoskeletons and robotic prosthetics controlled by BCIs facilitate limb function recovery [113,114,115]. BCI-driven speech synthesis aids in communication for patients with severe speech impairment [116]. Neurofeedback therapy assists in cognitive recovery post-surgery. Future directions in BCI-guided neurosurgery include hybrid BCI models integrating fNIRS, EEG, and ECoG for enhanced neurosurgical guidance [117], alongside wireless and non-invasive BCIs for portable neuromonitoring.

2.3. AI-Driven Brain Image Segmentation: State-of-the-Art

AI-based segmentation methods have become central to modern neurosurgical workflows, offering automated delineation of brain structures with high spatial precision and computational efficiency. This section reviews the dominant segmentation architectures in current use, including CNNs, transformers, and hybrid models, with emphasis on performance metrics, neurosurgical applicability, and system limitations.

2.3.1. Machine Learning and Deep Learning in Image Segmentation

The integration of ML and DL methodologies into medical image segmentation has fundamentally transformed neurosurgical planning and intervention. Historically, segmentation was performed manually by radiologists and neurosurgeons, a process that is both labor-intensive and susceptible to intra- and inter-observer variability [118]. The advent of AI-driven segmentation, particularly with CNNs, has significantly enhanced segmentation accuracy, efficiency, and reproducibility [119].
DL architectures, including U-Net, DeepLabV3+, and vision transformer models, have demonstrated superior capabilities in extracting hierarchical features from MRI and CT scans, facilitating the precise delineation of tumors, white matter lesions, and other anatomically significant brain structures [120,121]. Hybrid approaches that combine CNNs with transformer-based architectures have been developed to improve global contextual awareness while preserving high-resolution local features. Furthermore, DL models trained on extensive, diverse datasets exhibit strong generalization capabilities across different imaging modalities, thereby reducing dependence on manual annotations and enhancing clinical applicability [122].

2.3.2. Mathematical Formulation of CNN-Based Segmentation

Mathematically, CNN-based segmentation can be framed as a pixel-wise classification or regression task. Given an input brain scan $X$ and the corresponding ground-truth segmentation mask $Y$, the objective is to train a function $f_\theta$ such that
$$\hat{Y} = f_\theta(X)$$
where $X$ denotes the input brain image, $Y$ is the corresponding ground-truth mask, $\hat{Y}$ represents the predicted segmentation mask, and $\theta$ denotes the learnable parameters of the network.
Loss functions play a crucial role in optimizing segmentation models. Commonly employed loss functions include the following:
  • Cross-entropy loss, used for pixel-wise classification:
$$L_{CE} = -\sum_i \left[ Y_i \log \hat{Y}_i + (1 - Y_i) \log(1 - \hat{Y}_i) \right]$$
where $Y_i$ indicates the true binary label for pixel $i$ and $\hat{Y}_i$ is the predicted probability for that pixel. Cross-entropy loss penalizes incorrect classifications at each pixel, making it suitable for binary segmentation tasks.
  • Dice loss, which quantifies the degree of overlap between the predicted and true segmentation masks:
$$L_{Dice} = 1 - \frac{2 \sum_i Y_i \hat{Y}_i}{\sum_i Y_i + \sum_i \hat{Y}_i}$$
where $Y_i$ and $\hat{Y}_i$ represent the ground truth and predicted values at pixel $i$, respectively. Dice loss focuses on the spatial overlap between prediction and ground truth, making it effective for handling class imbalance.
  • Focal loss, designed to mitigate class imbalance by down-weighting easily classified examples:
$$L_{Focal} = -\sum_i \alpha \, (1 - \hat{Y}_i)^{\gamma} \, Y_i \log(\hat{Y}_i)$$
where $\alpha$ is a weighting factor to balance class contributions, $\gamma$ is the focusing parameter that suppresses well-classified examples, and the rest follows the same notation as above. This loss is particularly effective in scenarios with severe foreground-background imbalance.
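These loss functions can be written directly in NumPy. The sketch below evaluates them on an invented six-pixel mask to make their behavior concrete; a training framework such as PyTorch would supply differentiable equivalents, and the example values are not drawn from any cited study.

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-7):
    """Pixel-wise binary cross-entropy, summed over the image."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def dice_loss(y, y_hat, eps=1e-7):
    """1 - Dice coefficient between prediction and ground truth."""
    inter = np.sum(y * y_hat)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y) + np.sum(y_hat) + eps)

def focal_loss(y, y_hat, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights pixels that are already well classified."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.sum(alpha * (1 - y_hat) ** gamma * y * np.log(y_hat))

y = np.array([0, 0, 1, 1, 1, 0], dtype=float)      # toy ground-truth mask
good = np.array([0.05, 0.1, 0.9, 0.95, 0.9, 0.1])  # close to the mask
bad = np.array([0.6, 0.5, 0.3, 0.4, 0.2, 0.5])     # mostly wrong
```

A perfect prediction drives the Dice loss to zero, while the focal term shrinks the contribution of confidently correct pixels, which is why it helps when tumor voxels are vastly outnumbered by background.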
Training optimization techniques, such as stochastic gradient descent (SGD) and Adam, are employed to iteratively update network parameters and minimize the loss function, ensuring convergence to an optimal segmentation solution.
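The role of the optimizer can be illustrated with plain full-batch gradient descent on a pixel-wise logistic model; Adam adds adaptive per-parameter step sizes but follows the same update loop. All data below are synthetic, and the feature and weight values are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "scan": 500 pixels with 3 features each; the true mask is a
# (noisy) linear function of the features.
X = rng.normal(size=(500, 3))
true_w = np.array([2.0, -1.0, 0.5])
Y = (X @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ce_loss(w):
    p = np.clip(sigmoid(X @ w), 1e-7, 1 - 1e-7)
    return -np.mean(Y * np.log(p) + (1 - Y) * np.log(1 - p))

# Gradient descent on the cross-entropy objective: w <- w - lr * dL/dw.
w, lr, losses = np.zeros(3), 0.5, []
for _ in range(200):
    grad = X.T @ (sigmoid(X @ w) - Y) / len(Y)   # analytic CE gradient
    w -= lr * grad
    losses.append(ce_loss(w))
```

The loss decreases monotonically for a sufficiently small step size; in practice, segmentation networks repeat exactly this loop over millions of parameters with mini-batches of images.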

2.4. Hybrid BCI and Image Segmentation Model for Precision Neurosurgery

As shown in Figure 5, the system workflow begins with real-time neural signal acquisition, where intracranial EEG and functional neuroimaging data are captured to assess the patient’s neurological state. These neural signals are processed using advanced machine learning algorithms, enabling the extraction of relevant features and the classification of critical neural activity patterns. Simultaneously, DL-based image segmentation techniques analyze high-resolution brain scans to delineate surgical targets with sub-millimeter precision. The integration of these two modalities ensures enhanced localization of pathological regions, thereby improving surgical planning and electrode placement accuracy.

2.4.1. System Architecture and Workflow

The hybrid BCI and image segmentation model integrates neural signal acquisition with DL-based medical imaging to enhance precision neurosurgery. The system architecture comprises the following key components:
  • Neural Signal Acquisition: EEG and ECoG signals are collected using high-resolution sensors to capture real-time brain activity. Recent advances in non-invasive and minimally invasive BCI techniques improve spatial resolution and signal fidelity, enabling finer neuro-modulatory applications [79,90,124].
  • Preprocessing Pipeline: Raw neural signals undergo artifact removal, band-pass filtering, and feature extraction to ensure noise-free input for classification. State-of-the-art signal processing frameworks integrate ICA and wavelet decomposition to enhance the robustness of feature extraction [125].
  • DL-Based Image Segmentation: MRI and CT images are processed using transformer-based segmentation models, such as Swin UNETR, for precise delineation of brain structures. The combination of CNNs and self-attention mechanisms significantly improves segmentation accuracy in glioma detection and tumor boundary definition [38].
  • Decision Support System (DSS): The integration of BCI-derived cognitive feedback and AI-based image analysis aids neurosurgeons in optimizing surgical interventions. Multimodal data fusion techniques enhance real-time surgical decision-making, reducing intraoperative errors and improving patient outcomes [126].
  • Cloud Integration: A cloud-based AI/ML framework ensures scalability and real-time computational efficiency. Federated learning models deployed in cloud-based medical AI systems facilitate secure, distributed model training while maintaining patient data privacy [127].

2.4.2. Signal Processing for Real-Time Neurosurgical Assistance

Effective real-time neurosurgical assistance depends on advanced signal-processing techniques. The following methods are employed:
  • Fourier and Wavelet Transforms: Fourier and wavelet transforms are essential mathematical tools for analyzing EEG signals in the frequency domain. The Fourier transform (FT) decomposes EEG waveforms into constituent frequency components, allowing researchers to identify specific oscillatory patterns associated with cognitive processes and motor intentions. However, the FT assumes stationarity in the signal, which is not always applicable to dynamic brain activity [128].
To overcome this limitation, wavelet transform (WT) is utilized, offering superior time-frequency resolution. The WT allows EEG signals to be analyzed across multiple frequency scales, making it particularly useful for detecting transient neurological events such as epileptic seizures, ERPs, and MI tasks in neurosurgical applications. By identifying characteristic frequency bands (e.g., alpha, beta, gamma waves), the system can classify neural states with high accuracy, facilitating real-time decision-making in surgical environments.
  • Independent Component Analysis (ICA): Neural signal recordings, especially EEG, often contain artifacts from non-neural sources such as eye blinks, muscle movements, and external electrical noise. ICA is a powerful statistical technique used to separate and remove these unwanted artifacts while preserving relevant neural information.
ICA operates by assuming that recorded EEG signals are a mixture of independent sources. By applying an optimization algorithm, ICA isolates neural components from noise, significantly enhancing the signal quality for real-time decoding. This method is particularly beneficial in neurosurgical applications where precise neural activity monitoring is critical, as it ensures that the decoded signals accurately reflect the patient’s cognitive state rather than extraneous physiological artifacts [129].
  • Deep Neural Networks (DNNs): Recent advancements in DL have significantly improved EEG-based neural decoding. CNNs and RNNs are particularly effective in extracting spatial and temporal features from EEG data, enabling the classification of brain states with high precision [130].
    CNNs: These networks process EEG signals as spatially structured data, identifying patterns related to motor imagery, cognitive load, and surgical stress responses. CNNs efficiently learn hierarchical representations, making them robust against variations in electrode placement and signal noise.
    RNNs and Long Short-Term Memory (LSTM) Networks: Unlike CNNs, RNNs capture temporal dependencies in EEG signals. LSTM networks, a variant of RNNs, are particularly effective in modeling sequential EEG data, predicting user intent, and tracking dynamic changes in brain activity over time.
By integrating CNNs and RNNs, DL models can classify MI tasks in real time, allowing for precise control of neuroprosthetics, robotic assistants, or surgical guidance systems.
  • Kalman Filters (KFs) and Hidden Markov Models (HMMs): Decoding neural signals in real time involves inherent uncertainty due to noise, signal fluctuations, and measurement errors. KFs and HMMs are probabilistic frameworks designed to address these challenges by smoothing and predicting neural signal patterns.
    Kalman Filters: These are widely used in brain–computer interfaces to estimate dynamic brain states based on noisy EEG measurements. In neurosurgical applications, Kalman filters improve the real-time tracking of neural activity, making it possible to predict intended movements with greater precision.
    HMMs are particularly effective for modeling sequential neural events, such as transitions between different mental states or MI patterns. HMMs assign probabilistic states to EEG sequences, enhancing the accuracy of neurofeedback and BCI-driven assistive technologies.
By combining KFs and HMMs, neurosurgical assistance systems can achieve enhanced robustness, ensuring reliable performance even under variable conditions [131].
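As a concrete illustration of the frequency-domain tools described above, the following SciPy sketch band-pass filters a synthetic single-channel EEG trace (a hypothetical 250 Hz recording, not data from any cited study) and compares alpha- and beta-band power via Welch's method:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def bandpass(x, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter (e.g., to isolate the alpha band)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, fs, low, high):
    """Relative spectral power (sum of Welch PSD bins) within [low, high] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)  # 1-second windows
    mask = (freqs >= low) & (freqs <= high)
    return float(np.sum(psd[mask]))

# Synthetic "EEG": a 10 Hz alpha rhythm buried in white noise
fs = 250                              # Hz, a common EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(bandpass(eeg, 8, 12, fs), fs, 8, 12)
beta = band_power(bandpass(eeg, 13, 30, fs), fs, 13, 30)
print(alpha > beta)  # the 10 Hz component dominates, as constructed
```

Wavelet decomposition, ICA, and Kalman smoothing would slot into the same pipeline between filtering and feature extraction; dedicated libraries (e.g., PyWavelets, scikit-learn's FastICA) provide standard implementations.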

2.4.3. Automated Brain Image Analysis Using DL

The automated brain image analysis pipeline integrates state-of-the-art DL models for robust segmentation, offering automated, high-accuracy segmentation, classification, and diagnostic insights. The key methodologies used include the following:
  • Transformer-Based Segmentation: Traditional convolutional networks often struggle to maintain spatial consistency in brain MRI segmentation. Transformer-based models such as Swin UNETR and TransUNet address this limitation by incorporating self-attention mechanisms that improve feature representation across long-range spatial dependencies.
    Swin UNETR: A hierarchical vision transformer that refines feature extraction while preserving high-resolution structural details in brain MRI scans.
    TransUNet: A hybrid model that combines CNN feature extraction with transformer-based contextual modeling, leading to superior segmentation accuracy in neurosurgical planning and brain tumor delineation [132].
  • Hybrid Attention Mechanisms: DL-based brain segmentation benefits from hybrid attention models, which combine self-attention (global feature learning) and spatial attention (local feature refinement). This approach enhances the precision of region delineation, crucial for neurosurgical decision-making [133].
  • Self-Supervised Learning (SSL): One major limitation of DL in medical imaging is the reliance on large manually labeled datasets. SSL mitigates this issue by leveraging contrastive learning techniques to pre-train models using unlabeled data. This method significantly reduces annotation requirements while maintaining high segmentation accuracy [134].
  • Multi-Modal Fusion: Combining data from multiple imaging modalities, including MRI, CT, and fMRI, enhances diagnostic accuracy by integrating complementary information. DL models perform multi-modal fusion using attention mechanisms, improving robustness against modality-specific noise and artifacts [135].
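The self-attention operation that underpins models such as Swin UNETR and TransUNet can be illustrated with a minimal NumPy sketch of scaled dot-product attention over a sequence of feature vectors (e.g., flattened image patches). The random projection matrices below are stand-ins for learned weights and carry no clinical meaning:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    X: (n_tokens, d_model), e.g., flattened image patches.
    Each output token is a weighted mixture of all value vectors,
    which is what gives transformers their long-range context.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
n, d = 16, 8                                  # 16 "patches", 8-dim features
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)  # (16, 8) (16, 16)
```

Swin-style models restrict this computation to shifted local windows for efficiency, and hybrid architectures interleave it with convolutional feature extraction.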

2.4.4. Integration with Cloud-Based AI/ML Platforms

To ensure seamless operation in clinical settings, the system is integrated with cloud-based AI/ML platforms, which enhance the computational efficiency, security, and adaptability required for real-time neurosurgical assistance.
  • Edge Computing for Low-Latency Processing: To ensure real-time inference in surgical settings, edge computing is employed, enabling on-device processing with minimal latency. This is critical for applications requiring immediate neural signal decoding and feedback mechanisms [136]. Additionally, transformer-based segmentation models used in the system are quantized for on-device inference, enabling real-time processing on edge hardware such as embedded ARM-based systems or neurosurgical workstations with limited GPU capabilities. This significantly reduces latency and reliance on high-bandwidth connectivity, allowing responsive decision support in intraoperative and bedside settings.
  • Federated Learning for Privacy-Preserving AI: Federated learning enables decentralized model training across multiple healthcare institutions while maintaining data privacy. This ensures compliance with regulatory frameworks such as GDPR and HIPAA [137,138].
  • AutoML for Continuous Model Optimization: AutoML techniques automate model selection, hyperparameter tuning, and retraining, allowing continuous improvement of neurosurgical AI models [139]. To further address hardware constraints, knowledge distillation pipelines are employed to generate lightweight student models from large pre-trained segmentation networks. These distilled models retain diagnostic performance while reducing parameter count and computational load, making them suitable for deployment in clinics with modest computational infrastructure. Additionally, AutoML-guided pruning strategies dynamically trim non-contributing network branches, reducing memory footprint and accelerating inference times. To address subject-level variability, the system integrates adaptive learning mechanisms, including transfer learning and few-shot learning, which enable the model to recalibrate to individual neural signatures using minimal new data. This dynamic personalization helps maintain performance despite inter-subject heterogeneity or intra-session variability. Additionally, real-time signal quality estimators such as entropy-based thresholds and SNR filters are incorporated to detect and reject artifact-heavy or physiologically implausible EEG segments before feature extraction. These estimators operate in conjunction with established preprocessing routines (such as ICA, Kalman filtering, and wavelet decomposition) to enhance the reliability of extracted neural features.
  • Blockchain for Data Integrity: Blockchain technology ensures tamper-proof medical records through smart contracts, enhancing transparency and security in neurosurgical data management. Smart contracts ensure unbiased and tamper-proof record-keeping of surgical decisions and patient data [140].
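The federated learning component described above can be illustrated with a minimal FedAvg-style aggregation sketch. This is a schematic, not the system's actual implementation; the two-client setup and layer shapes are hypothetical:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size.

    client_weights: one list of numpy arrays (layers) per client.
    client_sizes: number of local training samples per client.
    Only weights are exchanged; raw patient data never leaves the client,
    which is the privacy property that supports GDPR/HIPAA compliance.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged

# Two hospitals with different amounts of local data (toy single-layer "model")
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 3.0])]
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # weighted toward the larger client: [2.5, 2.5]
```

In a full deployment the server broadcasts `global_w` back to each site for the next local training round, iterating until convergence.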
Integration across modalities is achieved through a shared signal coordination framework. EEG-derived cognitive markers (e.g., desynchronization, MI activity) are classified in real time and trigger priority adjustments in the image segmentation module. For instance, the detection of motor preparation or stress indicators can prompt adaptive resegmentation in high-risk areas, with visualization presented via the DSS.

2.5. Performance Evaluation and Statistical Analysis

The evaluation of BCI systems and image segmentation models requires rigorous performance assessment through quantitative metrics and statistical validation. This section outlines the key evaluation metrics for BCI systems and segmentation accuracy, followed by statistical significance testing methods used to validate the experimental results.

2.5.1. Performance Metrics for BCI Systems

BCI systems are evaluated primarily based on their efficiency in translating neural signals into meaningful outputs. ITR quantifies the speed and efficiency of a BCI system in transmitting information. It is measured in bits per minute (bpm) and is calculated, per selection, using the following formula:
$$ITR = \log_2 N + P \log_2 P + (1 - P) \log_2\!\left(\frac{1 - P}{N - 1}\right)$$
where $N$ represents the number of possible classes (commands or selections) and $P$ is the classification accuracy. ITR provides a measure of how effectively a BCI system can communicate information within a given time frame. Classification accuracy itself is also reported as a standalone metric: the proportion of correctly classified trials over the total number of trials, expressed as a percentage. It serves as a direct measure of the reliability of the BCI system in distinguishing between different mental states or commands, with higher accuracy indicating better performance in decoding neural signals.
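The ITR formula can be computed as follows. The guard clauses for perfect and chance-level accuracy are a common practical convention (the raw formula is undefined at P = 1 and negative below chance), not part of the original definition:

```python
import numpy as np

def itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw-style information transfer rate per selection, in bits."""
    N, P = n_classes, accuracy
    if P >= 1.0:
        return float(np.log2(N))      # limit as P -> 1
    if P <= 1.0 / N:
        return 0.0                    # at or below chance: no information
    return float(np.log2(N) + P * np.log2(P)
                 + (1 - P) * np.log2((1 - P) / (N - 1)))

def itr_bits_per_minute(n_classes, accuracy, selections_per_min):
    """Scale per-selection ITR by the system's selection rate."""
    return itr_bits_per_selection(n_classes, accuracy) * selections_per_min

# Example: a 4-class SSVEP speller at 90% accuracy and 20 selections/min
print(round(itr_bits_per_minute(4, 0.90, 20), 1))  # ≈ 27.5 bits/min
```

Note that ITR rises with both accuracy and class count, which is why high-speed SSVEP spellers with many targets can exceed the 22.5 bits/min figure cited in the abstract.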

2.5.2. Evaluating Segmentation Accuracy

The accuracy of segmentation models in medical image analysis is commonly evaluated using overlap-based metrics, such as the DSC, and distance-based metrics, such as the Hausdorff distance. DSC evaluates the spatial overlap between the predicted segmentation ($S$) and the ground truth ($G$) and is computed as follows:
$$DSC = \frac{2\,|S \cap G|}{|S| + |G|}$$
where $S$ and $G$ represent the sets of predicted and ground truth pixels, respectively, and $|S \cap G|$ is the number of overlapping pixels between the predicted and ground truth regions. A higher DSC value (closer to 1) indicates better segmentation performance. The Jaccard Index (intersection over union, IoU) is another measure of segmentation accuracy, defined as follows:
$$Jaccard = \frac{|S \cap G|}{|S \cup G|}$$
where, as with the DSC, $S$ and $G$ represent the predicted and ground truth pixel sets, respectively, and the union $|S \cup G|$ counts all pixels present in either set. The Jaccard Index penalizes false positives more harshly than the DSC and is commonly used in segmentation benchmarks. Its values range from 0 to 1, with higher values representing superior segmentation accuracy. In integrated BCI-segmentation systems, latency alignment between EEG decoding (typically <300 ms) and image processing pipelines (1–2 s) must also be evaluated to ensure actionable feedback within neurosurgical timescales.
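Both overlap metrics are straightforward to compute on boolean masks, as in this illustrative sketch (the toy masks below are arbitrary examples):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DSC = 2|S ∩ G| / (|S| + |G|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard_index(pred, truth):
    """IoU = |S ∩ G| / |S ∪ G|; never exceeds the DSC for the same masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

truth = np.array([[1, 1, 0], [1, 0, 0]])
pred  = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
print(jaccard_index(pred, truth))     # 2 / 4 = 0.5
```

The two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), so a Dice score above 0.91, as reported for glioma delineation, corresponds to an IoU above about 0.84.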

2.5.3. Statistical Significance Testing

To ensure the robustness of the results and validate the effectiveness of the proposed models, statistical significance tests are conducted. Commonly used statistical tests include the following:
  • Paired t-test: Used when comparing the performance of two models on the same dataset, evaluating whether the mean difference between paired observations is statistically significant.
  • Wilcoxon Signed-Rank Test: A non-parametric alternative to the paired t-test, suitable when the data does not follow a normal distribution.
  • Analysis of Variance (ANOVA): Applied when comparing multiple models or experimental conditions to determine whether significant differences exist among them.
  • Permutation Testing: A robust statistical method used to assess the significance of performance differences by randomly shuffling labels and recalculating metrics to generate a null distribution.
These statistical tests ensure that observed improvements in performance are not due to random variation but rather reflect meaningful differences in model effectiveness. For synthesis and comparative analysis, included studies were grouped into three categories based on integration modality and outcome metrics: (i) neuroimaging modalities (Table 1), (ii) BCI paradigms and hybrid architectures (Table 2), and (iii) AI-based image segmentation models (Table 3, as shown in Section 3.4 later in the text). This grouping aligned the synthesis structure with technological modality, enabling consistent comparison across spatial/temporal resolution, invasiveness, signal latency, training requirements, and performance metrics such as DSC and ITR. Studies without quantifiable performance benchmarks were excluded from quantitative synthesis but discussed narratively where clinically informative.
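The paired tests above take only a few lines with SciPy. In the sketch below, the paired DSC scores are synthetic stand-ins for two segmentation models evaluated on the same set of scans, not data from any included study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Paired DSC scores of a baseline and a proposed model on the same 20 scans
baseline = rng.normal(0.85, 0.02, size=20)
proposed = baseline + rng.normal(0.03, 0.01, size=20)  # consistent improvement

t_stat, p_t = stats.ttest_rel(proposed, baseline)   # paired t-test
w_stat, p_w = stats.wilcoxon(proposed, baseline)    # non-parametric analogue

print(f"paired t-test p = {p_t:.2e}, Wilcoxon p = {p_w:.2e}")
```

Because the two models are scored on the same scans, the paired tests exploit the per-scan correlation; an unpaired test on the same data would be both less powerful and statistically inappropriate.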

3. Results and Discussion

The systematic synthesis of literature presented in the preceding sections reveals significant technological convergence between brain–computer interfaces (BCIs) and AI-driven image segmentation frameworks in the context of precision neurosurgery. Collectively, the reviewed studies demonstrate marked improvements in spatial resolution, temporal accuracy, and adaptability when hybrid neuroimaging modalities (e.g., EEG–fNIRS) are integrated with deep learning-based segmentation algorithms. Furthermore, comparative analyses underscore the growing clinical feasibility of such systems, particularly in surgical planning, intraoperative navigation, and neurorehabilitation support. Building on these findings, this section discusses the persistent translational challenges, ethical considerations, and future directions necessary to enable the routine clinical deployment of such systems.
Despite substantial progress in BCI technology, several critical challenges persist, hindering widespread adoption and real-world applicability. Addressing these challenges requires interdisciplinary efforts, integrating advancements in neuroscience, engineering, and artificial intelligence. The integration of BCI systems into real-world applications, particularly in neurosurgical and clinical settings, presents several challenges, including technological, ethical, and regulatory hurdles. This section explores the key obstacles in real-world BCI implementation, ethical concerns surrounding AI-driven neurosurgical systems, and potential future research directions to enhance the efficacy and safety of BCIs.

3.1. Challenges in Real-World Implementation

Implementing BCI systems in real-world clinical environments is challenging due to several factors, including technological limitations, user variability, data processing latency, long-term stability, and regulatory acceptance. BCI systems, particularly invasive ones, face challenges related to electrode degradation, biocompatibility issues, and the risks associated with surgical implantation [141]. Non-invasive BCIs, while safer, often suffer from lower signal resolution due to interference from the skull and scalp, which reduces classification accuracy and real-time usability [142]. Non-invasive methods such as EEG-based systems are also highly susceptible to noise, including muscle artifacts, environmental interference, and overlapping neural signals. This low SNR compromises signal reliability, necessitating advanced denoising algorithms such as wavelet transforms, ICA, and DL-based noise reduction techniques [143]. Improving SNR remains a primary research focus to enhance decoding accuracy and user experience. BCI performance is further affected by inter- and intra-user variability, as differences in brain physiology, cognitive states, age, neurological conditions, and learning capabilities lead to inconsistent neural responses and limit model generalization [144]. Some users naturally exhibit strong, distinct neural patterns, while others require extensive calibration and training. This variability necessitates adaptive BCI models capable of personalizing algorithms in real time to accommodate individual differences; emerging approaches such as transfer learning, few-shot learning, and user-specific feature selection hold promise in addressing this issue.
To mitigate these limitations, we employ hybrid BCI architectures such as EEG-fNIRS and EEG-fMRI, which combine complementary signal characteristics. EEG contributes high temporal precision, while hemodynamic modalities offer spatial stability. These integrations provide a more robust foundation for cognitive state classification by reducing susceptibility to individual signal degradation or session-based variability. The real-time application of BCIs in neurosurgical procedures requires ultra-fast neural decoding with minimal latency. However, existing machine learning algorithms often struggle with balancing speed and accuracy, limiting practical clinical deployment [145]. Neural tissue reactions to implanted electrodes can cause signal degradation over time, necessitating frequent recalibration or device replacement. Even non-invasive BCIs require extensive training to maintain stable classification performance, which can be burdensome for users [146]. Obtaining regulatory approval from agencies such as the USFDA or the EMA requires extensive clinical trials. Additionally, neurosurgeons and healthcare professionals need specialized training to integrate BCIs into surgical workflows [147].

3.2. Ethical Considerations in AI-Driven Neurosurgical Systems

BCI-driven neurosurgical systems introduce profound ethical dilemmas, including patient autonomy and informed consent, privacy and data security, equity and accessibility, dual-use concerns as well as psychological and social impact. Patients must be fully informed about the capabilities, limitations, and risks associated with BCIs, including potential unintended consequences such as cognitive alterations [148]. BCIs generate highly sensitive neural data that, if improperly stored or shared, could lead to unprecedented privacy breaches. Robust encryption and secure data-sharing frameworks are necessary to prevent unauthorized access [149]. In parallel, the regulatory viability of AI-assisted segmentation systems depends heavily on model explainability. To this end, attention-based transformer models integrated within our proposed system produce interpretable heatmaps and activation trajectories. These visualizations offer clinicians an empirical rationale for segmentation boundaries, supporting human-in-the-loop verification and reducing medicolegal ambiguity. Such explainability layers also align with emerging FDA and CE guidelines that require algorithmic transparency for AI-based diagnostic tools, thereby smoothing the path toward clinical adoption. As BCIs enable direct access to neural activity, concerns regarding data privacy, cognitive autonomy, and the risk of unintended neural information leakage have emerged [150]. Unauthorized access to neural data could lead to intrusive surveillance, neuromarketing, or manipulation, raising profound ethical implications. To mitigate these concerns, we propose integrating blockchain-based audit trails and federated learning architectures within the BCI-AI framework. These technologies preserve data integrity and enable secure, decentralized training without transferring sensitive patient data across institutions. 
By ensuring patient data never leaves the local site, federated learning satisfies key GDPR and HIPAA requirements, while blockchain ensures tamper-proof traceability of access and modifications.
We also highlight the emerging model of dynamic informed consent, which allows patients to update or revoke data-sharing permissions over time. This paradigm is particularly relevant in BCI systems where longitudinal neural data collection intersects with evolving patient preferences. Moreover, our architecture incorporates human-in-the-loop supervisory controls, ensuring that ultimate decision authority remains with licensed clinicians. This safeguards against liability transfer to automated systems, clarifies accountability, and supports regulatory approval pathways. Additionally, issues such as mental fatigue, long-term neuroplastic effects, and the potential for subconscious bias in AI-driven BCIs necessitate the development of robust legal and ethical frameworks to protect user rights. Advanced BCIs could be prohibitively expensive, raising concerns about disparities in access to cutting-edge neurosurgical interventions. Ethical frameworks must ensure that these technologies do not exacerbate existing healthcare inequalities [151]. BCI technologies have potential applications beyond medicine, including military and surveillance uses, which raise ethical and human rights concerns [152]. International regulations must prevent misuse while enabling beneficial applications. The integration of AI-driven BCIs into human cognition raises questions about identity, agency, and the potential psychological effects of brain augmentation [153]. More research is needed to understand the long-term impact on mental health and self-perception.

3.3. Future Research Directions

Advancements in BCI technologies for neurosurgical applications necessitate addressing several critical challenges. Enhancing neural signal acquisition requires the development of biocompatible materials and ultra-sensitive sensors to improve the longevity and stability of neural recordings, ensuring minimal signal degradation over time [154]. Equally important for clinical adoption is the development of hardware-conscious architectures. Future iterations of BCI-integrated AI systems must prioritize computational frugality through model compression, quantization, and dynamic load balancing. These adaptations are crucial to ensure deployment in geographically diverse and economically varied medical environments. Developing novel high-density EEG systems, optically pumped magnetometers (OPMs), and hybrid neuroimaging techniques could improve signal clarity and resolution. Minimally invasive electrodes and wearable dry EEG sensors are also being explored to enhance comfort and usability while maintaining signal fidelity. Improving neural decoding can be achieved by integrating adaptive machine learning models, such as transformer-based architectures and self-supervised learning techniques, which dynamically adjust to individual brain signals, enhancing real-time performance and reducing calibration time [155]. Combining multiple neural modalities such as EEG with fNIRS or EMG can enhance robustness and reduce dependency on single-signal sources. Hybrid BCIs leverage complementary strengths of different modalities, improving classification accuracy and expanding application domains. Furthermore, personalizing BCI systems through adaptive algorithms that continuously learn from individual users can optimize signal classification accuracy, facilitating seamless interaction between patients and neuroprosthetic devices [156,157].
Ethical and regulatory concerns surrounding BCI technologies, including data privacy, informed consent, and equitable access, must be addressed through the establishment of standardized global policies [158]. Additionally, rigorous longitudinal studies must be conducted to assess the long-term neurological and psychological impact of invasive and non-invasive BCI systems before widespread clinical adoption. Advancements in DL, reinforcement learning, and neuroadaptive AI are expected to revolutionize BCI decoding. AI models capable of real-time contextual learning and error correction will improve system reliability, reducing training time and increasing user engagement.
Emerging architectures such as transformer-based EEG decoders and self-supervised learning pipelines have begun to redefine non-invasive BCI accuracy under data-scarce conditions. These models offer improved scalability and generalizability compared to traditional CNN-RNN frameworks. In parallel, federated learning approaches are being explored to enable decentralized model training across clinical centers while preserving patient privacy. As BCI systems move toward adaptive co-learning, the integration of meta-learning and neuroadaptive feedback loops may allow real-time personalization without full retraining.
Despite these advances, several critical challenges remain. These include signal variability across users and sessions, high-latency bottlenecks in closed-loop execution, and the interpretability of deep neural network outputs, especially during high-stakes neurosurgical procedures [159]. Additionally, there is a pressing need for standardized performance benchmarks, such as classification accuracy, latency under real-time constraints, DSC for segmentation tasks, and ITR in control applications. These metrics will be essential in evaluating clinical readiness and reproducibility across research environments.
Recent studies have demonstrated the efficacy of hybrid CNN-RNN architectures for decoding motor intentions with high temporal precision [159], while transformer-based models show promise in non-invasive EEG signal classification [160]. Additionally, closed-loop BCI architectures with adaptive feedback mechanisms will facilitate more intuitive interactions between users and assistive technologies. Emerging works in bidirectional BCI systems have shown improvements in prosthetic control and tactile feedback integration [161]. A collaborative effort among neuroscientists, engineers, clinicians, ethicists, and regulatory bodies is essential to foster responsible innovation in BCI research and implementation. By tackling these challenges, BCI technologies can evolve into reliable, secure, and ethically sound solutions for neurosurgical interventions. Multicenter trials validating real-time BCI-integrated surgical systems are currently sparse but urgently needed [147]. By addressing these challenges and integrating novel technological advancements, BCIs have the potential to transform healthcare, neurorehabilitation, and human–computer interaction, bridging the gap between cognitive intention and external control.

3.4. Translational Impact of AI-Based Segmentation Models in Neurosurgical Practice

AI-driven segmentation has rapidly emerged as a transformative tool in neurosurgical imaging, promising faster and more consistent results than traditional manual methods. Manual segmentation of brain structures (e.g., tumors or anatomical regions) has long been the gold standard but is labor-intensive, requires expert time, and suffers from inter-observer variability [162]. In contrast, automated AI segmentation can achieve expert-level accuracy while significantly reducing time and effort. For example, a recent study reported an AI-assisted tool performing tumor segmentation 37% faster than a conventional manual workflow, with comparable or better accuracy (Dice coefficients ~0.83–0.91 vs. 0.80–0.84) [163]. Such improvements translate to quicker pre-surgical planning and more reproducible measurements, addressing a critical bottleneck in neurosurgical practice. Overall, the use of AI for brain image segmentation has demonstrated high accuracy across neuroanatomical structures and pathologies, while drastically accelerating analysis [164].
In neurosurgery, timely and precise delineation of regions (tumor boundaries, critical structures, etc.) is vital for surgical planning and intraoperative guidance. Traditional manual segmentation by radiologists or surgeons is time-consuming and subject to subjective variations. AI-based segmentation models offer a compelling alternative: once trained, they can delineate complex structures within seconds, producing segmentations with high reproducibility. Studies have shown that automated algorithms can match expert performance in tasks like tumor volumetrics while eliminating inter-rater inconsistencies. Importantly, AI segmentation can be deployed intraoperatively (e.g., on MRI or ultrasound) to provide real-time feedback [165]. By highlighting tumor margins or eloquent cortex on imaging, AI assists surgeons in achieving more complete resections while avoiding critical areas. The consistency of AI outputs also enables longitudinal tracking of disease (tumor growth, edema, etc.) with reduced error. In summary, AI segmentation improves upon manual methods by offering speed, consistency, and scalability—critical factors for translating imaging insights into neurosurgical action [166].
The evolution of deep learning architectures has been central to the success of medical image segmentation in neurosurgery. Table 3 compares several influential AI segmentation models applied in neurosurgical imaging, highlighting their performance on key tasks. The U-Net architecture, with its U-shaped encoder–decoder and skip connections, laid the groundwork by enabling accurate pixel-level segmentation from limited training data. U-Net quickly became ubiquitous in brain MRI segmentation (e.g., tumor delineation), often achieving Dice scores in the mid-80% range for tumor subregions in the BraTS challenges. Building on this, more recent models incorporate novel mechanisms to capture global context:
  • TransUNet: One of the first architectures to integrate transformers into medical segmentation [167]. TransUNet combines a CNN encoder with a transformer module for long-range dependency capture, and a decoder for precise localization. This hybrid design has shown improved accuracy over pure CNNs—for instance, TransUNet yielded ~1–4% higher Dice scores than the robust nnU-Net on multi-organ and tumor segmentation tasks. In neurosurgical imaging, the added self-attention allows better identification of diffuse or irregular tumor margins than convolution alone.
  • Swin UNETR: A 3D segmentation model using a Swin transformer-based encoder with a U-Net style decoder [168]. By employing hierarchical transformers (Swin) that compute self-attention in shifted windows, Swin UNETR excels at capturing multi-scale context in volumetric MRI. This model achieved state-of-the-art performance on the brain tumor segmentation (BraTS) challenge, with reported average Dice scores ~90%+ across tumor subregions. Such performance illustrates the ability of transformer-based models to handle the variable sizes and shapes of neurosurgical pathologies.
  • nnU-Net: A self-configuring framework that automatically tunes the segmentation pipeline to a given dataset. Rather than a novel network architecture, nnU-Net optimizes preprocessing, architecture selection, training, and postprocessing in an all-in-one manner [169]. It has dominated many medical segmentation benchmarks, including neurosurgical tasks, by adapting U-Net variants to the data at hand. Remarkably, nnU-Net out-of-the-box has matched or surpassed custom models on 23 public datasets. In neurosurgical applications (tumor, vessel, and tract segmentation), nnU-Net’s optimized approach yields Dice scores often above 90%, essentially setting a performance ceiling that new architectures strive to beat.
Table 3. Comparative performance of AI segmentation models in neurosurgical imaging.
| Model | Key Characteristics | Example Application (Dataset) | Performance (Dice) | Ref. |
| --- | --- | --- | --- | --- |
| U-Net | CNN encoder–decoder with skip connections; first widely adopted medical segmentation network. | Brain tumor MRI segmentation (BraTS) [170] | ~85% (whole tumor Dice) | [171,172] |
| TransUNet | Hybrid transformer + U-Net architecture capturing long-range context. | Multiorgan CT; also applied to brain tumors. | Outperforms basic U-Net (e.g., +1–4% Dice vs. nnU-Net). | [167,173] |
| Swin UNETR | Swin transformer encoder with U-Net decoder for 3D volumes. | Brain tumors (BraTS 2021) | ~90–93% (Dice across tumor subregions) | [168] |
| nnU-Net | Auto-configuring U-Net pipeline; no manual tuning needed. | Multiple (tumors, vessels, etc.—various challenges) | ~90%+ (top performance on numerous tasks) | [159] |
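The encoder–decoder-with-skip-connections idea that all of the models above build on can be sketched without any learned weights: the encoder downsamples for context, the decoder upsamples back to full resolution, and the skip connection reinjects the encoder's fine spatial detail. A toy NumPy illustration, in which simple averaging stands in for the real concatenation-plus-convolution fusion:

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling: the encoder's downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2x2(x):
    """Nearest-neighbour upsampling: the decoder's expanding step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_level(x):
    """One encoder/decoder level with a skip connection (no learned weights)."""
    skip = x                           # encoder feature saved for the skip path
    bottleneck = max_pool2x2(x)        # coarse, context-rich representation
    decoded = upsample2x2(bottleneck)  # restored to the input resolution
    # Skip connection: fuse fine encoder detail with coarse decoder context.
    # A real U-Net concatenates channels and convolves; here we simply average.
    return 0.5 * (decoded + skip)

x = np.arange(16, dtype=float).reshape(4, 4)
out = toy_unet_level(x)
print(out.shape)  # (4, 4) — output resolution matches the input
```

The skip path is what lets these networks recover sharp boundaries (e.g., tumor margins) that would otherwise be blurred away by the downsampling–upsampling round trip.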
As AI models become decision-making aids in neurosurgery, explainable AI (XAI) techniques are crucial for clinical acceptance. Surgeons need to trust and understand the rationale behind an algorithm’s segmentation, especially in high-stakes cases such as delineating a tumor adjacent to the motor cortex. Techniques such as gradient-weighted class activation mapping (Grad-CAM) [174,175] generate heatmaps highlighting the image regions that most influenced the model’s output. Applying Grad-CAM to a tumor segmentation network, for instance, can show which voxels the model “thinks” are tumorous, offering a sanity check for the surgeon. Variants such as Score-CAM [176] go further by avoiding reliance on model gradients, instead using model output scores to produce more stable importance maps. In practice, these XAI overlays can be superimposed on patient images in the operating room or in planning software, providing visual explanations for the AI segmentation—essentially answering why the model marked a region as tumor or as normal tissue. Given the engineering orientation of the reviewed studies, certainty assessments using clinical evidence frameworks (e.g., GRADE) were not applicable; nevertheless, the consistency of empirical performance metrics (DSC, ITR) across independent datasets, together with the convergence of findings across the imaging and BCI domains, provides a moderate level of confidence in the robustness and translational relevance of the synthesized evidence. Transformer-based models inherently offer another form of interpretability: their attention maps can be visualized to reveal long-range feature dependencies. For example, the self-attention in TransUNet can be inspected to verify that the model is attending to relevant anatomical boundaries rather than to artifacts.
Such interpretability boosts user confidence and facilitates regulatory approval by demonstrating transparency. Early clinical studies and reviews emphasize that integrating explainability (through Grad-CAM, attention maps, etc.) is essential for surgeon buy-in and for meeting transparency requirements in healthcare.
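The Grad-CAM computation described above reduces to a few array operations: pool the gradients per channel to obtain weights, take a weighted sum of the activation maps, and apply a ReLU to keep only positive evidence. A minimal NumPy sketch on synthetic activations, not tied to any specific network or study:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations and gradients.

    feature_maps: (K, H, W) activations A_k of the chosen layer
    gradients:    (K, H, W) gradients dY/dA_k of the class/segment score Y
    """
    # Channel weights: global average pooling over each gradient map
    weights = gradients.mean(axis=(1, 2))                               # (K,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalise to [0, 1] so the map can be overlaid on the source image
    return cam / cam.max() if cam.max() > 0 else cam

# Toy example with synthetic activations and gradients
rng = np.random.default_rng(0)
A = rng.random((8, 16, 16))         # 8 feature maps from a hypothetical layer
dA = rng.random((8, 16, 16)) - 0.2  # synthetic gradients
heatmap = grad_cam(A, dA)
print(heatmap.shape)                # (16, 16), values in [0, 1]
```

In a clinical overlay, `heatmap` would be resized to the image resolution and alpha-blended onto the MRI slice, making the model's focus visible at a glance.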
Bias, Data, and Regulatory Hurdles: Despite impressive progress, significant challenges remain before AI segmentation is fully integrated into routine neurosurgical practice. Data bias and generalizability are major concerns: if a model is trained on imaging data from a narrow demographic or a single type of MRI scanner, its performance may drop when it encounters different patient populations or institutions. Addressing this requires diverse, high-quality training datasets—which in neurosurgery are often limited due to the relative rarity of certain conditions and the effort needed for expert annotations [177]. Recent analyses highlight the risk of bias and the need for careful curation of training data to ensure fair performance across subgroups. Dataset limitations in neurosurgery (small sample sizes, single-center data) can lead to overfitting [178]; thus, initiatives such as international challenges (e.g., BraTS for tumors) and federated learning collaborations are key to pooling data and improving robustness. Another challenge is the regulatory and validation requirements for AI-driven tools. Patient safety mandates rigorous validation of segmentation models under clinically realistic conditions. Regulatory bodies such as the FDA now emphasize Good Machine Learning Practice principles, including the use of representative datasets, independent validation, and human–AI team assessment. AI segmentation software may be classified as a medical device, requiring a demonstration of reliability, transparency, and benefit before approval [179]. The black-box nature of many deep models ties into this—hence the push for explainability and ongoing post-market monitoring of AI performance. Additionally, ethical and legal questions (e.g., liability for AI errors and data privacy) must be resolved. It is widely acknowledged that these technologies should augment, not replace, clinical judgment.
Surveys of patients and providers show cautious optimism: while patients find AI-assisted care acceptable, the consensus is that the surgeon must remain in control of final decisions. Ultimately, a combination of technical solutions (for bias mitigation and validation) and updated regulatory frameworks will be required to safely translate AI segmentation from lab to bedside. Looking forward, the intersection of AI segmentation with BCI and other neuro-modulatory technologies opens an exciting frontier for neurosurgical innovation. BCIs provide a direct communication link with the patient’s nervous system—for example, recording brain signals or stimulating the brain—and AI can enhance these interfaces in real time. One visionary concept is a closed-loop surgical guidance system: AI could automatically segment critical structures on imaging and feed this information to a surgeon’s BCI-controlled device.
In theory, a neurosurgeon could use a cortical control signal (captured via a BCI) to adjust an AI-derived hologram or to command a robotic instrument, all while the AI segmentation continuously updates the map of tumor vs. healthy tissue. Early steps toward this synergy are already evident. For instance, researchers have proposed combining BCI feedback with AI vision, using techniques like real-time Grad-CAM to let the brain–computer interface “verify” what the imaging AI is focusing on. Such a hybrid BCI-AI pipeline could theoretically allow neural modulation of the segmentation process—e.g., amplifying regions of interest based on the surgeon’s detected cognitive response or alertness. While these integrations are in nascent stages, they represent a compelling direction for precision neurosurgery. A recent review noted that integrating AI with BCI technology and developing closed-loop control systems are seen as key advances for the next generation of neurotechnology [180]. Envisioned applications include AI-enhanced BCIs for stroke rehabilitation, where segmentation of brain lesions guides adaptive stimulation, or tumor resection where the surgeon’s intent (decoded via BCI) influences the AI’s real-time image analysis. Achieving this will require overcoming substantial technical hurdles (latency, signal noise, interface design), as well as ensuring robust safety mechanisms. Nonetheless, the convergence of AI segmentation models with BCIs, robotics, and augmented reality hints at a future “intelligent neurosurgical cockpit”—one in which surgeons are supported by a seamless loop of brain signals, AI-driven insights, and actuators for unparalleled precision.
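No such closed-loop system exists today; purely as a thought experiment, the loop envisioned above might be sketched as follows. Both functions are hypothetical stand-ins: `decode_attention` replaces a trained BCI decoder, and the decoded state simply modulates how conservatively the AI's probability map is thresholded.

```python
import numpy as np

def decode_attention(eeg_window: np.ndarray) -> float:
    """Hypothetical decoder: map a window of neural signal to an attention
    score in [0, 1]. A real BCI would use a trained classifier here."""
    power = float(np.mean(eeg_window ** 2))
    return min(power / (power + 1.0), 1.0)

def closed_loop_step(prob_map: np.ndarray, eeg_window: np.ndarray) -> np.ndarray:
    """One iteration of the envisioned loop: the surgeon's decoded state
    modulates the threshold applied to the AI's segmentation probabilities."""
    attention = decode_attention(eeg_window)
    # Higher attention -> lower threshold -> more tissue flagged for review
    threshold = 0.9 - 0.4 * attention
    return (prob_map >= threshold).astype(np.uint8)

rng = np.random.default_rng(1)
prob_map = rng.random((8, 8))    # AI segmentation probabilities for one slice
eeg = rng.standard_normal(256)   # one window of (synthetic) neural signal
mask = closed_loop_step(prob_map, eeg)
print(mask.shape)
```

A real system would face exactly the hurdles the text names—latency, signal noise, and safety interlocks—before any neural signal could be allowed to influence intraoperative image analysis.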
AI-based segmentation models are increasingly making the leap from research to clinical neurosurgery, with clear benefits in speed, consistency, and possibly accuracy of anatomical delineation. The translational impact is already evident in neuro-oncology, where deep learning models can detect, segment, and even prognosticate tumors with accuracy on par with experts. As we integrate these models, maintaining surgeon oversight and embedding interpretability will be paramount. In the coming years, we anticipate a paradigm shift toward hybrid systems—for example, AI segmentation combined with surgeon-driven BCIs and advanced visualization—that together elevate the precision of neurosurgical interventions beyond what either human or machine could achieve alone [181]. This symbiosis of human skill and artificial intelligence, under appropriate regulatory and ethical guidance, holds the promise of safer surgeries, improved patient outcomes, and the continued advancement of neurosurgical practice into the era of intelligent therapeutics.

4. Conclusions

This review has examined the intersection of BCIs and AI-driven image segmentation in the context of precision neurosurgery. We have explored how DL techniques, particularly CNNs, enhance neuroimaging modalities such as MRI and CT scans, enabling automated segmentation and precise brain mapping. The discussion has underscored the role of BCIs in neurosurgical procedures, focusing on real-time neural signal processing, robotic-assisted surgery, and AI-enhanced intraoperative decision-making. While these technologies significantly improve surgical precision and patient outcomes, they also present challenges related to signal reliability, latency, computational efficiency, and ethical concerns, particularly regarding data privacy, accessibility, and regulatory oversight.
The integration of BCIs and AI into neurosurgical workflows holds transformative potential. AI-driven image segmentation enhances the accuracy of preoperative planning and intraoperative guidance, reducing human error and ensuring optimal tissue differentiation. BCIs contribute by facilitating real-time neural monitoring, enabling adaptive surgical responses, and assisting patients with severe neurological impairments through neuroprosthetic applications. Intraoperative AI-driven robotic assistance minimizes invasiveness and optimizes precision, leading to reduced recovery times and improved long-term patient outcomes. However, widespread clinical adoption depends on addressing technological constraints, refining machine learning models for real-time adaptation, and developing robust ethical and regulatory frameworks to ensure patient safety and equitable access.
This review was conducted in accordance with the PRISMA 2020 guidelines [182] to ensure methodological transparency across study identification, synthesis, and reporting. The synergy between BCI technology and AI-driven medical image processing represents a paradigm shift in neurosurgical practice. While current advancements demonstrate significant promise, successful translation into the operating room will require not only technical precision but also low-latency architectures, clinically interpretable AI outputs, and validated human–AI workflows. Challenges such as data heterogeneity, regulatory compliance, and ethical transparency must be addressed at scale. Future research must bridge these gaps by prioritizing robust performance benchmarking, interoperable system integration, and real-time surgical feedback loops. With continued interdisciplinary innovation and responsible implementation, BCI-integrated AI systems are poised to redefine precision neurosurgery and patient-centered neurotechnology.

Author Contributions

Conceptualization, S.G. and D.M.; methodology, S.G.; software, not applicable; validation, B.G. and D.M.; formal analysis, D.K.K.; investigation, S.G. and P.S.; resources, P.S.; data curation, P.S.; writing—original draft preparation, S.G. and P.S.; writing—review and editing, D.K.K., B.G., and D.M.; visualization, S.G. and P.S.; supervision, B.G. and D.M.; project administration, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the European Union–H2020 Teaming programme under Grant nr. 739593 HCEMM.

Data Availability Statement

Not applicable.

Acknowledgments

D.M. acknowledges the support from the Department of Biophysics and Radiation Biology and the National Research, Development and Innovation Office at Semmelweis University, and the Ministry of Innovation. B.G. acknowledges the support from the Lee Kong Chian School of Medicine and Data Science, the AI Research (DSAIR) Centre of NTU, and the Cognitive Neuro Imaging Centre (CONIC) at NTU.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain Computer Interface Advancement in Neurosciences: Applications and Issues. Interdiscip. Neurosurg. 2020, 20, 100694. [Google Scholar] [CrossRef]
  2. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar] [CrossRef]
  3. Maiseli, B.; Abdalla, A.T.; Massawe, L.V.; Mbise, M.; Mkocha, K.; Nassor, N.A.; Ismail, M.; Michael, J.; Kimambo, S. Brain-Computer Interface: Trend, Challenges, and Threats. Brain Inform. 2023, 10, 20. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, Y.; Wang, R.; Gao, X.; Hong, B.; Gao, S. A Practical Vep-Based Brain-Computer Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 234–240. [Google Scholar] [CrossRef]
  5. Ang, K.K.; Guan, C.; Phua, K.S.; Wang, C.; Zhou, L.; Tang, K.Y.; Ephraim Joseph, G.J.; Keong Kuah, C.W.; Geok Chua, K.S. Brain-Computer Interface-Based Robotic End Effector System for Wrist and Hand Rehabilitation: Results of a Three-Armed Randomized Controlled Trial for Chronic Stroke. Front. Neuroeng. 2014, 7, 30. [Google Scholar] [CrossRef] [PubMed]
  6. Yuste, R.; Goering, S.; Agüeray Arcas, B.; Bi, G.; Carmena, J.M.; Carter, A.; Fins, J.J.; Friesen, P.; Gallant, J.; Huggins, J.E.; et al. Four Ethical Priorities for Neurotechnologies and AI. Nature 2017, 551, 159–163. [Google Scholar] [CrossRef]
  7. Brocal, F. Brain-Computer Interfaces in Safety and Security Fields: Risks and Applications. Saf. Sci. 2023, 160, 106051. [Google Scholar] [CrossRef]
  8. Xu, Y.; Quan, R.; Xu, W.; Huang, Y.; Chen, X.; Liu, F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering 2024, 11, 1034. [Google Scholar] [CrossRef]
  9. Khan, R.F.; Lee, B.D.; Lee, M.S. Transformers in Medical Image Segmentation: A Narrative Review. Quant. Imaging Med. Surg. 2023, 13, 8747–8767. [Google Scholar] [CrossRef]
  10. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  11. Liu, Z.; Ding, L.; He, B. Integration of EEG/MEG with MRI and FMRI in Functional Neuroimaging. IEEE Eng. Med. Biol. Mag. 2006, 25, 46. [Google Scholar] [CrossRef]
  12. Christ, P.F.; Ettlinger, F.; Grün, F.; Elshaera, M.E.A.; Lipkova, J.; Schlecht, S.; Ahmaddy, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; et al. Automatic Liver and Tumor Segmentation of CT and MRI Volumes Using Cascaded Fully Convolutional Neural Networks. arXiv 2017, arXiv:1702.05970. [Google Scholar]
  13. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef]
  14. Hamilton, L.S.; Chang, D.L.; Lee, M.B.; Chang, E.F. Semi-Automated Anatomical Labeling and Inter-Subject Warping of High-Density Intracranial Recording Electrodes in Electrocorticography. Front. Neuroinform. 2017, 11, 272432. [Google Scholar] [CrossRef] [PubMed]
  15. Ruberto, C.D.; Stefano, A.; Comelli, A.; Putzu, L.; Loddo, A.; Kebaili, A.; Lapuyade-Lahorgue, J.; Ruan, S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. J. Imaging 2023, 9, 81. [Google Scholar] [CrossRef]
  16. Chen, X.; Konukoglu, E. Unsupervised Detection of Lesions in Brain MRI Using Constrained Adversarial Auto-Encoders. arXiv 2018, arXiv:1806.04972. [Google Scholar]
  17. Pham, D.L.; Xu, C.; Prince, J.L. Current Methods in Medical Image Segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337. [Google Scholar] [CrossRef]
  18. Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J. Imaging 2020, 6, 52. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, H.; Gomez, C.; Huang, C.M.; Unberath, M. Explainable Medical Imaging AI Needs Human-Centered Design: Guidelines and Evidence from a Systematic Review. Npj Digit. Med. 2022, 5, 156. [Google Scholar] [CrossRef]
  20. Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the Devil in the Details: Delving Deep into Convolutional Nets. arXiv 2014, arXiv:1405.3531. [Google Scholar]
  21. Mahmood, A.; Patille, R.; Lam, E.; Mora, D.J.; Gurung, S.; Bookmyer, G.; Weldrick, R.; Chaudhury, H.; Canham, S.L. Correction: Mahmood et al. Aging in the Right Place for Older Adults Experiencing Housing Insecurity: An Environmental Assessment of Temporary Housing Program. Int. J. Environ. Res. Public Health 2023, 20, 6260. [Google Scholar] [CrossRef]
  22. Fathi Kazerooni, A.; Arif, S.; Madhogarhia, R.; Khalili, N.; Haldar, D.; Bagheri, S.; Familiar, A.M.; Anderson, H.; Haldar, S.; Tu, W.; et al. Automated Tumor Segmentation and Brain Tissue Extraction from Multiparametric MRI of Pediatric Brain Tumors: A Multi-Institutional Study. Neurooncol. Adv. 2023, 5, vdad027. [Google Scholar] [CrossRef]
  23. Belliveau, J.W.; Kennedy, D.N.; McKinstry, R.C.; Buchbinder, B.R.; Weisskoff, R.M.; Cohen, M.S.; Vevea, J.M.; Brady, T.J.; Rosen, B.R. Functional Mapping of the Human Visual Cortex by Magnetic Resonance Imaging. Science 1991, 254, 716–719. [Google Scholar] [CrossRef]
  24. Ogawa, S.; Tank, D.W.; Menon, R.; Ellermann, J.M.; Kim, S.G.; Merkle, H.; Ugurbil, K. Intrinsic Signal Changes Accompanying Sensory Stimulation: Functional Brain Mapping with Magnetic Resonance Imaging. Proc. Natl. Acad. Sci. USA 1992, 89, 5951–5955. [Google Scholar] [CrossRef]
  25. Sterman, M.B.; Friar, L. Suppression of Seizures in Epileptic Following on Sensorimotor EEG Feedback Training. Electroencephalogr. Clin. Neurophysiol. 1972, 33, 89–95. [Google Scholar] [CrossRef] [PubMed]
  26. Alarcon, G.; Garcia Seoane, J.J.; Binnie, C.D.; Martin Miguel, M.C.; Juler, J.; Polkey, C.E.; Elwes, R.D.C.; Ortiz Blasco, J.M. Origin and Propagation of Interictal Discharges in the Acute Electrocorticogram. Implications for Pathophysiology and Surgical Treatment of Temporal Lobe Epilepsy. Brain 1997, 120, 2259–2282. [Google Scholar] [CrossRef]
  27. Pan, R.; Yang, C.; Li, Z.; Ren, J.; Duan, Y. Magnetoencephalography-Based Approaches to Epilepsy Classification. Front. Neurosci. 2023, 17, 1183391. [Google Scholar] [CrossRef] [PubMed]
  28. Schupper, A.J.; Rao, M.; Mohammadi, N.; Baron, R.; Lee, J.Y.K.; Acerbi, F.; Hadjipanayis, C.G. Fluorescence-Guided Surgery: A Review on Timing and Use in Brain Tumor Surgery. Front. Neurol. 2021, 12, 682151. [Google Scholar] [CrossRef] [PubMed]
  29. Hassan, A.M.; Rajesh, A.; Asaad, M.; Nelson, J.A.; Coert, J.H.; Mehrara, B.J.; Butler, C.E. Artificial Intelligence and Machine Learning in Prediction of Surgical Complications: Current State, Applications, and Implications. Am. Surg. 2022, 89, 25. [Google Scholar] [CrossRef]
  30. Spyrantis, A.; Woebbecke, T.; Rueß, D.; Constantinescu, A.; Gierich, A.; Luyken, K.; Visser-Vandewalle, V.; Herrmann, E.; Gessler, F.; Czabanka, M.; et al. Accuracy of Robotic and Frame-Based Stereotactic Neurosurgery in a Phantom Model. Front. Neurorobot. 2022, 16, 762317. [Google Scholar] [CrossRef]
  31. Matsuzaki, K.; Kumatoriya, K.; Tando, M.; Kometani, T.; Shinohara, M. Polyphenols from Persimmon Fruit Attenuate Acetaldehyde-Induced DNA Double-Strand Breaks by Scavenging Acetaldehyde. Sci. Rep. 2022, 12, 10300. [Google Scholar] [CrossRef]
  32. Belkacem, A.N.; Jamil, N.; Khalid, S.; Alnajjar, F. On Closed-Loop Brain Stimulation Systems for Improving the Quality of Life of Patients with Neurological Disorders. Front. Hum. Neurosci. 2023, 17, 1085173. [Google Scholar] [CrossRef] [PubMed]
  33. Mokienko, O.A. Brain-Computer Interfaces with Intracortical Implants for Motor and Communication Functions Compensation: Review of Recent Developments. Mod. Technol. Med. 2024, 16, 78. [Google Scholar] [CrossRef] [PubMed]
  34. Vadhavekar, N.H.; Sabzvari, T.; Laguardia, S.; Sheik, T.; Prakash, V.; Gupta, A.; Umesh, I.D.; Singla, A.; Koradia, I.; Patiño, B.B.R.; et al. Advancements in Imaging and Neurosurgical Techniques for Brain Tumor Resection: A Comprehensive Review. Cureus 2024, 16, e72745. [Google Scholar] [CrossRef]
  35. Livanis, E.; Voultsos, P.; Vadikolias, K.; Pantazakos, P.; Tsaroucha, A. Understanding the Ethical Issues of Brain-Computer Interfaces (BCIs): A Blessing or the Beginning of a Dystopian Future? Cureus 2024, 16, e58243. [Google Scholar] [CrossRef] [PubMed]
  36. Iftikhar, M.; Saqib, M.; Zareen, M.; Mumtaz, H. Artificial Intelligence: Revolutionizing Robotic Surgery: Review. Ann. Med. Surg. 2024, 86, 5401. [Google Scholar] [CrossRef]
  37. Abu Mhanna, H.Y.; Omar, A.F.; Radzi, Y.M.; Oglat, A.A.; Akhdar, H.F.; Al Ewaidat, H.; Almahmoud, A.; Bani Yaseen, A.B.; Al Badarneh, L.; Alhamad, O.; et al. Systematic Review of Functional Magnetic Resonance Imaging (FMRI) Applications in the Preoperative Planning and Treatment Assessment of Brain Tumors. Heliyon 2025, 11, e42464. [Google Scholar] [CrossRef]
  38. Yue, W.; Zhang, H.; Zhou, J.; Li, G.; Tang, Z.; Sun, Z.; Cai, J.; Tian, N.; Gao, S.; Dong, J.; et al. Deep Learning-Based Automatic Segmentation for Size and Volumetric Measurement of Breast Cancer on Magnetic Resonance Imaging. Front. Oncol. 2022, 12, 984626. [Google Scholar] [CrossRef]
  39. Manakitsa, N.; Maraslidis, G.S.; Moysis, L.; Fragulis, G.F. A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision. Technologies 2024, 12, 15. [Google Scholar] [CrossRef]
  40. Agadi, K.; Dominari, A.; Tebha, S.S.; Mohammadi, A.; Zahid, S. Neurosurgical Management of Cerebrospinal Tumors in the Era of Artificial Intelligence: A Scoping Review. J. Korean Neurosurg. Soc. 2022, 66, 632. [Google Scholar] [CrossRef]
  41. Rayed, M.E.; Islam, S.M.S.; Niha, S.I.; Jim, J.R.; Kabir, M.M.; Mridha, M.F. Deep Learning for Medical Image Segmentation: State-of-the-Art Advancements and Challenges. Inform. Med. Unlocked 2024, 47, 101504. [Google Scholar] [CrossRef]
  42. Ranjbarzadeh, R.; Bagherian Kasgari, A.; Jafarzadeh Ghoushchi, S.; Anari, S.; Naseri, M.; Bendechache, M. Brain Tumor Segmentation Based on Deep Learning and an Attention Mechanism Using MRI Multi-Modalities Brain Images. Sci. Rep. 2021, 11, 10930. [Google Scholar] [CrossRef]
  43. Fariba, K.A.; Gupta, V. Deep Brain Stimulation. In Encyclopedia of Movement Disorders; Lang, A.E., Lozano, A.M., Eds.; Elsevier: Oxford, UK, 2010; Volume 1, pp. 277–282. [Google Scholar] [CrossRef]
  44. Chandra, V.; Hilliard, J.D.; Foote, K.D. Deep Brain Stimulation for the Treatment of Tremor. J. Neurol. Sci. 2022, 435, 120190. [Google Scholar] [CrossRef] [PubMed]
  45. Krüger, M.T.; Kurtev-Rittstieg, R.; Kägi, G.; Naseri, Y.; Hägele-Link, S.; Brugger, F. Evaluation of Automatic Segmentation of Thalamic Nuclei through Clinical Effects Using Directional Deep Brain Stimulation Leads: A Technical Note. Brain Sci. 2020, 10, 642. [Google Scholar] [CrossRef]
  46. Miller, K.J.; Fine, A.L. Decision Making in Stereotactic Epilepsy Surgery. Epilepsia 2022, 63, 2782. [Google Scholar] [CrossRef] [PubMed]
  47. Mirchi, N.; Warsi, N.M.; Zhang, F.; Wong, S.M.; Suresh, H.; Mithani, K.; Erdman, L.; Ibrahim, G.M. Decoding Intracranial EEG With Machine Learning: A Systematic Review. Front. Hum. Neurosci. 2022, 16, 913777. [Google Scholar] [CrossRef] [PubMed]
  48. Courtney, M.R.; Sinclair, B.; Neal, A.; Nicolo, J.P.; Kwan, P.; Law, M.; O’Brien, T.J.; Vivash, L. Automated Segmentation of Epilepsy Surgical Resection Cavities: Comparison of Four Methods to Manual Segmentation. Neuroimage 2024, 296, 120682. [Google Scholar] [CrossRef]
  49. Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognit. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
  50. Mienye, I.D.; Obaido, G.; Jere, N.; Mienye, E.; Aruleba, K.; Emmanuel, I.D.; Ogbuokiri, B. A Survey of Explainable Artificial Intelligence in Healthcare: Concepts, Applications, and Challenges. Inform. Med. Unlocked 2024, 51, 101587. [Google Scholar] [CrossRef]
  51. Liu, Y.; Yu, W.; Dillon, T. Regulatory Responses and Approval Status of Artificial Intelligence Medical Devices with a Focus on China. NPJ Digit. Med. 2024, 7, 255. [Google Scholar] [CrossRef]
  52. Khan, D.Z.; Valetopoulou, A.; Das, A.; Hanrahan, J.G.; Williams, S.C.; Bano, S.; Borg, A.; Dorward, N.L.; Barbarisi, S.; Culshaw, L.; et al. Artificial Intelligence Assisted Operative Anatomy Recognition in Endoscopic Pituitary Surgery. NPJ Digit. Med. 2024, 7, 314. [Google Scholar] [CrossRef]
  53. Nam, S.M.; Byun, Y.H.; Dho, Y.-S.; Park, C.-K. Envisioning the Future of the Neurosurgical Operating Room with the Concept of the Medical Metaverse. J. Korean Neurosurg. Soc. 2025, 68, 137–149. [Google Scholar] [CrossRef] [PubMed]
  54. Brockmeyer, P.; Wiechens, B.; Schliephake, H. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering 2023, 10, 501. [Google Scholar] [CrossRef] [PubMed]
  55. Tangsrivimol, J.A.; Schonfeld, E.; Zhang, M.; Veeravagu, A.; Smith, T.R.; Härtl, R.; Lawton, M.T.; El-Sherbini, A.H.; Prevedello, D.M.; Glicksberg, B.S.; et al. Artificial Intelligence in Neurosurgery: A State-of-the-Art Review from Past to Future. Diagnostics 2023, 13, 2429. [Google Scholar] [CrossRef] [PubMed]
  56. Cervera, M.A.; Soekadar, S.R.; Ushiba, J.; Millán, J.d.R.; Liu, M.; Birbaumer, N.; Garipelli, G. Brain–Computer Interfaces for Post-Stroke Motor Rehabilitation: A Meta-Analysis. Ann. Clin. Transl. Neurol. 2018, 5, 651–663. [Google Scholar] [CrossRef]
  57. Caiado, F.; Ukolov, A. The History, Current State and Future Possibilities of the Non-Invasive Brain Computer Interfaces. Med. Nov. Technol. Devices 2025, 25, 100353. [Google Scholar] [CrossRef]
  58. Brookes, M.J.; Leggett, J.; Rea, M.; Hill, R.M.; Holmes, N.; Boto, E.; Bowtell, R. Magnetoencephalography with Optically Pumped Magnetometers (OPM-MEG): The next Generation of Functional Neuroimaging. Trends Neurosci. 2022, 45, 621–634. [Google Scholar] [CrossRef]
  59. Acuña, K.; Sapahia, R.; Jiménez, I.N.; Antonietti, M.; Anzola, I.; Cruz, M.; García, M.T.; Krishnan, V.; Leveille, L.A.; Resch, M.D.; et al. Functional Near-Infrared Spectrometry as a Useful Diagnostic Tool for Understanding the Visual System: A Review. J. Clin. Med. 2024, 13, 282. [Google Scholar] [CrossRef]
  60. Coles, L.; Ventrella, D.; Carnicer-Lombarte, A.; Elmi, A.; Troughton, J.G.; Mariello, M.; El Hadwe, S.; Woodington, B.J.; Bacci, M.L.; Malliaras, G.G.; et al. Origami-Inspired Soft Fluidic Actuation for Minimally Invasive Large-Area Electrocorticography. Nat. Commun. 2024, 15, 6290. [Google Scholar] [CrossRef]
  61. Hong, J.W.; Yoon, C.; Jo, K.; Won, J.H.; Park, S. Recent Advances in Recording and Modulation Technologies for Next-Generation Neural Interfaces. IScience 2021, 24, 103550. [Google Scholar] [CrossRef]
  62. Islam, M.K.; Rastegarnia, A.; Sanei, S. Signal Artifacts and Techniques for Artifacts and Noise Removal. Intell. Syst. Ref. Libr. 2021, 192, 23–79. [Google Scholar] [CrossRef]
  63. Barnova, K.; Mikolasova, M.; Kahankova, R.V.; Jaros, R.; Kawala-Sterniuk, A.; Snasel, V.; Mirjalili, S.; Pelc, M.; Martinek, R. Implementation of Artificial Intelligence and Machine Learning-Based Methods in Brain-Computer Interaction. Comput. Biol. Med. 2023, 163, 107135. [Google Scholar] [CrossRef]
  64. Xu, Y.; Zhou, Y.; Sekula, P.; Ding, L. Machine Learning in Construction: From Shallow to Deep Learning. Dev. Built Environ. 2021, 6, 100045. [Google Scholar] [CrossRef]
  65. Chaudhary, U. Machine Learning with Brain Data. In Expanding Senses Using Neurotechnology; Springer: Berlin/Heidelberg, Germany, 2025; pp. 179–223. [Google Scholar] [CrossRef]
  66. Si-Mohammed, H.; Petit, J.; Jeunet, C.; Argelaguet, F.; Spindler, F.; Evain, A.; Roussel, N.; Casiez, G.; Lecuyer, A. Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Trans. Vis. Comput. Graph. 2020, 26, 1608–1621. [Google Scholar] [CrossRef]
  67. Kim, S.; Lee, S.; Kang, H.; Kim, S.; Ahn, M. P300 Brain-Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors 2021, 21, 5765. [Google Scholar] [CrossRef]
  68. Farwell, L.A.; Donchin, E. Talking off the Top of Your Head: Toward a Mental Prosthesis Utilizing Event-Related Brain Potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523. [Google Scholar] [CrossRef]
  69. McFarland, D.J.; Wolpaw, J.R. EEG-Based Brain-Computer Interfaces. Curr. Opin. Biomed. Eng. 2017, 4, 194–200. [Google Scholar] [CrossRef]
  70. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Tenkorang, P.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging Minds and Machines: The Recent Advances of Brain-Computer Interfaces in Neurological and Neurosurgical Applications. World Neurosurg. 2024, 189, 138–153. [Google Scholar] [CrossRef]
  71. Pfurtscheller, G.; Neuper, C. Motor Imagery Activates Primary Sensorimotor Area in Humans. Neurosci. Lett. 1997, 239, 65–68. [Google Scholar] [CrossRef]
  72. Saibene, A.; Caglioni, M.; Corchs, S.; Gasparini, F. EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review. Sensors 2023, 23, 2798. [Google Scholar] [CrossRef]
  73. Pan, J.; Chen, X.N.; Ban, N.; He, J.S.; Chen, J.; Huang, H. Advances in P300 Brain-Computer Interface Spellers: Toward Paradigm Design and Performance Evaluation. Front. Hum. Neurosci. 2022, 16, 1077717. [Google Scholar] [CrossRef]
  74. Norcia, A.M.; Gregory Appelbaum, L.; Ales, J.M.; Cottereau, B.R.; Rossion, B. The Steady-State Visual Evoked Potential in Vision Research: A Review. J. Vis. 2015, 15, 4. [Google Scholar] [CrossRef]
  75. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. arXiv 2013, arXiv:1312.5602. [Google Scholar]
  76. Haslacher, D.; Akmazoglu, T.B.; van Beinum, A.; Starke, G.; Buthut, M.; Soekadar, S.R. AI for Brain-Computer Interfaces. Dev. Neuroethics Bioeth. 2024, 7, 3–28. [Google Scholar] [CrossRef]
  77. Siribunyaphat, N.; Punsawad, Y. Steady-State Visual Evoked Potential-Based Brain—Computer Interface Using a Novel Visual Stimulus with Quick Response (QR) Code Pattern. Sensors 2022, 22, 1439. [Google Scholar] [CrossRef]
  78. Neuper, C.; Müller-Putz, G.R.; Scherer, R.; Pfurtscheller, G. Motor Imagery and EEG-Based Control of Spelling Devices and Neuroprostheses. Prog. Brain Res. 2006, 159, 393–409. [Google Scholar] [CrossRef]
  79. Branco, M.P.; Pels, E.G.M.; Sars, R.H.; Aarnoutse, E.J.; Ramsey, N.F.; Vansteensel, M.J.; Nijboer, F. Brain-Computer Interfaces for Communication: Preferences of Individuals With Locked-in Syndrome. Neurorehabil. Neural Repair 2021, 35, 267–279. [Google Scholar] [CrossRef]
  80. Adewole, D.O.; Serruya, M.D.; Harris, J.P.; Burrell, J.C.; Petrov, D.; Chen, H.I.; Wolf, J.A.; Cullen, D.K. The Evolution of Neuroprosthetic Interfaces. Crit. Rev. Biomed. Eng. 2016, 44, 123. [Google Scholar] [CrossRef]
  81. Collinger, J.L.; Gaunt, R.A.; Schwartz, A.B. Progress towards Restoring Upper Limb Movement and Sensation through Intracortical Brain-Computer Interfaces. Curr. Opin. Biomed. Eng. 2018, 8, 84–92. [Google Scholar] [CrossRef]
  82. Collinger, J.L.; Wodlinger, B.; Downey, J.E.; Wang, W.; Tyler-Kabara, E.C.; Weber, D.J.; McMorland, A.J.C.; Velliste, M.; Boninger, M.L.; Schwartz, A.B. High-Performance Neuroprosthetic Control by an Individual with Tetraplegia. Lancet 2013, 381, 557–564. [Google Scholar] [CrossRef] [PubMed]
  83. Hu, X.; Assaad, R.H. The Use of Unmanned Ground Vehicles (Mobile Robots) and Unmanned Aerial Vehicles (Drones) in the Civil Infrastructure Asset Management Sector: Applications, Robotic Platforms, Sensors, and Algorithms. Expert Syst. Appl. 2023, 232, 120897. [Google Scholar] [CrossRef]
  84. Flesher, S.N.; Downey, J.E.; Weiss, J.M.; Hughes, C.L.; Herrera, A.J.; Tyler-Kabara, E.C.; Boninger, M.L.; Collinger, J.L.; Gaunt, R.A. A Brain-Computer Interface That Evokes Tactile Sensations Improves Robotic Arm Control. Science 2021, 372, 831–836. [Google Scholar] [CrossRef] [PubMed]
  85. Karmakar, S.; Kamilya, S.; Dey, P.; Guhathakurta, P.K.; Dalui, M.; Bera, T.K.; Halder, S.; Koley, C.; Pal, T.; Basu, A. Real Time Detection of Cognitive Load Using FNIRS: A Deep Learning Approach. Biomed. Signal Process. Control. 2023, 80, 104227. [Google Scholar] [CrossRef]
  86. Mughal, N.E.; Khan, M.J.; Khalil, K.; Javed, K.; Sajid, H.; Naseer, N.; Ghafoor, U.; Hong, K.S. EEG-FNIRS-Based Hybrid Image Construction and Classification Using CNN-LSTM. Front. Neurorobot. 2022, 16, 873239. [Google Scholar] [CrossRef]
  87. Murphy, E.; Poudel, G.; Ganesan, S.; Suo, C.; Manning, V.; Beyer, E.; Clemente, A.; Moffat, B.A.; Zalesky, A.; Lorenzetti, V. Real-Time FMRI-Based Neurofeedback to Restore Brain Function in Substance Use Disorders: A Systematic Review of the Literature. Neurosci. Biobehav. Rev. 2024, 165, 105865. [Google Scholar] [CrossRef]
  88. Van Der Lande, G.J.M.; Casas-Torremocha, D.; Manasanch, A.; Dalla Porta, L.; Gosseries, O.; Alnagger, N.; Barra, A.; Mejías, J.F.; Panda, R.; Riefolo, F.; et al. Brain State Identification and Neuromodulation to Promote Recovery of Consciousness. Brain Commun. 2024, 6, fcae362. [Google Scholar] [CrossRef]
  89. Papadopoulos, S.; Bonaiuto, J.; Mattout, J. An Impending Paradigm Shift in Motor Imagery Based Brain-Computer Interfaces. Front. Neurosci. 2022, 15, 824759. [Google Scholar] [CrossRef]
  90. Zhang, Y.; Yagi, K.; Shibahara, Y.; Tate, L.; Tamura, H. A Study on Analysis Method for a Real-Time Neurofeedback System Using Non-Invasive Magnetoencephalography. Electronics 2022, 11, 2473. [Google Scholar] [CrossRef]
  91. Aghajani, H.; Omurtag, A. Assessment of Mental Workload by EEG+FNIRS. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2016, 2016, 3773–3776. [Google Scholar] [CrossRef]
  92. Warbrick, T. Simultaneous EEG-FMRI: What Have We Learned and What Does the Future Hold? Sensors 2022, 22, 2262. [Google Scholar] [CrossRef] [PubMed]
  93. Padmanabhan, P.; Nedumaran, A.M.; Mishra, S.; Pandarinathan, G.; Archunan, G.; Gulyás, B. The Advents of Hybrid Imaging Modalities: A New Era in Neuroimaging Applications. Adv. Biosyst. 2017, 1, 1700019. [Google Scholar] [CrossRef] [PubMed]
  94. Freudenburg, Z.V.; Branco, M.P.; Leinders, S.; van der Vijgh, B.H.; Pels, E.G.M.; Denison, T.; van den Berg, L.H.; Miller, K.J.; Aarnoutse, E.J.; Ramsey, N.F.; et al. Sensorimotor ECoG Signal Features for BCI Control: A Comparison Between People With Locked-In Syndrome and Able-Bodied Controls. Front. Neurosci. 2019, 13, 457334. [Google Scholar] [CrossRef]
  95. Zhao, Z.P.; Nie, C.; Jiang, C.T.; Cao, S.H.; Tian, K.X.; Yu, S.; Gu, J.W. Modulating Brain Activity with Invasive Brain—Computer Interface: A Narrative Review. Brain Sci. 2023, 13, 134. [Google Scholar] [CrossRef]
  96. Alahi, M.E.E.; Liu, Y.; Xu, Z.; Wang, H.; Wu, T.; Mukhopadhyay, S.C. Recent Advancement of Electrocorticography (ECoG) Electrodes for Chronic Neural Recording/Stimulation. Mater. Today Commun. 2021, 29, 102853. [Google Scholar] [CrossRef]
  97. Merk, T.; Peterson, V.; Köhler, R.; Haufe, S.; Richardson, R.M.; Neumann, W.J. Machine Learning Based Brain Signal Decoding for Intelligent Adaptive Deep Brain Stimulation. Exp. Neurol. 2022, 351, 113993. [Google Scholar] [CrossRef]
  98. Rudroff, T. Decoding Thoughts, Encoding Ethics: A Narrative Review of the BCI-AI Revolution. Brain Res. 2025, 1850, 149423. [Google Scholar] [CrossRef]
  99. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef]
  100. Zhang, H.; Jiao, L.; Yang, S.; Li, H.; Jiang, X.; Feng, J.; Zou, S.; Xu, Q.; Gu, J.; Wang, X.; et al. Brain-Computer Interfaces: The Innovative Key to Unlocking Neurological Conditions. Int. J. Surg. 2024, 110, 5745. [Google Scholar] [CrossRef]
  101. Merk, T.; Peterson, V.; Lipski, W.J.; Blankertz, B.; Turner, R.S.; Li, N.; Horn, A.; Richardson, R.M.; Neumann, W.J. Electrocorticography Is Superior to Subthalamic Local Field Potentials for Movement Decoding in Parkinson’s Disease. Elife 2022, 11, e75126. [Google Scholar] [CrossRef]
  102. Cao, T.D.; Truong-Huu, T.; Tran, H.; Tran, K. A Federated Deep Learning Framework for Privacy Preservation and Communication Efficiency. J. Syst. Archit. 2022, 124, 102413. [Google Scholar] [CrossRef]
  103. Lebedev, M.A.; Nicolelis, M.A.L. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol. Rev. 2017, 97, 767–837. [Google Scholar] [CrossRef]
  104. Fick, T.; Van Doormaal, J.A.M.; Tosic, L.; Van Zoest, R.J.; Meulstee, J.W.; Hoving, E.W.; Van Doormaal, T.P.C. Fully Automatic Brain Tumor Segmentation for 3D Evaluation in Augmented Reality. Neurosurg. Focus 2021, 51, E14. [Google Scholar] [CrossRef]
  105. Kazemzadeh, K.; Akhlaghdoust, M.; Zali, A. Advances in Artificial Intelligence, Robotics, Augmented and Virtual Reality in Neurosurgery. Front. Surg. 2023, 10, 1241923. [Google Scholar] [CrossRef]
  106. Zhou, T.; Yu, T.; Li, Z.; Zhou, X.; Wen, J.; Li, X. Functional Mapping of Language-Related Areas from Natural, Narrative Speech during Awake Craniotomy Surgery. Neuroimage 2021, 245, 118720. [Google Scholar] [CrossRef]
  107. Sarubbo, S.; Annicchiarico, L.; Corsini, F.; Zigiotto, L.; Herbet, G.; Moritz-Gasser, S.; Dalpiaz, C.; Vitali, L.; Tate, M.; De Benedictis, A.; et al. Planning Brain Tumor Resection Using a Probabilistic Atlas of Cortical and Subcortical Structures Critical for Functional Processing: A Proof of Concept. Oper. Neurosurg. 2021, 20, E175–E183. [Google Scholar] [CrossRef]
  108. Lachance, B.; Wang, Z.; Badjatia, N.; Jia, X. Somatosensory Evoked Potentials (SSEP) and Neuroprognostication after Cardiac Arrest. Neurocrit. Care 2020, 32, 847. [Google Scholar] [CrossRef]
  109. Nikolov, P.; Heil, V.; Hartmann, C.J.; Ivanov, N.; Slotty, P.J.; Vesper, J.; Schnitzler, A.; Groiss, S.J. Motor Evoked Potentials Improve Targeting in Deep Brain Stimulation Surgery. Neuromodulation 2022, 25, 888–894. [Google Scholar] [CrossRef]
  110. Esfandiari, H.; Troxler, P.; Hodel, S.; Suter, D.; Farshad, M.; Cavalcanti, N.; Wetzel, O.; Mania, S.; Cornaz, F.; Selman, F.; et al. Introducing a Brain-Computer Interface to Facilitate Intraoperative Medical Imaging Control—a Feasibility Study. BMC Musculoskelet Disord 2022, 23, 701. [Google Scholar] [CrossRef]
  111. Mridha, M.F.; Das, S.C.; Kabir, M.M.; Lima, A.A.; Islam, M.R.; Watanobe, Y. Brain-Computer Interface: Advancement and Challenges. Sensors 2021, 21, 5746. [Google Scholar] [CrossRef]
  112. Kim, M.S.; Park, H.; Kwon, I.; An, K.O.; Kim, H.; Park, G.; Hyung, W.; Im, C.H.; Shin, J.H. Efficacy of Brain-Computer Interface Training with Motor Imagery-Contingent Feedback in Improving Upper Limb Function and Neuroplasticity among Persons with Chronic Stroke: A Double-Blinded, Parallel-Group, Randomized Controlled Trial. J. Neuroeng. Rehabil. 2025, 22, 1. [Google Scholar] [CrossRef]
  113. Pignolo, L.; Servidio, R.; Basta, G.; Carozzo, S.; Tonin, P.; Calabrò, R.S.; Cerasa, A. The Route of Motor Recovery in Stroke Patients Driven by Exoskeleton-Robot-Assisted Therapy: A Path-Analysis. Med. Sci. 2021, 9, 64. [Google Scholar] [CrossRef]
  114. Yang, S.; Li, R.; Li, H.; Xu, K.; Shi, Y.; Wang, Q.; Yang, T.; Sun, X. Exploring the Use of Brain-Computer Interfaces in Stroke Neurorehabilitation. Biomed. Res. Int. 2021, 2021, 9967348. [Google Scholar] [CrossRef]
  115. Jin, W.; Zhu, X.X.; Qian, L.; Wu, C.; Yang, F.; Zhan, D.; Kang, Z.; Luo, K.; Meng, D.; Xu, G. Electroencephalogram-Based Adaptive Closed-Loop Brain-Computer Interface in Neurorehabilitation: A Review. Front. Comput. Neurosci. 2024, 18, 1431815. [Google Scholar] [CrossRef]
  116. Mane, R.; Wu, Z.; Wang, D. Poststroke Motor, Cognitive and Speech Rehabilitation with Brain-Computer Interface: A Perspective Review. Stroke Vasc. Neurol. 2022, 7, 541–549. [Google Scholar] [CrossRef]
  117. Zhang, X.; Ma, Z.; Zheng, H.; Li, T.; Chen, K.; Wang, X.; Liu, C.; Xu, L.; Wu, X.; Lin, D.; et al. The Combination of Brain-Computer Interfaces and Artificial Intelligence: Applications and Challenges. Ann. Transl. Med. 2020, 8, 712. [Google Scholar] [CrossRef]
  118. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef]
  119. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lect. Notes Comput. Sci. (Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform.) 2015, 9351, 234–241. [Google Scholar] [CrossRef]
  120. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef]
  121. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 3–7 May 2021. [Google Scholar] [CrossRef]
  122. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2022; Volume 12962, pp. 272–284. [Google Scholar] [CrossRef]
  123. Paulmurugan, K.; Vijayaragavan, V.; Ghosh, S.; Padmanabhan, P.; Gulyás, B. Brain-Computer Interfacing Using Functional Near-Infrared Spectroscopy (fNIRS). Biosensors 2021, 11, 389. [Google Scholar] [CrossRef]
  124. Peng, C.J.; Chen, Y.C.; Chen, C.C.; Chen, S.J.; Cagneau, B.; Chassagne, L. An EEG-Based Attentiveness Recognition System Using Hilbert-Huang Transform and Support Vector Machine. J. Med. Biol. Eng. 2020, 40, 230–238. [Google Scholar] [CrossRef]
  125. Rakhmatulin, I.; Dao, M.S.; Nassibi, A.; Mandic, D. Exploring Convolutional Neural Network Architectures for EEG Feature Extraction. Sensors 2024, 24, 877. [Google Scholar] [CrossRef]
  126. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef]
  127. Mungoli, N. Scalable, Distributed AI Frameworks: Leveraging Cloud Computing for Enhanced Deep Learning Performance and Efficiency. arXiv 2023, arXiv:2304.13738. [Google Scholar]
  128. Subasi, A. Practical Guide for Biomedical Signals Analysis Using Machine Learning Techniques: A MATLAB Based Approach; Academic Press: Cambridge, MA, USA, 2019; pp. 1–443. [Google Scholar] [CrossRef]
  129. Delorme, A.; Makeig, S. EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  130. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef]
  131. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep Learning for Sensor-Based Activity Recognition: A Survey. Pattern Recognit. Lett. 2019, 119, 3–11. [Google Scholar] [CrossRef]
  132. Hatamizadeh, A.; Tang, Y.; Yang, D.; Myronenko, A.; Xu, D. Swin UNETR++: Towards More Efficient and Accurate Medical Image Segmentation. In MICCAI BrainLes 2023, Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 14386, pp. 113–124. [Google Scholar] [CrossRef]
  133. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
  134. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9726–9735. [Google Scholar] [CrossRef]
  135. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  136. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  137. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19. [Google Scholar] [CrossRef]
  138. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y.; Gan, C.; Li, K.; Ho, J. Secure Federated Transfer Learning. arXiv 2018, arXiv:1812.03387. [Google Scholar] [CrossRef]
  139. Zoph, B.; Le, Q.V. Neural Architecture Search with Reinforcement Learning. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017—Conference Track Proceedings, Toulon, France, 24–26 April 2017. [Google Scholar]
  140. Chen, Z.; Jing, L.; Li, Y.; Li, B. Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models. Adv. Neural. Inf. Process. Syst. 2023, 36, 79226–79239. [Google Scholar]
  141. Edelman, B.J.; Zhang, S.; Schalk, G.; Brunner, P.; Muller-Putz, G.; Guan, C.; He, B. Non-Invasive Brain-Computer Interfaces: State of the Art and Trends. IEEE Rev. Biomed. Eng. 2025, 18, 26–49. [Google Scholar] [CrossRef] [PubMed]
  142. Simon, C.; Bolton, D.A.E.; Kennedy, N.C.; Soekadar, S.R.; Ruddy, K.L. Challenges and Opportunities for the Future of Brain-Computer Interface in Neurorehabilitation. Front. Neurosci. 2021, 15, 699428. [Google Scholar] [CrossRef]
  143. Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief Review of Image Denoising Techniques. Vis. Comput. Ind. Biomed. Art. 2019, 2, 7. [Google Scholar] [CrossRef] [PubMed]
  144. Chen, Y.; Wang, F.; Li, T.; Zhao, L.; Gong, A.; Nan, W.; Ding, P.; Fu, Y. Considerations and Discussions on the Clear Definition and Definite Scope of Brain-Computer Interfaces. Front. Neurosci. 2024, 18, 1449208. [Google Scholar] [CrossRef] [PubMed]
  145. Peng, W.; Wang, Y.; Liu, Z.; Zhong, L.; Wen, X.; Wang, P.; Gong, X.; Liu, H. The Application of Brain–Computer Interface in Upper Limb Dysfunction after Stroke: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. Front. Hum. Neurosci. 2024, 18, 1438095. [Google Scholar] [CrossRef]
  146. Rajpura, P.; Cecotti, H.; Kumar Meena, Y. Explainable Artificial Intelligence Approaches for Brain-Computer Interfaces: A Review and Design Space. J. Neural Eng. 2023, 21, 4. [Google Scholar] [CrossRef]
  147. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A Comprehensive Review of EEG-Based Brain–Computer Interface Paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef]
  148. Salles, A.; Farisco, M. Neuroethics and AI Ethics: A Proposal for Collaboration. BMC Neurosci. 2024, 25, 41. [Google Scholar] [CrossRef]
  149. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain-Computer Interfaces for Communication and Rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525. [Google Scholar] [CrossRef]
  150. Sun, X.-Y.; Ye, B. The Functional Differentiation of Brain-Computer Interfaces (BCIs) and Its Ethical Implications. Humanit. Soc. Sci. Commun. 2023, 10, 878. [Google Scholar] [CrossRef]
  151. Keskinbora, K.H.; Keskinbora, K. Ethical Considerations on Novel Neuronal Interfaces. Neurol. Sci. 2018, 39, 607–613. [Google Scholar] [CrossRef]
  152. Vlek, R.J.; Steines, D.; Szibbo, D.; Kübler, A.; Schneider, M.J.; Haselager, P.; Nijboer, F. Ethical Issues in Brain-Computer Interface Research, Development, and Dissemination. J. Neurol. Phys. Ther. 2012, 36, 94–99. [Google Scholar] [CrossRef] [PubMed]
  153. McIntyre, C.C.; Hahn, P.J. Network Perspectives on the Mechanisms of Deep Brain Stimulation. Neurobiol. Dis. 2010, 38, 329–337. [Google Scholar] [CrossRef]
  154. Borton, D.A.; Yin, M.; Aceros, J.; Nurmikko, A. An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates. J. Neural Eng. 2013, 10, 026010. [Google Scholar] [CrossRef] [PubMed]
  155. Fernandez-Leon, J.A.; Parajuli, A.; Franklin, R.; Sorenson, M.; Felleman, D.J.; Hansen, B.J.; Hu, M.; Dragoi, V. A Wireless Transmission Neural Interface System for Unconstrained Non-Human Primates. J. Neural Eng. 2015, 12, 056005. [Google Scholar] [CrossRef]
  156. Yin, M.; Borton, D.A.; Aceros, J.; Patterson, W.R.; Nurmikko, A.V. A 100-Channel Hermetically Sealed Implantable Device for Chronic Wireless Neurosensing Applications. IEEE Trans. Biomed. Circuits Syst. 2013, 7, 115–128. [Google Scholar] [CrossRef]
  157. Gao, Y.; Jiang, Y.; Peng, Y.; Yuan, F.; Zhang, X.; Wang, J. Medical Image Segmentation: A Comprehensive Review of Deep Learning-Based Methods. Tomography 2025, 11, 52. [Google Scholar] [CrossRef]
  158. Ienca, M.; Andorno, R. Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sci. Soc. Policy 2017, 13, 5. [Google Scholar] [CrossRef]
  159. Yao, D.; Koivu, A.; Simonyan, K. Applications of Artificial Intelligence in Neurological Voice Disorders. World J. Otorhinolaryngol. Head Neck Surg. 2025, 9, 100017. [Google Scholar] [CrossRef]
  160. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training Data-Efficient Image Transformers & Distillation Through Attention. In Proceedings of the 38th International Conference on Machine Learning (ICML), Online, 18–24 July 2021; Volume 139, pp. 10347–10357. [Google Scholar]
  161. Wodlinger, B.; Downey, J.E.; Tyler-Kabara, E.C.; Schwartz, A.B.; Boninger, M.L.; Collinger, J.L. Ten-Dimensional Anthropomorphic Arm Control in a Human Brain–Machine Interface: Difficulties, Solutions, and Limitations. J. Neural Eng. 2015, 12, 016011. [Google Scholar] [CrossRef] [PubMed]
  162. Andrews, M.; Di Ieva, A. Artificial intelligence for brain neuroanatomical segmentation in MRI: A literature review. J. Clin. Neurosci. 2025, 134, 111073. [Google Scholar] [CrossRef]
  163. Lee, M.; Kim, J.H.; Choi, W.; Lee, K.H. AI-assisted Segmentation Tool for Brain Tumor MR Image Analysis. J. Imaging Inform. Med. 2024, 38, 74–83. [Google Scholar] [CrossRef] [PubMed]
  164. Kim, H.; Monroe, J.I.; Lo, S.; Yao, M.; Harari, P.M.; Machtay, M.; Sohn, J.W. Quantitative evaluation of image segmentation incorporating medical consideration functions. Med. Phys. 2015, 42, 3013–3023. [Google Scholar] [CrossRef] [PubMed]
  165. Shen, D.; Wu, G.; Suk, H.-I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef]
  166. Yousef, R.; Fahmy, H.; Abdelsamea, M.M.; Hamed, M.; Kim, J. U-Net-Based Models for Optimal MR Brain Image Segmentation. Diagnostics 2023, 13, 1624. [Google Scholar] [CrossRef]
  167. Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. TransUNet: Rethinking the U-Net Architecture Design for Medical Image Segmentation Through the Lens of Transformers. Med. Image Anal. 2024, 97, 103280. [Google Scholar] [CrossRef]
  168. Pang, H.; Guo, W.; Ye, C. Multi-Modal Brain MRI Synthesis Based on SwinUNETR. arXiv 2025, arXiv:2506.02467. [Google Scholar] [CrossRef]
  169. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  170. Zhang, W.; Wu, Y.; Yang, B.; Hu, S.; Wu, L.; Dhelim, S. Overview of Multi-Modal Brain Tumor MR Image Segmentation. Healthcare 2021, 9, 1051. [Google Scholar] [CrossRef]
  171. Yousef, R.; Khan, S.; Gupta, G.; Siddiqui, T.; Albahlal, B.M.; Alajlan, S.A.; Haq, M.A.; Ali, A. Bridged-U-Net-ASPP-EVO and Deep Learning Optimization for Brain Tumor Segmentation. Diagnostics 2023, 13, 2633. [Google Scholar] [CrossRef] [PubMed]
  172. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the MICCAI 2016, Athens, Greece, 17–21 October 2016; pp. 424–432. [Google Scholar] [CrossRef]
  173. Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. 3D TransUNet: Advancing Medical Image Segmentation through Vision Transformers. arXiv 2023, arXiv:2310.07781. [Google Scholar] [CrossRef]
  174. Tang, D.; Chen, J.; Ren, L.; Wang, X.; Li, D.; Zhang, H. Reviewing CAM-Based Deep Explainable Methods in Healthcare. Appl. Sci. 2024, 14, 4124. [Google Scholar] [CrossRef]
  175. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
  176. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 24–25. [Google Scholar] [CrossRef]
  177. Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
  178. Larrazabal, A.J.; Nieto, N.; Peterson, V.; Milone, E.H. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl. Acad. Sci. USA 2020, 117, 12592–12594. [Google Scholar] [CrossRef]
  179. Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296. [Google Scholar] [CrossRef]
  180. European Commission. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). COM/2021/206 final. Brussels. 21 April 2021. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 10 March 2025).
  181. Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 051001. [Google Scholar] [CrossRef]
  182. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
Figure 1. PRISMA 2020 flow diagram depicting the identification, screening, eligibility, and inclusion phases for studies reviewed in this paper.
Figure 2. Two sets of four MRI modalities with Z-score normalization. (Reproduced from [42], Springer Nature, 2021).
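The Z-score normalization shown in Figure 2 standardizes each MRI modality to zero mean and unit variance before segmentation, so intensity scales are comparable across sequences. A minimal sketch, assuming NumPy arrays; the `zscore_normalize` helper, the optional brain mask, and the toy 2×2 "volume" are illustrative, not taken from the reviewed pipelines:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score normalize an MRI volume: subtract the mean, divide by the std.
    If a brain mask is given, statistics are computed over masked voxels only,
    which avoids background voxels dominating the estimate."""
    data = volume[mask] if mask is not None else volume
    mu, sigma = data.mean(), data.std()
    return (volume - mu) / sigma

# Toy array standing in for one MRI modality (e.g., T1 or FLAIR)
vol = np.array([[1.0, 2.0], [3.0, 4.0]])
norm = zscore_normalize(vol)
print(norm.mean(), norm.std())  # ≈ 0.0 and 1.0
```

In practice each modality (T1, T1c, T2, FLAIR) is normalized independently, since their intensity distributions differ.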
Figure 3. Clinical testing outcomes, including side-effect thresholds and volumetric tissue activation (VTA) models for the right lead. The figure displays stimulation in anterior (AA″), lateral (BB″), and posterior (CC″) orientations, where subpanels (A,B,C) indicate the corresponding thresholds for facial (VPM, yellow), (A′,B′,C′) denote hand (VPL, green), and (A″,B″,C″) represent internal-capsule (dysarthria, red) stimulation. White arrows indicate segmentation borders that match VTA models and clinical effects; black arrows highlight mismatches. (Reproduced from [45], MDPI, 2020).
Figure 4. Automated segmentation of epilepsy resection cavities across four patient samples using five computational methods compared against original MRI. Columns show, from left to right: T1 MPRAGE baseline, manual segmentation, and results from Epic-CHOP (A), ResectVol (B), Deep Resection (C), and Resseg (D) algorithms. Rows 1–4 correspond to four different patients. Colored overlays indicate segmented resection zones: pink (Epic-CHOP), purple (ResectVol), yellow (Deep Resection), and blue (Resseg). (Reproduced from [48], Elsevier, 2024).
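The automated resection-cavity segmentations in Figure 4 are conventionally scored against the manual delineation with the Dice similarity coefficient (DSC), the same metric behind the >0.91 glioma figures cited in this review: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch; the binary masks and function name are illustrative:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks are defined here as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(dice_coefficient(a, b))  # 2*2/(3+3) ≈ 0.667
```

Because DSC weights overlap against total mask size, it penalizes both over- and under-segmentation, which is why it dominates reporting in the BraTS benchmark literature [118].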
Figure 5. Hybrid system architecture integrating EEG-based cognitive state decoding with AI-driven brain image segmentation for adaptive surgical decision support. (Reproduced from [123], MDPI, 2021).
Table 1. Comparative characteristics of commonly used neuroimaging modalities in neurosurgical applications.
| Modality | Spatial Resolution | Temporal Resolution | Invasiveness | Clinical Utility |
|---|---|---|---|---|
| EEG | Low (~10–30 mm) | High (~1 ms) | Non-invasive | Real-time monitoring, BCI |
| fNIRS | Moderate (~10 mm) | Moderate (~100 ms) | Non-invasive | Hemodynamic response analysis |
| fMRI | High (~1–2 mm³) | Low (~2–3 s) | Non-invasive | Functional and anatomical mapping |
| CT | Very high (~0.5–1 mm) | None (static) | Non-invasive | Structural imaging, intraoperative guidance |
| PET | Low (~4–6 mm) | Very low (~minutes) | Semi-invasive | Metabolic imaging, tumor detection |
Note: values are approximate and may vary by equipment and imaging protocol.
Table 2. Comparative summary of BCI signal paradigms and modalities.
| Paradigm/Modality | Signal Source | Type | Temporal Resolution | Spatial Resolution | Invasiveness | Training Required | Clinical Applications | Notable Limitations |
|---|---|---|---|---|---|---|---|---|
| Motor Imagery (MI) | EEG | Endogenous | ~300 ms–1 s | Low (cm-level) | Non-invasive | High (weeks) | Neuroprosthetics, robotic control | Long training, high variability |
| P300 ERP | EEG | Exogenous (event-based) | ~300 ms | Low–moderate | Non-invasive | Low | Communication interfaces (e.g., spellers) | Slower ITR, stimulus dependency |
| SSVEP | EEG | Exogenous (frequency-coded) | ~100–200 ms | Low | Non-invasive | Very low | High-speed selection (spellers, AR) | Requires sustained gaze, limited in visually impaired |
| fNIRS | Hemodynamic | Exogenous (oxy-Hb response) | ~2–5 s | Moderate (1–3 cm) | Non-invasive | Low–moderate | Cognitive load detection, BCI-fNIRS hybrids | Poor temporal resolution |
| ECoG | Cortical surface | Endogenous | ~50–100 ms | High (mm-level) | Minimally invasive | Moderate | Seizure mapping, high-resolution BCIs | Surgical access required |
| LFP | Deep brain regions | Endogenous | ~10–50 ms | Very high (sub-mm) | Invasive | Moderate | Parkinson’s, closed-loop DBS systems | Deep implantation risk |
| EEG-fNIRS Hybrid | EEG + fNIRS | Multimodal | ~200 ms–5 s | Improved over single-modality | Non-invasive | Moderate | Enhanced classification, error detection | Signal fusion complexity |
| EEG-fMRI Hybrid | EEG + fMRI | Multimodal | EEG: ~ms; fMRI: ~2 s | Very high (fMRI) | Non-invasive | High | Cognitive neuroscience, task mapping | Infrastructure, synchronization issues |
| Invasive Hybrid (e.g., ECoG + LFP) | Cortical + subcortical | Multimodal | ~10–100 ms | Ultra-high | Highly invasive | Moderate | Precision neuroprosthetics | Ethical and surgical constraints |
Performance ranges are based on representative data from Section 3.2 and Section 3.3; values vary by hardware, patient condition, and use-case domain.
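The information transfer rates (ITRs) compared across these paradigms, including the >22.5 bits/min figure for SSVEP systems cited in this review, are conventionally computed with Wolpaw's formula: bits per selection = log₂N + P·log₂P + (1−P)·log₂[(1−P)/(N−1)], scaled by the selection rate. A minimal sketch (the 40-target, 90%-accuracy example is illustrative, not drawn from any reviewed study):

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/min using Wolpaw's formula.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    if not 0.0 < accuracy <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    if accuracy == 1.0:
        bits = math.log2(n_classes)  # the entropy terms vanish at P = 1
    else:
        p = accuracy
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * selections_per_min

# Illustrative: a 40-target SSVEP speller, 90% accuracy, 12 selections/min
print(round(wolpaw_itr(40, 0.90, 12.0), 1))
```

Note that Wolpaw's formula assumes equiprobable targets and uniformly distributed errors, so it tends to overestimate the throughput users achieve in practice; reported ITRs also depend on how selection time (including gaze shifts and rest periods) is counted.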
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
