Article

Leveraging Prior Knowledge in a Hybrid Network for Multimodal Brain Tumor Segmentation

by
Gangyi Zhou
,
Xiaowei Li
,
Hongran Zeng
,
Chongyang Zhang
,
Guohang Wu
and
Wuxiang Zhao
*
College of Electronics and Information Engineering, Sichuan University, Chengdu 610017, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4740; https://doi.org/10.3390/s25154740
Submission received: 18 March 2025 / Revised: 23 April 2025 / Accepted: 28 April 2025 / Published: 1 August 2025
(This article belongs to the Section Biomedical Sensors)

Abstract

Recent advancements in deep learning have significantly enhanced brain tumor segmentation from MRI data, providing valuable support for clinical diagnosis and treatment planning. However, challenges persist in effectively integrating prior medical knowledge, capturing global multimodal features, and accurately delineating tumor boundaries. To address these challenges, the Hybrid Network for Multimodal Brain Tumor Segmentation (HN-MBTS) is proposed, which incorporates prior medical knowledge to refine feature extraction and boundary precision. Key innovations include the Two-Branch, Two-Model Attention (TB-TMA) module for efficient multimodal feature fusion, the Linear Attention Mamba (LAM) module for robust multi-scale feature modeling, and the Residual Attention (RA) module for enhanced boundary refinement. Experimental results demonstrate that this method significantly outperforms existing approaches. On the BraTS2020 and BraTS2023 datasets, the method achieved average Dice scores of 87.66% and 88.07%, respectively. These results confirm the superior segmentation accuracy and efficiency of the approach, highlighting its potential to provide valuable assistance in clinical settings.

1. Introduction

Brain tumors, abnormal tissue growths within the cranial cavity, can be either benign or malignant. They may originate from brain cells or metastasize from cancer cells in other parts of the body [1]. The pathogenesis of brain tumors is complex, and their growth can increase intracranial pressure, causing symptoms such as headaches, nausea, and vomiting. Additionally, they can affect brain regions responsible for cognitive, sensory, and motor functions, leading to cognitive impairment, sensory loss, and motor dysfunction. These characteristics make brain tumors a significant health concern. Although brain tumors are not the most common type of cancer globally, their unique anatomical location and complex biological characteristics often result in a marked reduction in patient survival rates, increased treatment difficulty, and substantial impacts on quality of life [2].
Magnetic Resonance Imaging (MRI) [3] is a cutting-edge medical imaging technology widely used in brain tumor diagnosis due to its exceptional soft tissue contrast resolution. MRI provides detailed visualizations of tumor size and shape, as well as relationships between the tumor and surrounding brain tissues, aiding physicians in assessing tumor nature and location and offering precise navigational information for preoperative planning. MRI typically includes four modalities [4]: T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and T2-weighted Fluid-Attenuated Inversion Recovery (FLAIR) images, as shown in Figure 1. However, the tumor regions in these multimodal images often exhibit complex morphological and structural features, particularly in cases with small lesions and indistinct boundaries, posing challenges for segmentation accuracy.
In recent years, with the advancement of artificial intelligence, particularly deep learning, significant progress has been made in brain tumor segmentation. Deep learning models such as U-Net [5] have demonstrated remarkable capability in medical image analysis, accurately segmenting brain tumors’ location and shape from complex MRI data. This technology plays a crucial role in assisting clinicians with diagnosis and treatment planning. Brain tumor segmentation not only enhances diagnostic precision but also significantly improves workflow efficiency. Automating image processing alleviates the workload on physicians, allowing them to dedicate more time to patient care and decision making.
Despite these advancements, existing MRI-based brain tumor segmentation methods face numerous challenges. Firstly, many current approaches fail to fully leverage prior medical knowledge, limiting segmentation accuracy. Incorporating and utilizing such prior knowledge can effectively enhance segmentation precision. Secondly, in multimodal medical segmentation tasks, existing methods often struggle to capture global information from multimodal features, making it difficult to fully integrate the rich information across different modalities. Additionally, many algorithms lack the ability to accurately capture fine details of tumor boundaries, especially when dealing with blurred edges or small lesions, resulting in less refined segmentation outcomes.
To tackle the challenges of multimodal brain tumor segmentation, this paper proposes a novel approach named Hybrid Network for Multimodal Brain Tumor Segmentation (HN-MBTS). This method leverages prior medical knowledge to enhance global feature extraction from multimodal data and improve the precision of boundary detail segmentation. The main contributions of this work are outlined as follows:
  • A Two-Branch, Two-Model Attention (TB-TMA) module is introduced, utilizing prior medical knowledge to reorganize multimodal inputs based on clinical relevance. Single-modality features and cross-modality information are effectively modeled and fused through independent branches and cross-attention mechanisms, significantly enhancing cross-modal feature representation.
  • A Linear Attention Mamba (LAM) module is proposed, incorporating a linear attention mechanism to improve the computational efficiency of multi-scale feature modeling. This module is designed to enhance the network’s adaptability to large-scale multimodal data, ensuring effective learning of complex features.
  • A Residual Attention (RA) module is developed to capture fine details in segmentation boundary regions. Multiple attention units are stacked, and a residual structure is utilized to further improve segmentation accuracy, providing precise feature extraction for boundary refinement.

2. Related Work

2.1. CNN-Based Medical Brain Tumor Segmentation

CNN-based models have demonstrated impressive performance in brain tumor segmentation [6]. Due to computational and memory constraints, early CNN-based methods [7,8] focused on segmenting 2D MRI slices. Subsequently, a growing number of three-dimensional brain tumor segmentation (BTS) models have been proposed. 3D U-Net [9], a fully automated organ segmentation model based on 3D convolutional neural networks, employs 3D convolutions to process 3D images, preserving spatial information and enhancing segmentation accuracy and robustness. The model integrates features from both encoder and decoder structures, following the U-Net architecture. V-Net [10] extends the 3D convolution framework by incorporating the U-Net structure, offering a network architecture for 3D image segmentation. It also introduces the Dice coefficient loss function to address the class imbalance problem in segmentation samples. Ramin et al. [11] proposed an automatic and robust brain tumor segmentation framework using an optimized convolutional neural network (CNN). The weights and biases of the network are fine-tuned through the Improved Chimp Optimization Algorithm (IChOA) to handle multimodal data. AD-Net [12] introduces an automatic weighted dilated convolutional network that learns multimodal brain tumor features through channel-wise feature separation. A novel method proposed in [13] utilizes two sub-networks within the projected cascaded convolutional neural network framework: the Tumor Localization Network (TLN) and the LSIS-based Intra-tumor Segmentation Network (LITSN), effectively segmenting tumors with high-grade gliomas from 3D MRI data.
CNN-based brain tumor segmentation models have made significant strides in the field of medical image segmentation, evolving from early 2D segmentation models to more sophisticated 3D convolutional neural networks. These models have progressively improved segmentation accuracy and robustness by integrating various network architectures and optimization techniques. However, despite their effectiveness in handling single-modality data, fully leveraging multimodal information remains an open challenge.

2.2. Transformer-Based Medical Brain Tumor Segmentation

An increasing number of medical image segmentation studies have explored network architectures based on Vision Transformers (ViTs). UNETR [14] employs a Transformer in the encoder to extract features from 3D brain tumor images, connecting features at different resolutions to the decoder via skip connections, thereby capturing global contextual information across multiple scales. Similarly, Swin UNETR [15] replaces standard convolutions in the encoder with the Swin Transformer [16], leveraging its hierarchical structure. Li et al. [17] introduced a hybrid approach by integrating a Transformer at the bottleneck of an encoder–decoder architecture to combine the advantages of both 3D convolutions and Transformers. Peiris et al. [18] proposed a volumetric Transformer for 3D tumor segmentation, utilizing the self-attention mechanism of Transformers in the encoder to capture both local and global features. The decoder incorporates self-attention and cross-attention mechanisms to capture fine-grained features.
The Transformer architecture is becoming a research focal point in the field of medical image segmentation, demonstrating significant potential in 3D brain tumor segmentation due to its powerful global feature extraction capabilities. However, existing Transformer-based methods still face challenges in capturing fine details and effectively integrating multimodal data.

2.3. Multimodal Medical Segmentation Methods

Zhang [19] proposed the Multimodal Contrastive Domain Sharing (Multi-ConDoS) GAN network to achieve effective multimodal, contrastive, self-supervised medical image segmentation. This approach leverages multimodal medical images to learn more comprehensive object features through multimodal contrastive learning. To address the fusion of Positron Emission Tomography (PET) and Computed Tomography (CT) data, Marinov [20] introduced Mirror U-Net, which decomposes multimodal representations into modality-specific decoder branches and an auxiliary multimodal decoder, replacing traditional fusion methods with multimodal fission. Pandey [21] presented a multimodal medical segmentation method combining YOLOv8 and SAM, enabling the segmentation of regions of interest (ROIs) across various medical imaging datasets, including ultrasound readings, CT scans, and X-ray images. Andrade [22] described a unified framework for various Transformer-based architectures, exploring the performance differences between single-path and multi-path encoders and examining the impact of multi-stream interaction on multimodal Transformers.
Multimodal medical segmentation methods aim to enhance segmentation performance by integrating information from different modalities. However, current approaches still face limitations in capturing both global information and fine details of multimodal features. The hybrid network proposed in this paper leverages prior knowledge to organically combine information from different modalities, guiding the model to achieve more precise and detailed brain tumor segmentation.

3. Method

This paper introduces HN-MBTS (Hybrid Network for Multimodal Brain Tumor Segmentation), a hybrid network designed to efficiently process multimodal brain tumor data and achieve precise segmentation. HN-MBTS is built on an encoder–decoder structure and incorporates three core innovative modules: two-branch, two-model attention (TB-TMA), linear attention Mamba (LAM), and residual attention (RA). The architecture is illustrated in Figure 2.
First, the TB-TMA module leverages prior medical knowledge to reorganize multimodal inputs into two clinically relevant groups. Each group is processed through an independent branch, where a self-attention module models long-range dependencies and extracts local features for single-mode data. A cross-attention module then integrates information across modalities, effectively capturing inter-modality correlations and significantly enhancing cross-modality feature representation. Next, features are passed to the LAM module, which introduces a linear attention mechanism to improve the computational efficiency of multi-scale feature modeling. This module efficiently models multimodal data by incorporating prior knowledge, enhancing the network’s adaptability to large-scale data while maintaining robust learning capabilities for complex multimodal features. Finally, the RA module combines prior medical knowledge with multiple stacked attention units to further enhance boundary detail capture. The RA module flexibly captures discriminative features across different scales and dimensions. Particularly in the boundary regions, the residual structure improves segmentation precision by effectively modeling detailed features.
In the decoding phase, HN-MBTS progressively restores spatial resolution and aggregates multi-scale features from the encoding phase through attention mechanisms. This leads to optimized feature fusion and ultimately generates high-precision brain tumor segmentation results. This network design, guided by prior medical knowledge, integrates efficient multimodal feature modeling with precise boundary capture, significantly improving performance in brain tumor segmentation tasks.

3.1. TB-TMA Module

In multimodal brain tumor segmentation tasks, different modalities often contain complementary information. However, directly fusing multimodal data can introduce redundancy and noise, adversely affecting segmentation accuracy. Thus, effectively extracting and integrating multimodal features becomes a critical challenge. Inspired by radiologists’ approach to evaluating brain tumor MRI images, this paper leverages prior knowledge to enable the model to learn spatial and structural associations between related sequences. The assessment of brain tumors typically relies on the combined interpretation of different modalities. For instance, T1-weighted (T1) and T1-weighted with contrast enhancement (T1Gd) imaging are used to assess blood–brain barrier disruption and define the tumor core, whereas T2-weighted (T2) and T2 FLAIR (fluid-attenuated inversion recovery) imaging are employed to detect free water and distinguish tumor necrosis from vasogenic edema.
Based on this clinical knowledge, the four image modalities, $X_{\mathrm{T1}}$, $X_{\mathrm{T1Gd}}$, $X_{\mathrm{T2}}$, and $X_{\mathrm{T2FLAIR}}$, are reorganized into two groups of related modalities: (1) $X_{\mathrm{T1}}$ and $X_{\mathrm{T1Gd}}$, and (2) $X_{\mathrm{T2}}$ and $X_{\mathrm{T2FLAIR}}$. Each group is processed independently within the model.
Unlike existing brain tumor segmentation models that concatenate all input modalities and feed them into the model simultaneously, this reorganization allows the model to better capture the correlations between modalities, leading to more accurate tumor region segmentation. The formulation is expressed as follows:
$$S = f_{\mathrm{ours}}\big(\theta, (\{X_{\mathrm{T1}}, X_{\mathrm{T1Gd}}\}, \{X_{\mathrm{T2}}, X_{\mathrm{T2FLAIR}}\})\big)$$
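To make the regrouping concrete, the following minimal PyTorch sketch splits a four-channel BraTS input into the two clinically related pairs described above. The channel ordering (T1, T1Gd, T2, T2-FLAIR) and the crop size are illustrative assumptions, not details specified in the paper.

```python
import torch

# Hypothetical sketch of the modality regrouping described above.
# x is a batch of stacked MRI volumes with the four modalities in the channel
# dimension, assumed ordered (T1, T1Gd, T2, T2-FLAIR); shape (B, 4, D, H, W).
def group_modalities(x: torch.Tensor):
    """Split the four-channel input into the two clinically related pairs."""
    t1_branch = x[:, 0:2]   # {T1, T1Gd}: blood-brain barrier / tumor core
    t2_branch = x[:, 2:4]   # {T2, FLAIR}: free water / edema vs. necrosis
    return t1_branch, t2_branch

# Example on a dummy BraTS-sized crop
x = torch.randn(1, 4, 128, 128, 128)
g1, g2 = group_modalities(x)
print(g1.shape, g2.shape)   # torch.Size([1, 2, 128, 128, 128]) twice
```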
To fully exploit the complementary information between different modalities in multimodal brain tumor segmentation tasks, this paper introduces the two-branch, two-model attention (TB-TMA) module, as illustrated in Figure 3. The design of the TB-TMA module is inspired by clinical knowledge, particularly the combined interpretation of different modalities in radiology. By dividing the input data into two independent branches and extracting features for each modality separately, TB-TMA effectively captures the relationships and complementarities between modalities. The multimodal brain tumor images are divided into two branches, which first pass through a conv block. The conv block is a convolutional module designed for the initial extraction of features from the input images, as shown in Figure 4. Its primary function is to transform the raw input data into feature representations more suitable for subsequent extraction and processing.
The TB-TMA module consists of two main components: the self-attentional model and the cross-attentional model. The self-attentional model focuses on modeling long-range dependencies and extracting local features within each modality independently. In contrast, the cross-attentional model leverages cross-modal information fusion to enhance the representation of features by capturing the correlations between different modalities. This dual attention mechanism ensures that the model can effectively utilize the complementary information provided by the multimodal data, leading to more accurate segmentation of brain tumor regions.

3.1.1. Self-Attentional Model

The self-attentional model is designed to independently extract modality-specific features within each branch. It begins with deep convolution operations on the input images, transforming them into one-dimensional representations. Subsequently, a multi-head self-attention (MSA) mechanism [23] is employed to capture global features by computing self-attention across different positions. These features are then normalized using Layer Normalization (L-Norm). Finally, local features are further extracted using the Fused-MBConv module [24] to enhance the feature representation capabilities. The process is described by the following formulas:
$$F_{\mathrm{T1}}^{l} = \mathrm{MSA}\big(\mathrm{LN}(F_{\mathrm{T1}}^{l-1})\big) + F_{\mathrm{T1}}^{l-1}, \qquad F_{\mathrm{T1}}^{l+1} = \mathrm{Fused\text{-}MBConv}\big(\mathrm{LN}(F_{\mathrm{T1}}^{l})\big) + F_{\mathrm{T1}}^{l}$$
$$F_{\mathrm{T1Gd}}^{l} = \mathrm{MSA}\big(\mathrm{LN}(F_{\mathrm{T1Gd}}^{l-1})\big) + F_{\mathrm{T1Gd}}^{l-1}, \qquad F_{\mathrm{T1Gd}}^{l+1} = \mathrm{Fused\text{-}MBConv}\big(\mathrm{LN}(F_{\mathrm{T1Gd}}^{l})\big) + F_{\mathrm{T1Gd}}^{l}$$
Here, $\mathrm{MSA}(\cdot)$ denotes the multi-head self-attention module, which captures global feature dependencies by computing self-attention across different positions. $\mathrm{Fused\text{-}MBConv}(\cdot)$ is typically used for efficient feature extraction, leveraging depthwise separable convolutions to reduce computational complexity.
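The two equations above can be read as a pre-norm Transformer-style block followed by a convolutional refinement step. The sketch below illustrates this structure on flattened patch tokens; the Fused-MBConv step is approximated by a 1-D expand/project convolution over the token sequence, a simplification rather than the paper's exact layer configuration.

```python
import torch
import torch.nn as nn

class SelfAttnBlock(nn.Module):
    """Simplified sketch of one self-attentional stage:
        F^l     = MSA(LN(F^{l-1})) + F^{l-1}
        F^{l+1} = FusedMBConv(LN(F^l)) + F^l
    Tokens are assumed to be flattened patch embeddings of shape (B, N, C); the
    Fused-MBConv step is approximated by a 1-D expand/project convolution over
    the token sequence (an assumption, not the paper's exact layer)."""
    def __init__(self, dim: int, heads: int = 4, expand: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.fused_mbconv = nn.Sequential(
            nn.Conv1d(dim, dim * expand, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(dim * expand, dim, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, C)
        h = self.norm1(x)
        x = x + self.msa(h, h, h, need_weights=False)[0]   # MSA with residual
        h = self.norm2(x).transpose(1, 2)                  # (B, C, N) for conv
        x = x + self.fused_mbconv(h).transpose(1, 2)       # conv refinement with residual
        return x

tokens = torch.randn(1, 512, 64)            # 512 patch tokens, 64-dim embedding
print(SelfAttnBlock(64)(tokens).shape)      # torch.Size([1, 512, 64])
```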

3.1.2. Cross-Attentional Model

Building upon the features extracted by the self-attentional model, the cross-attentional model introduces a cross-attention mechanism to capture complementary information between different modalities. The features from both branches are normalized, then interact through a cross-attention layer to generate a fused feature representation. After applying layer normalization (LN) to the input feature maps, the standard multi-head attention module computes the relationships between features. The outputs of the cross-modality attention for T1 and T1Gd are added to the self-attention outputs and the original input features, forming the fused features:
$$M_{\mathrm{T1}}, M_{\mathrm{T1Gd}} = \mathrm{CM\text{-}MSA}\big(\mathrm{LN}(F_{\mathrm{T1}}^{l+1}), \mathrm{LN}(F_{\mathrm{T1Gd}}^{l+1})\big)$$
$$M_{\mathrm{T1}} = \mathrm{SoftMax}\left(\frac{Q_{\mathrm{T1}} K_{\mathrm{T1Gd}}^{T}}{\sqrt{d}} + B\right) V_{\mathrm{T1Gd}}$$
$$M_{\mathrm{T1Gd}} = \mathrm{SoftMax}\left(\frac{Q_{\mathrm{T1Gd}} K_{\mathrm{T1}}^{T}}{\sqrt{d}} + B\right) V_{\mathrm{T1}}$$
The fused feature maps are normalized again and further refined using the Fused-MBConv module. Finally, the output features are connected with the input features through a residual connection, producing the final feature outputs $F_{\mathrm{T1}}^{l+3}$ and $F_{\mathrm{T1Gd}}^{l+3}$. After processing through the TB-TMA module, the task of the bottleneck layer is to concatenate the features from the four different modalities and further compress and fuse these concatenated features to effectively extract and express multimodal information:
$$F_{\mathrm{concat}} = \mathrm{Concat}\big(F_{\mathrm{T1}}^{l+3}, F_{\mathrm{T1Gd}}^{l+3}, F_{\mathrm{T2}}^{l+3}, F_{\mathrm{T2FLAIR}}^{l+3}\big)$$
A 1 × 1 convolution is then applied to the concatenated features for channel compression:
$$F_{\mathrm{bottleneck}} = \mathrm{Conv}_{1 \times 1}(F_{\mathrm{concat}})$$
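As a rough illustration of the cross-attentional fusion and the bottleneck step, the sketch below shows a query-from-one-modality, key/value-from-the-other attention layer (the relative position bias B is omitted for brevity), followed by channel concatenation of four modality features and 1×1×1 compression. Tensor shapes and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAttention(nn.Module):
    """Sketch of the cross-attentional step: queries from one modality attend to
    keys/values of its partner modality. The relative position bias B of the
    formulas above is omitted here, which is an illustrative simplification."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, f_a, f_b):                       # (B, N, C) token features
        attn = F.softmax(self.q(f_a) @ self.k(f_b).transpose(1, 2) * self.scale, dim=-1)
        return attn @ self.v(f_b)                      # modality-a tokens fused with b

ca = CrossModalAttention(64)
m_t1 = ca(torch.randn(1, 512, 64), torch.randn(1, 512, 64))   # T1 attends to T1Gd

# Bottleneck fusion: concatenate four refined modality feature volumes along the
# channel axis and compress them with a 1x1x1 convolution.
dim = 64
feats = [torch.randn(1, dim, 8, 8, 8) for _ in range(4)]
bottleneck = nn.Conv3d(4 * dim, dim, kernel_size=1)
fused = bottleneck(torch.cat(feats, dim=1))
print(m_t1.shape, fused.shape)   # torch.Size([1, 512, 64]) torch.Size([1, 64, 8, 8, 8])
```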

3.2. LAM Module

Mamba [25] is an innovative deep learning architecture designed to efficiently handle long sequence data. It achieves this through the Selective State Space Model (SSM), which captures temporal dependencies in sequential data and dynamically adjusts the parameters of the state space using a selective mechanism to manage long sequences efficiently. The model leverages a multi-head mechanism to process features from different modalities or scales in parallel and employs a gating mechanism for selective feature fusion, thereby enhancing performance in visual tasks. The core formula is expressed as follows:
$$h_{t+1} = \sum_{\mathrm{head}=1}^{H} \alpha^{(\mathrm{head})} \left( A_d^{(\mathrm{head})} h_t + B_d^{(\mathrm{head})} x_t \right)$$
Here, $h_{t+1}$ represents the updated feature state, $A_d^{(\mathrm{head})}$ and $B_d^{(\mathrm{head})}$ are the feature transition and input mapping matrices for each head, respectively, and $\alpha^{(\mathrm{head})}$ is the weight coefficient that governs the selective fusion of feature representations from different heads.
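A toy numerical reading of this update rule is sketched below. In Mamba, the per-head weights and matrices are generated from the input by a selection network; here they are passed in directly, so this illustrates only the weighted multi-head state update itself, under assumed shapes.

```python
import torch

def multihead_ssm_step(h, x, A, B, alpha):
    """One selective state-space update as in the formula above (a toy sketch):
        h_{t+1} = sum_head alpha_head * (A_head @ h_t + B_head @ x_t)
    Shapes (all hypothetical): h (d,), x (m,), A (H, d, d), B (H, d, m), alpha (H,).
    In Mamba these parameters are input-dependent; here they are fixed inputs."""
    return (torch.einsum("h,hij,j->i", alpha, A, h)
            + torch.einsum("h,hij,j->i", alpha, B, x))

H, d, m = 4, 16, 8
h0, x0 = torch.zeros(d), torch.randn(m)
A = torch.randn(H, d, d) * 0.1
B = torch.randn(H, d, m) * 0.1
alpha = torch.softmax(torch.randn(H), dim=0)            # selective per-head weights
print(multihead_ssm_step(h0, x0, A, B, alpha).shape)    # torch.Size([16])
```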
In traditional 3D medical image segmentation models such as UNETR, the input resolution is typically downsampled from $D \times H \times W$ to $\frac{D}{16} \times \frac{H}{16} \times \frac{W}{16}$ to reduce sequence length and enhance computational efficiency. However, this direct downsampling approach presents several limitations. It often results in a significant loss of detail in the input features, particularly restricting the expression of multi-scale features, which are crucial for accuracy in segmentation tasks. Capturing structural information at varying scales is essential for precise segmentation. Furthermore, the representation of global features may be insufficient, adversely affecting the model’s segmentation performance.
The Linear Attention Mamba (LAM) module builds upon the traditional Mamba framework by incorporating a linear attention mechanism to replace the conventional dense attention computation. This innovation significantly improves the computational efficiency of multi-scale feature modeling. As shown in Figure 5, LAM effectively captures the global information of multimodal features while maintaining the robust learning capacity of the traditional Mamba framework in handling complex multimodal data. Consequently, it enhances the network’s adaptability and efficiency in processing large-scale datasets.
The core formula of the LAM is expressed as follows:
$$Y = \mathrm{MLP}\big(X + \mathrm{LinearAttention}(Q, K, V)\big)$$
By introducing a linear attention mechanism, LAM enhances feature representation while reducing computational complexity and memory requirements. Compared to traditional architectures, LAM effectively captures long-range dependencies, improving the model’s ability to handle large-scale data. By incorporating residual connections and a multilayer perceptron (MLP), LAM significantly enriches feature representation and improves the model’s generalization capabilities, all while maintaining computational efficiency.
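The following sketch shows one way to realize the block above: a kernelized linear attention whose cost grows linearly with the token count, wrapped in the residual-plus-MLP form of the equation. The ELU-based feature map and the MLP expansion ratio are assumptions chosen for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttention(nn.Module):
    """Kernelized linear attention: computes phi(Q) (phi(K)^T V), so the cost is
    linear in the token count N instead of quadratic. This is a sketch of the
    mechanism LAM relies on; the exact feature map may differ in the paper."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)

    def forward(self, x):                                   # x: (B, N, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1                   # positive feature map
        kv = torch.einsum("bnc,bnd->bcd", k, v)             # (B, C, C): O(N * C^2)
        z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(1)) + 1e-6)
        return torch.einsum("bnc,bcd,bn->bnd", q, kv, z)

class LAMBlock(nn.Module):
    """Y = MLP(X + LinearAttention(X)), matching the residual form above."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = LinearAttention(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return self.mlp(x + self.attn(x))

print(LAMBlock(64)(torch.randn(1, 1024, 64)).shape)          # torch.Size([1, 1024, 64])
```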

3.3. RA Module

One effective approach for extracting boundary details is the incorporation of attention mechanisms. However, conventional methods often overlook the importance of adaptively leveraging multi-scale features to capture boundary details. To address this, we propose a hybrid domain attention mechanism designed to capture discriminative features with varying feature correlations, enabling more flexible feature learning and expression across different layers and dimensions.
The proposed hybrid attention mechanism, referred to as the residual attention (RA) structure, is illustrated in Figure 6. The RA module primarily consists of multiple attention units stacked through residual connections. Each attention module captures different types of information, allowing the model to extensively harness diverse forms of attention. This results in features with greater discriminative power. The RA mechanism effectively mimics a bottom-up feedforward process combined with top-down attention feedback within a single feedforward pass.
The hybrid attention module on the left in Figure 6 comprises a residual attention network built by stacking multiple attention modules. Each attention module consists of two branches: the trunk branch and the mask branch, as shown on the right side of Figure 6. The trunk branch performs feature processing, while the mask branch utilizes bottom-up and top-down structures to learn a mask of the same size. Given an input (x), the trunk branch produces an output ( T ( x ) ), and the mask branch outputs M ( x ) , which serves as a soft weight for T ( x ) . The mask controls the gating unit of the trunk branch, with the output of a typical attention mechanism module represented by the following equation:
$$H_{i,c}(x) = M_{i,c}(x) \cdot T_{i,c}(x)$$
Here, i denotes the spatial position, and c represents the channel. However, simply stacking attention modules can lead to performance degradation. Inspired by the ResNet architecture, the soft mask is instead constructed as an identity-style residual mapping, which improves the performance of the attention mechanism. The output of the proposed enhanced hybrid attention mechanism is therefore expressed as follows:
$$\tilde{H}_{i,c}(x) = \big(1 + M_{i,c}(x)\big) \cdot T_{i,c}(x)$$
In this formulation, $M(x)$ is constrained within the range of (0, 1], and $T(x)$ represents the features generated by the trunk branch. When $M(x)$ approaches 0, $\tilde{H}(x)$ approximates $T(x)$. The stacked attention module benefits from its incremental nature, enabling residual attention learning. By stacking multiple hybrid domain attention modules, the network’s representational capacity is progressively enhanced, allowing it to capture abstract features and relationships within the data more effectively.
The progressive improvement in the handling of multi-scale information is crucial for tasks such as tumor boundary extraction, as anatomical structures vary in size and complexity. The hybrid attention mechanism improves the model’s ability to capture fine boundary details, enhancing performance in multi-scale structural analysis.
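A minimal sketch of one residual attention unit is given below, assuming a convolutional trunk branch and a single downsample/upsample stage in the mask branch; the channel counts, depths, and normalization layers are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualAttentionUnit(nn.Module):
    """Sketch of one RA unit: H(x) = (1 + M(x)) * T(x), with a convolutional trunk
    branch T and a bottom-up/top-down mask branch M whose sigmoid output lies in
    (0, 1). Layer choices here are assumptions for illustration."""
    def __init__(self, ch: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
        )
        self.mask = nn.Sequential(                      # bottom-up ...
            nn.MaxPool3d(2),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),  # ... top-down
            nn.Conv3d(ch, ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)                               # trunk features T(x)
        m = self.mask(x)                                # soft spatial/channel mask M(x)
        return (1 + m) * t                              # residual attention output

x = torch.randn(1, 32, 16, 16, 16)
print(ResidualAttentionUnit(32)(x).shape)               # torch.Size([1, 32, 16, 16, 16])
```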

3.4. Loss Function

In brain tumor image segmentation tasks, class imbalance is a common issue. Typically, the number of voxels representing normal tissues far exceeds that of tumor voxels. For instance, voxels for necrotic and enhancing tumor regions may constitute only a small fraction of the dataset, while the majority belong to normal tissue or non-enhancing tumor tissue. This imbalance poses specific challenges for the development of models aimed at automatic tumor segmentation. To mitigate the predictive bias caused by class imbalance, we designed a hybrid loss function combining Cross-Entropy Loss (CEL) and Generalized Dice Loss (GDL) to train the brain tumor segmentation model. The total loss function is formulated as follows:
$$L = (1 - \lambda) L_g + \lambda L_c$$
where λ is a weight coefficient balancing the two loss functions. Cross-entropy loss is effective for pixel-level classification tasks, emphasizing the importance of accurately classifying each pixel. The generalized Dice loss extends the traditional Dice loss by considering the size differences among classes, ensuring fairness across all categories. The cross-entropy loss is computed as follows:
$$L_c = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} g_{ci} \log(p_{ci})$$
The generalized Dice loss is defined as follows:
$$L_g = 1 - \frac{2 \sum_{c=1}^{C} \omega_c \sum_{i=1}^{N} p_{ci}\, g_{ci}}{\sum_{c=1}^{C} \omega_c \sum_{i=1}^{N} (p_{ci} + g_{ci})}$$
Here, $C$ represents the total number of classes, $N$ is the total number of pixels, $p_{ci}$ is the predicted probability of the $i$-th pixel belonging to the $c$-th class, and $g_{ci}$ is the ground-truth label for the $i$-th pixel in the $c$-th class. The weight $\omega_c$ for the $c$-th class is typically set as the inverse frequency of the class to address the class imbalance issue effectively.
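A possible implementation of this hybrid objective is sketched below, using the inverse class frequency as the weight, as described above. The value of the balancing coefficient lambda and the use of soft probabilities for the Dice term are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, lam: float = 0.5, eps: float = 1e-6):
    """Sketch of the hybrid objective L = (1 - lambda) * L_GDL + lambda * L_CE.
    logits: (B, C, D, H, W) raw class scores; target: (B, D, H, W) integer labels.
    Class weights w_c are the inverse class frequency, as described in the text;
    lambda = 0.5 is an illustrative default, not necessarily the paper's setting."""
    ce = F.cross_entropy(logits, target)

    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                                   # sum over batch and space
    w = 1.0 / (onehot.sum(dims) + eps)                    # inverse-frequency weights
    intersect = (w * (probs * onehot).sum(dims)).sum()
    union = (w * (probs + onehot).sum(dims)).sum()
    gdl = 1.0 - 2.0 * intersect / (union + eps)

    return (1 - lam) * gdl + lam * ce

logits = torch.randn(2, 4, 16, 16, 16)
labels = torch.randint(0, 4, (2, 16, 16, 16))
print(hybrid_loss(logits, labels).item())
```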

4. Experimental Results and Analysis

4.1. Dataset

The Brain Tumor Segmentation Challenge (BraTS Challenge) is one of the most prestigious and long-standing competitions organized by the Medical Image Computing and Computer-Assisted Intervention Society (MICCAI). Having been held annually for over a decade, it remains a cornerstone in the field of medical image processing. Each BraTS case includes four modalities of magnetic resonance imaging (MRI), with each modality having dimensions of 240 × 240 × 155 (L × W × H).
The four MRI modalities are described as follows:
T1: This modality is primarily used to observe anatomical structures, although lesions are not displayed as clearly.
T1ce: After injecting a contrast agent into the bloodstream, this modality highlights regions with active blood flow, serving as a critical indicator for enhancing tumors.
T2: This modality provides clearer visualization of lesions, aiding in the overall assessment of the tumor.
FLAIR: This modality highlights regions with high water content, which is particularly useful for identifying peritumoral edema.
For this study, we selected the BraTS 2020 [26] and BraTS 2023 [27] datasets.

4.2. Experimental Setup

4.2.1. Hardware Configuration

The entire experimental process was conducted on a system running Ubuntu 20.04. The hardware specifications include the following: an Intel i7-9700 processor, an NVIDIA RTX A5000 GPU with 24 GB of VRAM, and 32 GB of RAM.

4.2.2. Software Configuration

The experiments were implemented using the Python 3.9 programming language, with Anaconda employed for environment management. Visual Studio Code (VSCode 1.5) was chosen as the Integrated Development Environment (IDE). The key third-party libraries utilized in this study include the following: PyTorch 1.18.0, a deep learning framework used for model training; SimpleITK 2.5.0, an open-source, cross-platform library for image processing, particularly for reading MRI image data; and NumPy 1.22.4, which was used for numerical computations.

4.2.3. Experimental Settings

The Adam optimizer was used for training, with the batch size set to 8. The initial learning rate was configured at 0.001 and gradually decreased as the number of training epochs increased. The loss function combined Dice loss and cross-entropy loss to balance segmentation performance.
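A minimal training-loop sketch matching these settings is shown below. The epoch count and the exact decay schedule are not specified above, so the polynomial decay used here is an assumption, and model, train_loader, and the loss function are placeholders assumed to be defined elsewhere.

```python
import torch

def train(model, train_loader, loss_fn, epochs: int = 300, device: str = "cuda"):
    """Sketch of the training configuration described above: Adam optimizer,
    batch size 8 (set in the DataLoader), initial learning rate 1e-3 that decays
    as training proceeds. The polynomial decay exponent is an assumption."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: (1 - epoch / epochs) ** 0.9
    )
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:        # DataLoader built with batch_size=8
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()                           # gradual learning-rate decay per epoch
```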

4.3. Evaluation Metrics

Evaluation metrics provide a quantitative method to assess and measure the performance of the segmentation approach. In this study, the segmentation effectiveness is evaluated both qualitatively and quantitatively.
Qualitative Evaluation: Qualitative evaluation involves visualizing the segmented brain tumor images to observe the segmentation quality. This assessment is primarily based on human visual perception, allowing for a subjective understanding of the segmentation performance.
Quantitative Evaluation: For quantitative evaluation, two specific metrics are employed: the Dice similarity coefficient (Dice) and the Hausdorff distance (HD).
The Dice coefficient measures the overlap between the predicted segmentation and the ground truth at the pixel or voxel level, reflecting the model’s accuracy in classifying pixels or voxels correctly. It is commonly used in evaluating medical image segmentation. The Dice coefficient ranges from 0 to 1, where a value closer to 1 indicates a higher accuracy and a value of 1 signifies a perfect match between the segmentation and the ground truth. The formula for calculating the Dice coefficient is expressed as follows:
$$\mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|}$$
Here, A represents the predicted segmentation; B represents the actual segmentation; $|A \cap B|$ denotes the area of overlap between A and B; and $|A|$ and $|B|$ denote the areas of A and B, respectively.
The Hausdorff distance quantifies the maximum discrepancy between two sets of points and is widely used in medical image segmentation to measure the worst-case boundary deviation between predicted and ground-truth segmentations. For two point sets (A and B), the Hausdorff distance (HD) is defined as follows:
$$HD(A, B) = \max\big\{ h(A, B),\ h(B, A) \big\}$$
Here, h ( A , B ) denotes the directed Hausdorff distance from set A to set B, defined as follows:
$$h(A, B) = \max_{a \in A} \min_{b \in B} \| a - b \|$$
This formulation captures the greatest distance from a point in set A to the closest point in set B, and vice-versa for h(B, A). Thus, the overall HD reflects the largest boundary discrepancy between the two segmentations.
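For reference, both metrics can be computed directly from binary masks as in the sketch below, which uses SciPy's directed Hausdorff routine; treating all foreground voxels (rather than only surface voxels) as the point sets is a simplification for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of one tumor sub-region."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric HD: maximum of the two directed Hausdorff distances between the
    foreground voxel coordinates of the two masks (a simplification; in practice
    only boundary voxels are needed)."""
    a, b = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy usage on a small synthetic volume (illustrative only).
pred = np.zeros((32, 32, 32), dtype=np.uint8); pred[10:20, 10:20, 10:20] = 1
gt = np.zeros_like(pred); gt[12:22, 10:20, 10:20] = 1
print(dice_coefficient(pred, gt), hausdorff_distance(pred, gt))
```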

4.4. Ablation Study

In this section, detailed ablation experiments are conducted to verify the effectiveness of each module in the proposed method. The results of the ablation study are shown in Table 1. We progressively removed the multimodal data processing module, the Mamba-like module, and the attention-like module, and evaluated the model’s performance using the Dice coefficient [28,29] and Hausdorff distance (HD).
Method 1: Without any multimodal data processing or attention mechanisms, the baseline model exhibited average performance, with a mean Dice of 77.98% and a mean HD of 25.99 mm. This result indicates that a simple convolutional network cannot fully utilize multimodal information.
Method 2: After adding the TB-TMA module, the model’s performance improved, with the mean Dice increasing to 79.34% and the mean HD decreasing to 23.6 mm. The TB-TMA module effectively enhanced the extraction and fusion of multimodal features.
Method 3: Introducing the SSM module alone resulted in a mean Dice of 78.83% and a mean HD of 25.15 mm. While the SSM module improved feature extraction to some extent, its effectiveness was slightly lower than that of the TB-TMA module.
Method 4: When both the TB-TMA and SSM modules were introduced simultaneously, the model’s mean Dice increased to 81.45%, and the mean HD decreased to 22.47 mm. This indicates that the synergistic effect of both modules significantly enhanced the model’s performance.
Method 5: Replacing the SSM module with the SegMamba module led to a further increase in the mean Dice to 82.65% and a decrease in the mean HD to 21.69 mm. The SegMamba module demonstrated stronger capabilities in multimodal feature modeling.
Method 6: Introducing the LAM module on top of the TB-TMA module, the mean Dice reached 83.67%, and the mean HD decreased to 20.81 mm. The LAM module significantly improved computational efficiency and feature modeling through its linear attention mechanism.
Method 7: Adding cross-attention on top of the LAM module further increased the mean Dice to 84.89% and reduced the mean HD to 19.79 mm. Cross-attention effectively enhanced the interaction between different modality features.
Method 8: Incorporating the TransBTS module resulted in a substantial improvement, with the mean Dice rising to 86.47% and the mean HD dropping to 9.11 mm. The TransBTS module significantly enhanced segmentation performance through deeper feature extraction.
Method 9: Finally, employing the residual attention (RA) module yielded the highest mean Dice of 87.66% and the lowest mean HD of 8.92 mm. The RA module significantly improved the capture of boundary details through its hybrid domain attention mechanism.
The experimental results on the BraTS2023 dataset were consistent with those on the BraTS2020 dataset, as shown in Table 2. Although the specific values varied, the performance improvement trend for each module remained similar, further validating the robustness and effectiveness of the proposed method across different datasets.
To evaluate the performance improvement of each module, we conducted a series of ablation experiments on the BraTS brain tumor segmentation task, with visual results presented for each approach, as shown in Figure 7. The baseline model (BS) showed significant issues, such as missing tumor regions and inaccurate boundaries. After incorporating the TB-TMA module, the model effectively fused unimodal and cross-modal features through a dual-branch structure and cross-attention mechanism, enhancing the model’s cross-modal representation ability and resulting in more complete tumor segmentation. Further integration of the LAM module improved the model’s multi-scale feature modeling capability, leading to better identification of large tumor areas. Finally, the RA module, with its residual structure and attention stacking, refined boundary segmentation and significantly improved the precision of edge region localization. The combined results demonstrate that the synergistic effect of these modules significantly enhances both accuracy in segmenting tumor regions and boundary detail restoration.

4.5. Comparative Experiments

In this section, we present a detailed comparison of the proposed method with several mainstream brain tumor segmentation models. The experimental results are shown in Table 3. To provide a comprehensive analysis of the superiority of our approach, we compare its performance with that of other methods on the BraTS2020 dataset using two key metrics: the Dice coefficient and the Hausdorff distance (HD), covering three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). First, from a quantitative perspective, our method demonstrates significant performance improvements in terms of both the Dice coefficient and the Hausdorff distance (HD) metrics.
Specifically, for the whole tumor (WT) region, the Dice coefficient of our method reaches 92.47%, which represents an improvement of approximately 8.36% over 3D U-Net and surpasses V-Net and SegTransVAE by notable margins. This improvement highlights the enhanced precision of our method in capturing the entire tumor region. In the tumor core (TC) region, our method achieves a Dice coefficient of 87.19%, which marks a 15.8% improvement over V-Net and a 14.3% improvement over SegTransVAE. Particularly in the enhancing tumor (ET) region, our method performs exceptionally well, achieving a Dice coefficient of 81.32%, which is an increase of approximately 12.56% compared to 3D U-Net and around 31.7% higher than V-Net. This result demonstrates the significant advantage of our method in segmenting more challenging regions, especially the enhancing tumor area.
In addition to improvements in the Dice coefficient, the Hausdorff distance (HD) is another critical evaluation metric. Our method achieves a Hausdorff distance of 4.01 mm in the WT region, which is approximately 70% lower than 3D U-Net and 80% lower than V-Net, clearly showcasing the superior boundary precision of our approach. Similarly, in the TC region, our method’s HD is 5.18 mm, which represents a reduction of about 62% compared to 3D U-Net and 57% compared to V-Net. In the ET region, our method achieves a Hausdorff distance of 15.55 mm, which is about 69% lower than 3D U-Net and 67% lower than V-Net. These results indicate that our method significantly outperforms others in accurately capturing tumor boundaries, especially when handling complex boundaries.
Overall, the proposed method outperforms existing mainstream approaches in terms of both the Dice coefficient and the Hausdorff distance across multiple tumor regions, with particularly notable performance improvements in the enhancing tumor region. Compared to traditional models like 3D U-Net and V-Net, our method achieves substantial advancements in segmentation accuracy and boundary precision, which are crucial for tumor localization and treatment in clinical applications. These results fully demonstrate the potential and advantages of our approach in brain tumor segmentation tasks.
Table 4 presents a comparison of different brain tumor segmentation methods on the BraTS2023 dataset, evaluating performance using the Dice coefficient and Hausdorff distance (HD) for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions. The proposed method outperforms other approaches, achieving a Dice coefficient of 88.07%, which is notably higher than that of TransBTSv2. Specifically, it excels in the TC (88.5%) and ET (82.5%) regions. Our method achieves an HD of 3.8 mm for WT and 13.8 mm for ET, both lower than TransBTSv2’s 4 mm and 15 mm, respectively, indicating superior boundary precision. Overall, our method demonstrates significant improvements in segmentation accuracy and boundary fitting, particularly in the ET region, outperforming current state-of-the-art methods and showcasing its potential in brain tumor segmentation.
Table 5 presents a comprehensive comparison of various brain tumor segmentation methods on the BraTS2020 dataset, focusing on both segmentation performance and model complexity. Specifically, it includes two primary performance metrics: the Dice coefficient and Hausdorff Distance (HD), which assess segmentation accuracy and boundary precision, respectively. In addition, two complexity metrics, namely floating-point operations (FLOPs, in GigaFLOPs) and the number of parameters (in millions), are reported to evaluate the computational burden and model size, respectively. This enables a multi-dimensional analysis of each method’s trade-off between accuracy and efficiency.
In terms of segmentation performance, the three variants of our proposed method—Ours (TB-TMA), Ours (TB-TMA + LAM), and Ours (TB-TMA + LAM + RA)—consistently achieve competitive accuracy while maintaining low computational complexity. The full model, Ours (TB-TMA + LAM + RA), attains a Dice score of 87.66% and an HD of 8.92 mm, ranking highest among all compared methods. Compared with the advanced lightweight model, TransBTSv2, our method improves segmentation accuracy by 2.76% and boundary accuracy by 10.17% while simultaneously reducing FLOPs by 24.5% and the parameter count by 29.3%, highlighting the significant advantage in terms of model compactness.
Furthermore, the progressive enhancement from the baseline model (TB-TMA) to the full version through the integration of the LAM and RA modules leads to consistent performance gains. Specifically, the Dice score improves from 79.34% to 83.67% and, finally, 87.66%, while the HD decreases from 19.37 mm to 13.28 mm, then to 8.92 mm. These improvements validate the effectiveness of the proposed modules, demonstrating their critical roles in enhancing both segmentation accuracy and boundary delineation.
From the perspective of model complexity, although several Transformer-based architectures (e.g., UNETR and 3DUXNET) achieve good segmentation performance, they suffer from substantially higher computational costs. For instance, UNETR requires 58.7 GFLOPs and over 100 million parameters, which poses significant challenges for deployment in real-world clinical environments. In contrast, our full model requires only 1.42 GFLOPs and 10.83 million parameters, greatly reducing the dependency on computational resources and showcasing strong practicality and deployability.
In summary, the proposed method demonstrates superior performance across multiple dimensions. It achieves high segmentation accuracy with significantly reduced model complexity. The validated effectiveness of the proposed modules, coupled with the lightweight nature of the overall architecture, makes our approach highly promising for real-world clinical applications.

4.6. Algorithm Visualization

Figure 8 demonstrates the segmentation performance of our proposed algorithm on the BraTS2020 dataset for medical tumor images. Each set of images, from left to right, includes the original image, ground-truth annotations, and the model’s predicted results. To visually distinguish the segmentation outcomes, we used different colors to label various tumor regions: green for the whole tumor (WT), yellow for the tumor core (TC), and red for the enhancing tumor (ET). Furthermore, red dashed circles are used to highlight regions where the segmentation results between different methods exhibit noticeable differences. These marked areas help emphasize the effectiveness of our method in handling challenging and irregular tumor shapes. The visualizations clearly show that our method can accurately identify and segment different types of tumor regions, closely matching the ground-truth annotations, which highlights its superior segmentation performance. Such detailed segmentation is crucial for clinical diagnosis and treatment planning.
The comparative visualization below illustrates the segmentation results of our proposed algorithm alongside several mainstream methods on the BraTS2020 dataset. Specifically, from left to right, the images show the original image, ground truth, results from 3D U-Net, V-Net, SegResNet, SwinUNETR, TransBTSv2, and our proposed method. Our algorithm excels in capturing tumor boundary details, especially in complex regions with irregular shapes or fuzzy boundaries. In contrast, other methods may exhibit mis-segmentations or blurred boundaries in some challenging areas.
For the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, our algorithm consistently delivers high segmentation accuracy across different images. Notably, in the segmentation of the enhancing tumor region (red), our method shows exceptional performance in capturing internal lesions, which is critical for clinical diagnosis. Other methods occasionally struggle with under-segmentation or over-segmentation in this region. The comparative results across different image samples demonstrate that our method maintains effective and stable segmentation across various tumor morphologies. This indicates strong generalization capability, making our algorithm well-suited for the handling of diverse and complex scenarios in real-world applications.

5. Conclusions

This paper introduces an advanced multimodal deep learning framework for brain tumor segmentation, integrating the two-branch, two-model attention (TB-TMA), linear attention Mamba (LAM), and residual attention (RA) modules to enhance multimodal feature extraction, attention modeling, and boundary refinement. Experimental results on the BraTS2020 and BraTS2023 datasets show that our method achieves Dice scores of 87.66% and 88.07%, respectively, and significantly reduces the Hausdorff distance to as low as 7.4 mm on average, outperforming methods such as TransBTSv2. Although the hybrid network proposed in this paper achieves remarkable results in multimodal brain tumor segmentation, it still has some shortcomings. For some complex or irregularly shaped tumors, there remains room for further improvement in segmentation accuracy. In addition, existing models still face challenges in computational efficiency and inference speed when processing large-scale medical images. In future work, we plan to further optimize the proposed method and improve the network's representational power and learning efficiency by introducing more advanced architectures. We will also explore adaptive multi-scale feature modeling to better address the diversity of tumor morphology. Finally, we intend to extend the model to additional imaging modalities, such as PET and CT scans, to improve its applicability in different clinical settings.

Author Contributions

Conceptualization, G.Z.; methodology, G.Z.; software, G.Z.; validation, X.L., H.Z., and C.Z.; investigation, G.Z.; resources, W.Z.; writing—original draft preparation, G.Z.; writing—review and editing, H.Z.; visualization, G.W.; supervision, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Verma, A.; Shivhare, S.N.; Singh, S.P.; Kumar, N.; Nayyar, A. Comprehensive review on MRI-based brain tumor segmentation: A comparative study from 2017 onwards. Arch. Comput. Methods Eng. 2024, 31, 4805–4851. [Google Scholar] [CrossRef]
  2. Akter, A.; Nosheen, N.; Ahmed, S.; Hossain, M.; Yousuf, M.A.; Almoyad, M.A.A.; Hasan, K.F.; Moni, M.A. Robust clinical applicable CNN and U-Net based algorithm for MRI classification and segmentation for brain tumor. Expert Syst. Appl. 2024, 238, 122347. [Google Scholar] [CrossRef]
  3. Rasa, S.M.; Islam, M.M.; Talukder, M.A.; Uddin, M.A.; Khalid, M.; Kazi, M.; Kazi, M.Z. Brain tumor classification using fine-tuned transfer learning models on magnetic resonance imaging (MRI) images. Digital Health 2024, 10, 20552076241286140. [Google Scholar] [CrossRef] [PubMed]
  4. Kazerooni, A.F.; Khalili, N.; Liu, X.; Haldar, D.; Jiang, Z.; Anwar, S.M.; Albrecht, J.; Adewole, M.; Anazodo, U.; Anderson, H.; et al. The brain tumor segmentation (BraTS) challenge 2023: Focus on pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). arXiv 2024, arXiv:2305.17033 v7. [Google Scholar]
  5. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  6. Liu, Z.; Tong, L.; Chen, L.; Jiang, Z.; Zhou, F.; Zhang, Q.; Zhang, X.; Jin, Y.; Zhou, H. Deep learning based brain tumor segmentation: A survey. Complex Intell. Syst. 2023, 9, 1001–1026. [Google Scholar] [CrossRef]
  7. Zhao, X.; Wu, Y.; Song, G.; Li, Z.; Zhang, Y.; Fan, Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med. Image Anal. 2018, 43, 98–111. [Google Scholar] [CrossRef] [PubMed]
  8. Mehrtash, A.; Wells, W.M.; Tempany, C.M.; Abolmaesumi, P.; Kapur, T. Confidence calibration and predictive uncertainty estimation for deep medical image segmentation. IEEE Trans. Med. Imaging 2020, 39, 3868–3878. [Google Scholar] [CrossRef] [PubMed]
  9. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Proceedings, Part II 19. Springer: Cham, Switzerland, 2016; pp. 424–432. [Google Scholar]
  10. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 565–571. [Google Scholar]
  11. Ranjbarzadeh, R.; Zarbakhsh, P.; Caputo, A.; Tirkolaee, E.B.; Bendechache, M. Brain tumor segmentation based on optimized convolutional neural network and improved chimp optimization algorithm. Comput. Biol. Med. 2024, 168, 107723. [Google Scholar] [CrossRef] [PubMed]
  12. Peng, Y.; Sun, J. The multimodal MRI brain tumor segmentation based on AD-Net. Biomed. Signal Process. Control 2023, 80, 104336. [Google Scholar] [CrossRef]
  13. Ruba, T.; Tamilselvi, R.; Beham, M.P. Brain tumor segmentation in multimodal MRI images using novel LSIS operator and deep learning. J. Ambient Intell. Humaniz. Comput. 2023, 14, 13163–13177. [Google Scholar] [CrossRef]
  14. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 574–584. [Google Scholar]
  15. Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Workshop; Springer: Cham, Switzerland, 2021; pp. 272–284. [Google Scholar]
  16. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar]
  17. Li, J.; Wang, W.; Chen, C.; Zhang, T.; Zha, S.; Wang, J.; Yu, H. TransBTSV2: Towards better and more efficient volumetric segmentation of medical images. arXiv 2022, arXiv:2201.12785. [Google Scholar] [CrossRef]
  18. Peiris, H.; Hayat, M.; Chen, Z.; Egan, G.; Harandi, M. A robust volumetric transformer for accurate 3D tumor segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2022; pp. 162–172. [Google Scholar]
  19. Zhang, J.; Zhang, S.; Shen, X.; Lukasiewicz, T.; Xu, Z. Multi-ConDoS: Multimodal contrastive domain sharing generative adversarial networks for self-supervised medical image segmentation. IEEE Trans. Med. Imaging 2023, 43, 76–95. [Google Scholar] [CrossRef] [PubMed]
  20. Marinov, Z.; Reiß, S.; Kersting, D.; Kleesiek, J.; Stiefelhagen, R. Mirror u-net: Marrying multimodal fission with multi-task learning for semantic segmentation in medical imaging. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 2283–2293. [Google Scholar]
  21. Pandey, S.; Chen, K.F.; Dam, E.B. Comprehensive multimodal segmentation in medical imaging: Combining yolov8 with sam and hq-sam models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 2592–2598. [Google Scholar]
  22. Andrade-Miranda, G.; Jaouen, V.; Tankyevych, O.; Le Rest, C.C.; Visvikis, D.; Conze, P.H. Multi-modal medical Transformers: A meta-analysis for medical image segmentation in oncology. Comput. Med. Imaging Graph. 2023, 110, 102308. [Google Scholar] [CrossRef] [PubMed]
  23. Huo, Y.; Liu, J.; Xu, Z.; Harrigan, R.L.; Assad, A.; Abramson, R.G.; Landman, B.A. Robust multicontrast MRI spleen segmentation for splenomegaly using multi-atlas segmentation. IEEE Trans. Biomed. Eng. 2017, 65, 336–343. [Google Scholar] [CrossRef] [PubMed]
  24. Tan, M.; Le, Q. Efficientnetv2: Smaller models and faster training. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; pp. 10096–10106. [Google Scholar]
  25. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar] [CrossRef]
  26. Baid, U.; Ghodasara, S.; Mohan, S.; Bilello, M.; Calabrese, E.; Colak, E.; Farahani, K.; Kalpathy-Cramer, J.; Kitamura, F.C.; Pati, S.; et al. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv 2021, arXiv:2107.02314. [Google Scholar]
  27. Karargyris, A.; Umeton, R.; Sheller, M.J.; Aristizabal, A.; George, J.; Wuest, A.; Pati, S.; Kassem, H.; Zenk, M.; Baid, U.; et al. Federated benchmarking of medical artificial intelligence with MedPerf. Nat. Mach. Intell. 2023, 5, 799–810. [Google Scholar] [CrossRef] [PubMed]
  28. Tan, L.; Chen, X.; Hu, X.; Tang, T. Dmdsnet: A computer vision-based dual multi-task model for tunnel bolt detection and corrosion segmentation. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 4827–4833. [Google Scholar]
  29. Tan, L.; Hu, X.; Tang, T.; Yuan, D. A lightweight metro tunnel water leakage identification algorithm via machine vision. Eng. Fail. Anal. 2023, 150, 107327. [Google Scholar] [CrossRef]
  30. Pham, Q.D.; Nguyen-Truong, H.; Phuong, N.N.; Nguyen, K.N.; Nguyen, C.D.; Bui, T.; Truong, S.Q. Segtransvae: Hybrid cnn-transformer with regularization for medical image segmentation. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5. [Google Scholar]
  31. Myronenko, A.; Siddiquee, M.M.R.; Yang, D.; He, Y.; Xu, D. Automated head and neck tumor segmentation from 3D PET/CT HECKTOR 2022 challenge report. In 3D Head and Neck Tumor Segmentation in PET/CT Challenge; Springer: Cham, Switzerland, 2022; pp. 31–37. [Google Scholar]
  32. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar] [CrossRef]
  33. Lee, H.H.; Bao, S.; Huo, Y.; Landman, B.A. 3d ux-net: A large kernel volumetric convnet modernizing hierarchical transformer for medical image segmentation. arXiv 2022, arXiv:2209.15076. [Google Scholar]
  34. Wenxuan, W.; Chen, C.; Meng, D.; Hong, Y.; Sen, Z.; Jiangyun, L. Transbts: Multimodal brain tumor segmentation using transformer. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2021; pp. 109–119. [Google Scholar]
Figure 1. Multimodal morphology of medical images of brain tumors.
Figure 2. Structural diagram of the Hybrid Network for Multimodal Brain Tumor Segmentation algorithm.
Figure 3. Structural diagram of the TB-TMA module.
Figure 4. Structural diagram of the conv block.
Figure 5. LAM structure diagram.
Figure 6. Structural diagram of RA.
Figure 7. Visual comparison of ablation results on the BraTS brain tumor segmentation task. Each row represents a sample, showing the following (from left to right): the original image; ground-truth mask; baseline (BS) output; BS with TB-TMA module; BS with TB-TMA and LAM modules; and the final model with TB-TMA, LAM, and RA modules. The progressive improvements demonstrate the effectiveness of each proposed component in enhancing segmentation completeness and boundary accuracy. In the Mask column, the red regions represent the true tumor segmentation. In all predicted results, the blue areas indicate the predicted segmentation, while the red contour lines overlay the ground-truth boundaries for comparison.
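For readers who wish to reproduce the overlay style described in the Figure 7 caption (semi-transparent blue fill for the prediction, red contour for the ground-truth boundary), a minimal sketch for a single 2D slice using NumPy and matplotlib is given below. The array names (slice_img, pred_mask, gt_mask) are illustrative placeholders and are not taken from the paper's code.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(slice_img, pred_mask, gt_mask):
    """Show one MRI slice with the predicted mask filled in blue
    and the ground-truth boundary drawn as a red contour."""
    plt.figure(figsize=(4, 4))
    plt.imshow(slice_img, cmap="gray")                 # anatomical background
    blue = np.zeros((*pred_mask.shape, 4))             # RGBA overlay image
    blue[pred_mask > 0] = (0.0, 0.3, 1.0, 0.45)        # semi-transparent blue fill
    plt.imshow(blue)                                   # predicted segmentation
    plt.contour(gt_mask, levels=[0.5], colors="red", linewidths=1.0)  # GT boundary
    plt.axis("off")
    plt.show()

# Toy example with synthetic data standing in for a real slice and masks.
img = np.random.rand(128, 128)
gt = np.zeros((128, 128)); gt[40:90, 50:100] = 1
pred = np.zeros((128, 128)); pred[42:92, 48:98] = 1
show_overlay(img, pred, gt)
```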
Figure 8. Performance comparison of different methods on the BraTS2020 dataset. Red dashed circles indicate regions with significant segmentation differences among methods, highlighting the effectiveness of the proposed approach in challenging areas. The segmentation results use different colors to represent tumor regions: green for the whole tumor (WT), yellow for the tumor core (TC), and red for the enhancing tumor (ET).
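Tables 1–4 and Figure 8 report the three standard BraTS evaluation regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). Assuming the usual BraTS label encoding (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor, with some BraTS 2023 releases remapping label 4 to 3), these nested regions can be derived from a label volume as in the sketch below; it is illustrative only and not the authors' preprocessing pipeline.

```python
import numpy as np

def brats_regions(label_vol, et_label=4):
    """Derive the three nested evaluation regions from a BraTS label volume.

    WT (whole tumor)     = necrotic core + edema + enhancing tumor
    TC (tumor core)      = necrotic core + enhancing tumor
    ET (enhancing tumor) = enhancing tumor only
    """
    ncr = label_vol == 1           # necrotic / non-enhancing tumor core
    ed = label_vol == 2            # peritumoral edema
    et = label_vol == et_label     # enhancing tumor (label 4, or 3 in some releases)
    wt = ncr | ed | et
    tc = ncr | et
    return wt, tc, et

# Toy usage with a random label volume containing values 0, 1, 2, 4.
labels = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
wt, tc, et = brats_regions(labels)
print(wt.sum(), tc.sum(), et.sum())
```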
Table 1. Ablation experiment (BraTS2020 dataset). The Dice coefficient indicates segmentation accuracy (higher is better), and HD (Hausdorff distance) indicates boundary alignment (lower is better).
Method | Multimodal Data Processing | Mamba-Like Module | Attention-Like Module | Dice (%): WT / TC / ET / Mean | HD (mm): WT / TC / ET / Mean
1 | × | × | × | 84.11 / 79.06 / 68.76 / 77.98 | 13.366 / 13.607 / 50.983 / 25.99
2 | TB-TMA | × | × | 86.45 / 81.23 / 70.34 / 79.34 | 12.478 / 12.103 / 46.213 / 23.6
3 | × | SSM | × | 85.22 / 80.17 / 69.11 / 78.83 | 13.132 / 12.984 / 49.345 / 25.15
4 | TB-TMA | SSM | × | 88.37 / 83.29 / 72.68 / 81.45 | 11.204 / 11.456 / 44.762 / 22.47
5 | TB-TMA | SegMamba | × | 89.56 / 84.51 / 73.89 / 82.65 | 10.739 / 10.803 / 43.526 / 21.69
6 | TB-TMA | LAM | × | 90.23 / 85.67 / 75.12 / 83.67 | 9.786 / 10.294 / 42.351 / 20.81
7 | TB-TMA | LAM | Cross-Attention | 91.15 / 86.49 / 77.04 / 84.89 | 8.765 / 9.832 / 40.765 / 19.79
8 | TB-TMA | LAM | TransBTS | 92.01 / 87.05 / 80.34 / 86.47 | 4.685 / 5.658 / 16.975 / 9.11
9 | TB-TMA | LAM | RA | 92.47 / 87.19 / 81.32 / 87.66 | 4.013 / 5.184 / 15.55 / 8.92
Note: “×” denotes that the corresponding module is not used in this configuration.
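For reference, the two metrics reported in Tables 1–5 can be computed per region roughly as follows. The sketch below uses NumPy for the Dice coefficient and SciPy's directed_hausdorff for a symmetric Hausdorff distance over voxel coordinates; it is a simplified illustration that ignores voxel spacing and does not apply the 95th-percentile variant used by some BraTS evaluations, so it should not be read as the exact evaluation script behind the reported numbers.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|), reported here in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 200.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    p = np.argwhere(pred)  # (N, 3) voxel coordinates of the prediction
    g = np.argwhere(gt)    # (M, 3) voxel coordinates of the ground truth
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy 3D example: two overlapping cubes standing in for a tumor sub-region.
gt = np.zeros((32, 32, 32)); gt[8:20, 8:20, 8:20] = 1
pred = np.zeros((32, 32, 32)); pred[9:21, 8:20, 8:20] = 1
print(f"Dice = {dice_coefficient(pred, gt):.2f}%  HD = {hausdorff_distance(pred, gt):.2f} voxels")
```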
Table 2. Ablation experiment (BraTS2023 dataset).
Method | Multimodal Data Processing | Mamba-Like Module | Attention-Like Module | Dice (%): WT / TC / ET / Mean | HD (mm): WT / TC / ET / Mean
1 | × | × | × | 85.23 / 80.34 / 69.82 / 78.46 | 12.654 / 12.905 / 48.769 / 24.77
2 | TB-TMA | × | × | 87.58 / 82.41 / 71.45 / 80.48 | 11.932 / 11.673 / 45.127 / 22.91
3 | × | SSM | × | 86.47 / 81.29 / 70.23 / 79.33 | 12.635 / 12.493 / 47.458 / 24.2
4 | TB-TMA | SSM | × | 89.72 / 84.76 / 73.52 / 82.67 | 10.679 / 10.922 / 43.245 / 21.62
5 | TB-TMA | SegMamba | × | 90.89 / 85.89 / 74.75 / 83.84 | 10.194 / 10.367 / 42.081 / 20.88
6 | TB-TMA | LAM | × | 91.57 / 86.94 / 76.02 / 84.84 | 9.279 / 9.849 / 40.927 / 20.02
7 | TB-TMA | LAM | Cross-Attention | 92.43 / 87.72 / 78.24 / 86.13 | 8.305 / 9.351 / 39.427 / 19.03
8 | TB-TMA | LAM | TransBTS | 93.31 / 88.28 / 81.15 / 87.58 | 4.295 / 5.138 / 15.841 / 8.43
9 | TB-TMA | LAM | RA | 93.78 / 88.43 / 82.12 / 88.11 | 3.679 / 4.679 / 14.721 / 7.69
Note: “×” denotes that the corresponding module is not used in this configuration.
Table 3. Performance comparison of different methods (BraTS2020 dataset).
Method | Dice (%): WT / TC / ET / Mean | HD (mm): WT / TC / ET / Mean
3D Unet [9] | 84.11 / 79.06 / 68.76 / 77.31 | 13.366 / 13.607 / 50.983 / 25.99
V-Net [10] | 84.63 / 75.26 / 61.79 / 73.89 | 20.407 / 12.175 / 47.702 / 26.76
SegTransVAE [30] | 85.5 / 76.3 / 64 / 75.27 | 19.5 / 12 / 45.5 / 25.67
SegResNet [31] | 86.1 / 78.5 / 68.2 / 77.6 | 12.3 / 10.8 / 40.5 / 21.2
Attention Unet [32] | 87.5 / 80.6 / 70.9 / 79.67 | 10.2 / 9.6 / 35.4 / 18.4
UNETR [14] | 88 / 81 / 72.5 / 80.5 | 9.5 / 8.9 / 33.8 / 17.4
SwinUNETR [15] | 89 / 82.5 / 75 / 82.17 | 7.8 / 7.5 / 30.6 / 15.3
3DUXNET [33] | 89.5 / 83.2 / 76.8 / 83.17 | 6.5 / 6.8 / 25.7 / 12.33
TransBTS [34] | 90.09 / 81.73 / 78.73 / 83.52 | 4.964 / 9.769 / 17.947 / 10.89
TransBTSv2 [17] | 90.56 / 84.5 / 79.63 / 84.9 | 4.272 / 5.56 / 17.947 / 9.93
Our method | 92.47 / 87.19 / 81.32 / 87.66 | 4.013 / 5.184 / 15.55 / 8.92
Table 4. Performance comparison of different methods (BraTS2023 dataset).
Method | Dice (%): WT / TC / ET / Mean | HD (mm): WT / TC / ET / Mean
3D Unet | 85.5 / 80.5 / 70.2 / 78.73 | 12.5 / 12.9 / 48.5 / 24.63
V-Net | 85.9 / 77.8 / 64.5 / 76.73 | 19 / 11.5 / 45.8 / 25.43
SegTransVAE | 86.8 / 79.2 / 67.8 / 77.93 | 18.2 / 11.2 / 43 / 24.13
SegResNet | 87.5 / 80.9 / 70 / 79.47 | 11.8 / 10 / 38.2 / 20.67
Attention Unet | 88.9 / 82.3 / 72.5 / 81.23 | 9.8 / 8.7 / 32.7 / 17.73
UNETR | 89.5 / 83 / 74.3 / 82.93 | 8.9 / 8.2 / 30.4 / 15.83
SwinUNETR | 90.5 / 84 / 76.5 / 83.67 | 7.2 / 6.8 / 27.2 / 13.73
3DUXNET | 91 / 85 / 78.3 / 84.77 | 5.8 / 5.6 / 23.5 / 11.63
TransBTS | 91.3 / 83.5 / 79.5 / 84.77 | 4.8 / 9 / 15.9 / 9.9
TransBTSv2 | 91.8 / 86 / 80 / 85.93 | 4 / 5 / 15 / 8
Our method | 93.2 / 88.5 / 82.5 / 88.07 | 3.8 / 4.6 / 13.8 / 7.4
Table 5. Model complexity comparison on the BraTS2020 dataset.
Method | Mean Dice (%) | Mean HD (mm) | FLOPs (G) | Params (M)
3D Unet | 77.31 | 25.99 | 13.04 | 16.21
V-Net | 73.89 | 26.76 | 5.85 | 45.61
SegTransVAE | 75.27 | 25.67 | 7.41 | 44.7
SegResNet | 77.6 | 21.2 | 3.18 | 13.5
Attention Unet | 79.67 | 18.4 | 18.3 | 23.8
UNETR | 80.5 | 17.4 | 58.7 | 101
SwinUNETR | 82.17 | 15.3 | 1.96 | 27.15
3DUXNET | 83.17 | 12.33 | 22.1 | 31.5
TransBTS | 83.52 | 10.89 | 2.6 | 32.99
TransBTSv2 | 84.9 | 9.93 | 1.88 | 15.3
Ours (TB-TMA) | 79.34 | 19.37 | 1.08 | 8.91
Ours (TB-TMA + LAM) | 83.67 | 13.28 | 1.22 | 9.38
Ours (TB-TMA + LAM + RA) | 87.66 | 8.92 | 1.42 | 10.83
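The Params (M) column in Table 5 counts trainable weights, while FLOPs (G) reflects the cost of one forward pass and is normally obtained with a profiling tool (e.g., fvcore or thop) rather than by hand. A minimal sketch of the parameter count for an arbitrary PyTorch model is shown below; the small nn.Sequential network is a stand-in used purely for illustration and is not the HN-MBTS architecture.

```python
import torch
import torch.nn as nn

def count_params_millions(model: nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Illustrative stand-in network (NOT the HN-MBTS model): two 3D conv layers.
toy = nn.Sequential(
    nn.Conv3d(4, 32, kernel_size=3, padding=1),   # 4 input MRI modalities
    nn.ReLU(inplace=True),
    nn.Conv3d(32, 3, kernel_size=1),              # 3 output tumor sub-regions
)

x = torch.randn(1, 4, 64, 64, 64)                 # one multimodal 3D patch
with torch.no_grad():
    y = toy(x)
print(f"Params: {count_params_millions(toy):.3f} M, output shape: {tuple(y.shape)}")
```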