Search Results (134)

Search Parameters:
Keywords = BraTS

21 pages, 5882 KiB  
Article
Leveraging Prior Knowledge in a Hybrid Network for Multimodal Brain Tumor Segmentation
by Gangyi Zhou, Xiaowei Li, Hongran Zeng, Chongyang Zhang, Guohang Wu and Wuxiang Zhao
Sensors 2025, 25(15), 4740; https://doi.org/10.3390/s25154740 - 1 Aug 2025
Viewed by 170
Abstract
Recent advancements in deep learning have significantly enhanced brain tumor segmentation from MRI data, providing valuable support for clinical diagnosis and treatment planning. However, challenges persist in effectively integrating prior medical knowledge, capturing global multimodal features, and accurately delineating tumor boundaries. To address these challenges, the Hybrid Network for Multimodal Brain Tumor Segmentation (HN-MBTS) is proposed, which incorporates prior medical knowledge to refine feature extraction and boundary precision. Key innovations include the Two-Branch, Two-Model Attention (TB-TMA) module for efficient multimodal feature fusion, the Linear Attention Mamba (LAM) module for robust multi-scale feature modeling, and the Residual Attention (RA) module for enhanced boundary refinement. Experimental results demonstrate that this method significantly outperforms existing approaches. On the BraTS2020 and BraTS2023 datasets, the method achieved average Dice scores of 87.66% and 88.07%, respectively. These results confirm the superior segmentation accuracy and efficiency of the approach, highlighting its potential to provide valuable assistance in clinical settings.
(This article belongs to the Section Biomedical Sensors)
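
The HN-MBTS results above are reported as Dice scores, the standard overlap metric in the BraTS benchmarks. As a point of reference only (not code from the paper), a minimal NumPy sketch of the per-region Dice coefficient might look like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 3D masks.
pred = np.zeros((8, 8, 8), dtype=bool); pred[2:6, 2:6, 2:6] = True
target = np.zeros((8, 8, 8), dtype=bool); target[3:7, 3:7, 3:7] = True
print(f"Dice: {dice_coefficient(pred, target):.3f}")
```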

17 pages, 3069 KiB  
Article
Enhanced Segmentation of Glioma Subregions via Modality-Aware Encoding and Channel-Wise Attention in Multimodal MRI
by Annachiara Cariola, Elena Sibilano, Antonio Brunetti, Domenico Buongiorno, Andrea Guerriero and Vitoantonio Bevilacqua
Appl. Sci. 2025, 15(14), 8061; https://doi.org/10.3390/app15148061 - 20 Jul 2025
Viewed by 398
Abstract
Accurate segmentation of key tumor subregions in adult gliomas from Magnetic Resonance Imaging (MRI) is of critical importance for brain tumor diagnosis, treatment planning, and prognosis. However, this task remains poorly investigated and highly challenging due to the considerable variability in shape and appearance of these areas across patients. This study proposes a novel Deep Learning architecture leveraging modality-specific encoding and attention-based refinement for the segmentation of glioma subregions, including peritumoral edema (ED), necrotic core (NCR), and enhancing tissue (ET). The model is trained and validated on the Brain Tumor Segmentation (BraTS) 2023 challenge dataset and benchmarked against a state-of-the-art transformer-based approach. Our architecture achieves promising results, with Dice scores of 0.78, 0.86, and 0.88 for NCR, ED, and ET, respectively, outperforming SegFormer3D while maintaining comparable model complexity. To ensure a comprehensive evaluation, performance was also assessed on standard composite tumor regions, i.e., tumor core (TC) and whole tumor (WT). The statistically significant improvements obtained on all regions highlight the effectiveness of integrating complementary modality-specific information and applying channel-wise feature recalibration in the proposed model. Full article
(This article belongs to the Special Issue The Role of Artificial Intelligence Technologies in Health)
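
The channel-wise feature recalibration described in this abstract is commonly realized with a squeeze-and-excitation style block. The PyTorch sketch below shows one plausible 3D variant; the module name and reduction ratio are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class ChannelRecalibration3D(nn.Module):
    """Squeeze-and-excitation style channel attention for 5D tensors (N, C, D, H, W)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: one global value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * w                                   # recalibrate channels

x = torch.randn(2, 32, 16, 16, 16)
print(ChannelRecalibration3D(32)(x).shape)             # torch.Size([2, 32, 16, 16, 16])
```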

20 pages, 1606 KiB  
Article
Brain Tumour Segmentation Using Choquet Integrals and Coalition Game
by Makhlouf Derdour, Mohammed El Bachir Yahiaoui, Moustafa Sadek Kahil, Mohamed Gasmi and Mohamed Chahine Ghanem
Information 2025, 16(7), 615; https://doi.org/10.3390/info16070615 - 17 Jul 2025
Viewed by 248
Abstract
Artificial Intelligence (AI) and computer-aided diagnosis (CAD) have revolutionised various aspects of modern life, particularly in the medical domain. These technologies enable efficient solutions for complex challenges, such as accurately segmenting brain tumour regions, which significantly aid medical professionals in monitoring and treating patients. This research focuses on segmenting glioma brain tumour lesions in MRI images by analysing them at the pixel level. The aim is to develop a deep learning-based approach that enables ensemble learning to achieve precise and consistent segmentation of brain tumours. While many studies have explored ensemble learning techniques in this area, most rely on aggregation functions like the Weighted Arithmetic Mean (WAM) without accounting for the interdependencies between classifier subsets. To address this limitation, the Choquet integral is employed for ensemble learning, along with a novel evaluation framework for fuzzy measures. This framework integrates coalition game theory, information theory, and Lambda fuzzy approximation. Three distinct fuzzy measure sets are computed using different weighting strategies informed by these theories. Based on these measures, three Choquet integrals are calculated for segmenting different components of brain lesions, and their outputs are subsequently combined. The BraTS-2020 online validation dataset is used to validate the proposed approach. Results demonstrate superior performance compared with several recent methods, achieving Dice Similarity Coefficients of 0.896, 0.851, and 0.792 and 95% Hausdorff distances of 5.96 mm, 6.65 mm, and 20.74 mm for the whole tumour, tumour core, and enhancing tumour core, respectively. Full article
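
For readers unfamiliar with the aggregation step, a discrete Choquet integral fuses classifier scores with respect to a fuzzy measure defined on subsets (coalitions) of classifiers. The sketch below uses a hand-specified toy measure purely for illustration; it does not reproduce the paper's coalition-game or Lambda fuzzy approximation procedure:

```python
def choquet_integral(scores: dict, measure: dict) -> float:
    """Discrete Choquet integral of `scores` (classifier -> value in [0, 1])
    with respect to a fuzzy measure `measure` (frozenset of classifiers -> value)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])    # ascending by score
    total, prev = 0.0, 0.0
    for i, (name, value) in enumerate(items):
        coalition = frozenset(n for n, _ in items[i:])      # classifiers scoring >= current value
        total += (value - prev) * measure[coalition]
        prev = value
    return total

# Toy fuzzy measure over three base segmentation models (monotone, normalized to 1).
measure = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.3, frozenset({"B"}): 0.4, frozenset({"C"}): 0.35,
    frozenset({"A", "B"}): 0.75, frozenset({"A", "C"}): 0.7, frozenset({"B", "C"}): 0.8,
    frozenset({"A", "B", "C"}): 1.0,
}
print(choquet_integral({"A": 0.6, "B": 0.9, "C": 0.7}, measure))  # 0.76
```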

19 pages, 8503 KiB  
Article
The Teacher–Assistant–Student Collaborative and Competitive Network for Brain Tumor Segmentation with Missing Modalities
by Junjie Wang, Huanlan Kang and Tao Liu
Diagnostics 2025, 15(12), 1552; https://doi.org/10.3390/diagnostics15121552 - 18 Jun 2025
Viewed by 473
Abstract
Background: Magnetic Resonance Imaging (MRI) provides rich tumor information through different imaging modalities (T1, T1ce, T2, and FLAIR). Each modality offers distinct contrast and tissue characteristics, which help in the more comprehensive identification and analysis of tumor lesions. However, in clinical practice, only a single modality of medical imaging is available due to various factors such as imaging equipment. The performance of existing methods is significantly hindered when handling incomplete modality data. Methods: A Teacher–Assistant–Student Collaborative and Competitive Net (TASCCNet) is proposed, which is based on traditional knowledge distillation techniques. First, a Multihead Mixture of Experts (MHMoE) module is developed with multiple experts and multiple gated networks to enhance information from fused modalities. Second, a competitive function is formulated to promote collaboration and competition between the student network and the teacher network. Additionally, we introduce an assistant module inspired by human visual mechanisms to provide supplementary structural knowledge, which enriches the information available to the student and facilitates a dynamic teacher–assistant collaboration. Results: The proposed model (TASCCNet) is evaluated on the BraTS 2018 and BraTS 2021 datasets and demonstrates robust performance even when only a single modality is available. Conclusions: TASCCNet successfully addresses the challenge of incomplete modality data in brain tumor segmentation by leveraging collaborative knowledge distillation and competitive learning mechanisms. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging and Signal Processing)
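
The teacher-student element of TASCCNet builds on standard knowledge distillation. As background, a minimal temperature-scaled distillation loss in PyTorch is sketched below; the competitive function and assistant module of the paper are not represented, and the hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Weighted sum of a soft (teacher) loss and a hard (label) loss, Hinton-style KD."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale gradients after temperature softening
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(4, 3, requires_grad=True)    # e.g. 3 tumor sub-region classes
teacher = torch.randn(4, 3)
labels = torch.randint(0, 3, (4,))
print(distillation_loss(student, teacher, labels))
```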

20 pages, 2503 KiB  
Article
Lightweight Brain Tumor Segmentation Through Wavelet-Guided Iterative Axial Factorization Attention
by Yueyang Zhong, Shuyi Wang, Yuqing Miao, Tao Zhang and Haoliang Li
Brain Sci. 2025, 15(6), 613; https://doi.org/10.3390/brainsci15060613 - 6 Jun 2025
Viewed by 769
Abstract
Background/Objectives: The accurate and efficient segmentation of brain tumors from 3D MRI data remains a significant challenge in medical imaging. Conventional deep learning methods, such as convolutional neural networks and transformer-based models, frequently introduce significant computational overhead or fail to effectively represent multi-scale features. Methods: This paper presents a lightweight deep learning framework that uses adaptive discrete wavelet decomposition and iterative axial attention to improve 3D brain tumor segmentation. The wavelet decomposition module effectively captures multi-scale information by decomposing the input into frequency sub-bands, thereby mitigating the detail loss often associated with standard downsampling methods. Ablation studies confirm that this module enhances segmentation accuracy, particularly in preserving the finer structural details of tumor components. Simultaneously, the iterative axial factorization attention reduces the computational burden of 3D spatial modeling by processing attention sequentially along individual axes, preserving long-range interdependence while consuming minimal resources. Results: Our model performs well on the BraTS2020 and FeTS2022 datasets, with average Dice scores of 85.0% and 88.1%, respectively, achieving these competitive results with only 5.23 million parameters and 9.75 GFLOPs. In comparison to state-of-the-art methods, it effectively balances accuracy and efficiency, making it suitable for resource-constrained clinical applications. Conclusions: This study underscores the advantages of integrating frequency-domain analysis with optimized attention mechanisms, paving the way for scalable, high-performance medical image segmentation algorithms with broader clinical diagnostic applications.
(This article belongs to the Section Neuro-oncology)
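
The iterative axial factorization idea, attending along one spatial axis at a time instead of over the full 3D volume, can be illustrated with the following PyTorch sketch. The module and its settings are assumptions for illustration and omit the wavelet components of the paper:

```python
import torch
import torch.nn as nn

class AxialAttention3D(nn.Module):
    """Self-attention applied sequentially along the D, H and W axes of a feature volume."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(channels, heads, batch_first=True) for _ in range(3)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (N, C, D, H, W)
        for axis, attn in zip((2, 3, 4), self.attn):
            # Fold the two untouched spatial axes into the batch dimension.
            folded = x.movedim(axis, -1).movedim(1, -1)        # (N, s1, s2, L, C)
            shape = folded.shape
            seq = folded.reshape(-1, shape[-2], shape[-1])     # (N*s1*s2, L, C)
            out, _ = attn(seq, seq, seq)
            # Restore the original (N, C, D, H, W) layout.
            x = out.reshape(shape).movedim(-1, 1).movedim(-1, axis)
        return x

x = torch.randn(1, 32, 8, 8, 8)
print(AxialAttention3D(32)(x).shape)                           # torch.Size([1, 32, 8, 8, 8])
```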

21 pages, 4793 KiB  
Article
Deep Learning for Glioblastoma Multiforme Detection from MRI: A Statistical Analysis for Demographic Bias
by Kebin Contreras, Julio Gutierrez-Rengifo, Oscar Casanova-Carvajal, Angel Luis Alvarez, Patricia E. Vélez-Varela and Ana Lorena Urbano-Bojorge
Appl. Sci. 2025, 15(11), 6274; https://doi.org/10.3390/app15116274 - 3 Jun 2025
Viewed by 628
Abstract
Glioblastoma, IDH-wildtype (GBM), is the most aggressive and complex brain tumour classified by the World Health Organization (WHO), characterised by high mortality rates and diagnostic limitations inherent to invasive conventional procedures. Early detection is essential for improving patient outcomes, underscoring the need for non-invasive diagnostic tools. This study presents a convolutional neural network (CNN) specifically optimised for GBM detection from T1-weighted magnetic resonance imaging (MRI), with systematic evaluations of layer depth, activation functions, and hyperparameters. The model was trained on the RSNA-MICCAI data set and externally validated on the Erasmus Glioma Database (EGD), which includes gliomas of various grades and preserves cranial structures, unlike the skull-stripped RSNA-MICCAI images. This morphological discrepancy demonstrates the generalisation capacity of the model across anatomical and acquisition differences, achieving an F1-score of 0.88. Furthermore, statistical tests, such as Shapiro–Wilk, Mann–Whitney U, and Chi-square, confirmed the absence of demographic bias in model predictions, based on p-values, confidence intervals, and statistical power analyses supporting its demographic fairness. The proposed model achieved an area under the curve–receiver operating characteristic (AUC-ROC) of 0.63 on the RSNA-MICCAI test set, surpassing all prior results submitted to the BraTS 2021 challenge, and establishing a reliable and generalisable approach for non-invasive GBM detection. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
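
The demographic-bias analysis combines standard hypothesis tests. Purely as an illustration of how such checks are typically run (synthetic data, not the study's code), a SciPy sketch comparing prediction scores and error counts across two groups could look like this:

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
scores_a = rng.normal(0.62, 0.10, 80)   # predicted GBM probabilities, demographic group A (synthetic)
scores_b = rng.normal(0.60, 0.11, 70)   # demographic group B (synthetic)

# Normality check guides the choice of a non-parametric comparison.
_, p_norm_a = shapiro(scores_a)
_, p_norm_b = shapiro(scores_b)
print("Shapiro-Wilk p-values:", p_norm_a, p_norm_b)

# Do the score distributions differ between groups?
_, p_scores = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print("Mann-Whitney U p-value:", p_scores)

# Are misclassification counts independent of group membership?
table = np.array([[12, 68],   # group A: errors, correct predictions
                  [11, 59]])  # group B: errors, correct predictions
chi2, p_chi, dof, _ = chi2_contingency(table)
print("Chi-square p-value:", p_chi)
```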

17 pages, 2718 KiB  
Article
Semantic Segmentation of Brain Tumors Using a Local–Global Attention Model
by Shuli Xing, Zhenwei Lai, Junxiong Zhu, Wenwu He and Guojun Mao
Appl. Sci. 2025, 15(11), 5981; https://doi.org/10.3390/app15115981 - 26 May 2025
Viewed by 680
Abstract
The distinctions between tumor areas and surrounding tissues are often subtle. Additionally, the morphology and size of tumors can vary significantly among different patients. These factors pose considerable challenges for the precise segmentation of tumors and subsequent diagnosis. Therefore, researchers are trying to develop an automated and accurate segmentation model. Currently, many segmentation models in deep learning rely on Convolutional Neural Network or Vision Transformer. However, Convolution-based models often fail to deliver precise segmentation results, while Transformer-based models often require more computational resources. To address these challenges, we propose a novel hybrid model named Local–Global UNet Transformer. In our model, we introduce: (1) a semantic-oriented masked attention to enhance the feature extraction capability of the decoder; and (2) network-in-network blocks to increase channel modeling complexity in the encoder while reducing the parameter consumption associated with residual blocks. We evaluate our model on two public brain tumor segmentation datasets, and the experimental results demonstrate that our model achieves the highest average Dice score on the BraTS2024-GLI dataset and ranks second on the BraTS2023-GLI dataset. In terms of HD95, our model attains the lowest values on both datasets. Furthermore, the ablation study proves the effectiveness of our model design. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Processing and Analysis)
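
The network-in-network blocks mentioned here use 1x1 convolutions to deepen channel modeling cheaply. A generic PyTorch sketch of such a block is shown below; the layer sizes are illustrative assumptions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class NiNBlock(nn.Module):
    """Network-in-network style block: a spatial conv followed by 1x1 convs
    that add per-pixel channel mixing with few extra parameters."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),   # "micro network" over channels
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

print(NiNBlock(4, 32)(torch.randn(1, 4, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```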

22 pages, 3829 KiB  
Article
Brain Tumour Segmentation and Grading Using Local and Global Context-Aggregated Attention Network Architecture
by Ahmed Abdulhakim Al-Absi, Rui Fu, Nadhem Ebrahim, Mohammed Abdulhakim Al-Absi and Dae-Ki Kang
Bioengineering 2025, 12(5), 552; https://doi.org/10.3390/bioengineering12050552 - 21 May 2025
Viewed by 619
Abstract
Brain tumours (BTs) are among the most dangerous and life-threatening cancers in humans of all ages, and the early detection of BTs can make a huge difference to their treatment. However, grade recognition is a challenging issue for radiologists involved in automated diagnosis and healthcare monitoring. Recent research has been motivated by the search for deep learning-based mechanisms for segmentation and grading to assist radiologists in diagnostic analysis. Segmentation refers to the identification and delineation of tumour regions in medical images, while classification classifies based on tumour characteristics, such as the size, location and enhancement pattern. The main aim of this research is to design and develop an intelligent model that can detect and grade tumours more effectively. This research develops an aggregated architecture called LGCNet, which combines a local context attention network and a global context attention network. LGCNet makes use of information extracted through the task, dimension and scale. Specifically, a global context attention network is developed for capturing multiple-scale features, and a local context attention network is designed for specific tasks. Thereafter, both networks are aggregated, and the learning network is designed to balance all the tasks by combining the loss functions of the classification and segmentation. The main advantage of LGCNet is its dedicated network for a specific task. The proposed model is evaluated by considering the BraTS2019 dataset with different metrics, such as the Dice score, sensitivity, specificity and Hausdorff score. Comparative analysis with the existing model shows marginal improvement and provides scope for further research into BT segmentation and classification. The scope of this study focuses on the BraTS2019 dataset, with future work aiming to extend the applicability of the model to different clinical and imaging environments. Full article
(This article belongs to the Section Biosignal Processing)
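
LGCNet balances its tasks by combining classification and segmentation losses. A generic multi-task loss of that kind is sketched below in PyTorch; the Dice/cross-entropy pairing and the weights are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - soft Dice over a batch of binary segmentation maps."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, w_seg=1.0, w_cls=0.5):
    """Weighted sum of a segmentation Dice loss and a grading cross-entropy loss."""
    return (w_seg * soft_dice_loss(seg_logits, seg_target)
            + w_cls * F.cross_entropy(cls_logits, cls_target))

seg_logits = torch.randn(2, 1, 64, 64)                 # one-channel tumour mask logits
seg_target = (torch.rand(2, 1, 64, 64) > 0.7).float()
cls_logits = torch.randn(2, 4)                         # e.g. 4 tumour grades
cls_target = torch.randint(0, 4, (2,))
print(multitask_loss(seg_logits, seg_target, cls_logits, cls_target))
```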

16 pages, 3511 KiB  
Article
Frequency-Aware Diffusion Model for Multi-Modal MRI Image Synthesis
by Mingfeng Jiang, Peihang Jia, Xin Huang, Zihan Yuan, Dongsheng Ruan, Feng Liu and Ling Xia
J. Imaging 2025, 11(5), 152; https://doi.org/10.3390/jimaging11050152 - 11 May 2025
Viewed by 1209
Abstract
Magnetic Resonance Imaging (MRI) is a widely used, non-invasive imaging technology that plays a critical role in clinical diagnostics. Multi-modal MRI, which combines images from different modalities, enhances diagnostic accuracy by offering comprehensive tissue characterization. Meanwhile, multi-modal MRI enhances downstream tasks, like brain tumor segmentation and image reconstruction, by providing richer features. While recent advances in diffusion models (DMs) show potential for high-quality image translation, existing methods still struggle to preserve fine structural details and ensure accurate image synthesis in medical imaging. To address these challenges, we propose a Frequency-Aware Diffusion Model (FADM) for generating high-quality target modality MRI images from source modality images. The FADM incorporates a discrete wavelet transform within the diffusion model framework to extract both low- and high-frequency information from MRI images, enhancing the capture of tissue structural and textural features. Additionally, a wavelet downsampling layer and supervision module are incorporated to improve frequency awareness and optimize high-frequency detail extraction. Experimental results on the BraTS 2021 dataset and a 1.5T–3T MRI dataset demonstrate that the FADM outperforms existing generative models, particularly in preserving intricate brain structures and tumor regions while generating high-quality MRI images. Full article
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
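
The frequency-aware component relies on a discrete wavelet transform to split an image into low- and high-frequency sub-bands. A single-level 2D Haar decomposition, written directly in NumPy, is sketched below as background; the paper embeds the transform inside a diffusion network, which is not shown here:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """Single-level 2D orthonormal Haar DWT on an even-sized 2D array.
    Returns the approximation sub-band and three detail sub-bands at half resolution."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-frequency approximation
    lh = (a - b + c - d) / 2.0          # detail sub-band
    hl = (a + b - c - d) / 2.0          # detail sub-band
    hh = (a - b - c + d) / 2.0          # detail sub-band
    return ll, lh, hl, hh

img = np.random.rand(8, 8)
for name, band in zip(("LL", "LH", "HL", "HH"), haar_dwt2(img)):
    print(name, band.shape)             # each sub-band is (4, 4)
```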

15 pages, 1616 KiB  
Article
DiffBTS: A Lightweight Diffusion Model for 3D Multimodal Brain Tumor Segmentation
by Zuxin Nie, Jiahong Yang, Chengxuan Li, Yaqin Wang and Jun Tang
Sensors 2025, 25(10), 2985; https://doi.org/10.3390/s25102985 - 9 May 2025
Viewed by 900
Abstract
Denoising diffusion probabilistic models (DDPMs) have achieved remarkable success across various research domains. However, their high complexity when processing 3D images remains a limitation. To mitigate this, researchers typically preprocess data into 2D slices, enabling the model to perform segmentation in a reduced 2D space. This paper introduces DiffBTS, an end-to-end, lightweight diffusion model specifically designed for 3D brain tumor segmentation. DiffBTS replaces the conventional self-attention module in traditional diffusion models with an efficient 3D self-attention mechanism. The mechanism is applied between the down-sampling and skip connections in the model, allowing it to capture long-range dependencies and global semantic information more effectively. This design prevents the computational complexity from growing quadratically. Prediction accuracy and model stability are crucial in brain tumor segmentation; we propose the Edge-Blurring Guided (EBG) algorithm, which directs the diffusion model to focus more on the accuracy of segmentation boundaries during the iterative sampling process. This approach enhances prediction accuracy and stability. To assess the performance of DiffBTS, we compared it with seven state-of-the-art models on the BraTS 2020 and BraTS 2021 datasets. DiffBTS achieved an average Dice score of 89.99 and an average HD95 value of 1.928 mm on BraTS2021, and 86.44 and 2.466 mm on BraTS2020, respectively. Extensive experimental results demonstrate that DiffBTS achieves state-of-the-art performance in brain tumor segmentation, outperforming all competing models.
(This article belongs to the Section Biomedical Sensors)
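
As background on the diffusion component, the forward (noising) step of a DDPM corrupts the input according to a variance schedule. A minimal sketch of sampling x_t from q(x_t | x_0) is given below; the schedule and shapes are illustrative, and the paper's Edge-Blurring Guided sampling is not reproduced:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # linear variance schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)    # cumulative product \bar{alpha}_t

def q_sample(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    abar = alphas_cumprod[t].view(-1, 1, 1, 1, 1)     # broadcast over (N, C, D, H, W)
    noise = torch.randn_like(x0)
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise

x0 = torch.randn(2, 1, 16, 16, 16)                    # e.g. a 3D segmentation map scaled to [-1, 1]
t = torch.tensor([10, 500])                           # a different timestep per batch element
print(q_sample(x0, t).shape)                          # torch.Size([2, 1, 16, 16, 16])
```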

19 pages, 2258 KiB  
Article
A Multidimensional Particle Swarm Optimization-Based Algorithm for Brain MRI Tumor Segmentation
by Zsombor Boga, Csanád Sándor and Péter Kovács
Sensors 2025, 25(9), 2800; https://doi.org/10.3390/s25092800 - 29 Apr 2025
Cited by 2 | Viewed by 791
Abstract
Particle Swarm Optimization (PSO) has been extensively applied to optimization tasks in various domains, including image segmentation. In this work, we present a clustering-based segmentation algorithm that employs a multidimensional variant of PSO. Unlike conventional methods that require a predefined number of segments, our approach automatically selects an optimal segmentation granularity based on specified similarity criteria. This strategy effectively isolates brain tumors by incorporating both grayscale intensity and spatial information across multiple MRI modalities, allowing the method to be reliably tuned using a limited amount of training data. We further demonstrate how integrating these initial segmentations with a random forest classifier (RFC) enhances segmentation precision. Using MRI data from the RSNA-ASNR-MICCAI brain tumor segmentation (BraTS) challenge, our method achieves robust results with reduced reliance on extensive labeled datasets, offering a more efficient path toward accurate, clinically relevant tumor segmentation. Full article
(This article belongs to the Special Issue Sensors and Machine-Learning Based Signal Processing)
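
For orientation, a basic fixed-dimension PSO loop fitting two cluster centres to toy 1D intensity data is sketched below. The multidimensional variant used in the paper, which also searches over the number of clusters, and the random-forest refinement stage are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 200)])  # toy intensities

def fitness(centres: np.ndarray) -> float:
    """Sum of squared distances of each point to its nearest cluster centre."""
    d = np.abs(data[:, None] - centres[None, :])
    return float((d.min(axis=1) ** 2).sum())

n_particles, n_clusters, iters = 20, 2, 100
pos = rng.random((n_particles, n_clusters))           # particle = one candidate set of centres
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("Estimated cluster centres:", np.sort(gbest))   # approximately [0.2, 0.8]
```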

38 pages, 2189 KiB  
Review
Advanced Deep Learning and Machine Learning Techniques for MRI Brain Tumor Analysis: A Review
by Rim Missaoui, Wided Hechkel, Wajdi Saadaoui, Abdelhamid Helali and Marco Leo
Sensors 2025, 25(9), 2746; https://doi.org/10.3390/s25092746 - 26 Apr 2025
Cited by 2 | Viewed by 3203
Abstract
A brain tumor is the result of abnormal growth of cells in the central nervous system (CNS), widely considered as a complex and diverse clinical entity that is difficult to diagnose and cure. In this study, we focus on current advances in medical imaging, particularly magnetic resonance imaging (MRI), and how machine learning (ML) and deep learning (DL) algorithms might be combined with clinical assessments to improve brain tumor diagnosis. Due to its superior contrast resolution and safety compared to other imaging methods, MRI is highlighted as the preferred imaging modality for brain tumors. The challenges related to brain tumor analysis in different processes including detection, segmentation, classification, and survival prediction are addressed along with how ML/DL approaches significantly improve these steps. We systematically analyzed 107 studies (2018–2024) employing ML, DL, and hybrid models across publicly available datasets such as BraTS, TCIA, and Figshare. In the light of recent developments in brain tumor analysis, many algorithms have been proposed to accurately obtain ontological characteristics of tumors, enhancing diagnostic precision and personalized therapeutic strategies. Full article
(This article belongs to the Special Issue Feature Papers in Smart Sensing and Intelligent Sensors 2025)

29 pages, 8972 KiB  
Article
GMDNet: Grouped Encoder-Mixer-Decoder Architecture Based on the Role of Modalities for Brain Tumor MRI Image Segmentation
by Peng Yang, Ruihao Zhang, Can Hu and Bin Guo
Electronics 2025, 14(8), 1658; https://doi.org/10.3390/electronics14081658 - 19 Apr 2025
Viewed by 425
Abstract
Although deep learning has significantly advanced brain tumor MRI segmentation and preoperative planning, existing methods like U-Net and Transformer, which are widely used Encoder–Decoder architectures in medical image segmentation, still have limitations. Specifically, these methods fail to fully leverage the unique characteristics of different MRI modalities during the feature extraction stage, thereby hindering further improvements in segmentation accuracy. Currently, MRI modalities are typically treated as independent entities or as uncorrelated features during feature extraction, neglecting their potential interdependencies. To address this gap, we introduce the GMD architecture (Grouped Encoder-Mixer-Decoder), which is designed to enhance information capture during feature extraction by considering the intercorrelation and complementary nature of different modalities. In the proposed GMD architecture, input images are first grouped by modality in the grouped encoder based on a modality-specific strategy. The extracted features are then fused and optimized in the mixer module, and the final segmentation is achieved through the decoder. We implement this architecture in GMDNet to validate its effectiveness. Experiments demonstrate that GMDNet not only achieves outstanding performance under complete modality conditions but also maintains robust performance even when certain modalities are missing. To further enhance performance in incomplete modality, we propose an innovative reuse modality strategy that significantly improves segmentation accuracy compared to conventional approaches. We evaluated the performance of GMDNet on the BraTS 2018 and BraTS 2021 datasets. Under complete modality conditions, GMDNet achieved Dice scores of 91.21%, 87.11%, 80.97%, and 86.43% for WT (Whole Tumor), TC (Tumor Core), ET (Enhancing Tumor) and Average on the BraTS 2018, and 91.87%, 87.25%, 83.16%, and 87.42% on the BraTS 2021. Under incomplete modality conditions, when T1, T1ce, T2, and FLAIR were missing, the Dice scores on the BraTS 2021 dataset were 86.47%, 73.29%, 86.46%, and 82.54%, respectively. After applying the reuse modality strategy, the scores improved to 87.17%, 75.07%, 86.91%, and 86.22%. Overall, extensive experiments demonstrate that proposed GMDNet architecture achieves state-of-the-art performance, outperforming the compared models of this paper in complete or incomplete modality. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
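
The grouped-encoder idea, encoding MRI modalities in groups before fusing them, could look roughly like the following PyTorch sketch. The T1/T1ce versus T2/FLAIR grouping and the simple concatenation-plus-convolution mixer are illustrative assumptions, not GMDNet's actual design:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

class GroupedEncoderMixer(nn.Module):
    """Encode modality groups separately, then mix the fused features."""
    def __init__(self, feats: int = 16):
        super().__init__()
        self.enc_group1 = conv_block(2, feats)     # group 1: T1 + T1ce (illustrative grouping)
        self.enc_group2 = conv_block(2, feats)     # group 2: T2 + FLAIR (illustrative grouping)
        self.mixer = conv_block(2 * feats, 2 * feats)

    def forward(self, x):                          # x: (N, 4, D, H, W) = T1, T1ce, T2, FLAIR
        g1 = self.enc_group1(x[:, 0:2])
        g2 = self.enc_group2(x[:, 2:4])
        return self.mixer(torch.cat([g1, g2], dim=1))

x = torch.randn(1, 4, 16, 16, 16)
print(GroupedEncoderMixer()(x).shape)              # torch.Size([1, 32, 16, 16, 16])
```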

23 pages, 5946 KiB  
Article
MEASegNet: 3D U-Net with Multiple Efficient Attention for Segmentation of Brain Tumor Images
by Ruihao Zhang, Peng Yang, Can Hu and Bin Guo
Appl. Sci. 2025, 15(7), 3791; https://doi.org/10.3390/app15073791 - 30 Mar 2025
Viewed by 1232
Abstract
Brain tumors are a type of disease that affects people’s health and have received extensive attention. Accurate segmentation of Magnetic Resonance Imaging (MRI) images for brain tumors is essential for effective treatment strategies. However, there is scope for enhancing the segmentation accuracy of established deep learning approaches, such as 3D U-Net. In pursuit of improved segmentation precision for brain tumor MRI images, we propose the MEASegNet, which incorporates multiple efficient attention mechanisms into the 3D U-Net architecture. The encoder employs Parallel Channel and Spatial Attention Block (PCSAB), the bottleneck layer leverages Channel Reduce Residual Atrous Spatial Pyramid Pooling (CRRASPP) attention, and the decoder layer incorporates Selective Large Receptive Field Block (SLRFB). Through the integration of various attention mechanisms, we enhance the capacity for detailed feature extraction, facilitate the interplay among distinct features, and ensure the retention of more comprehensive feature information. Consequently, this leads to an enhancement in the segmentation precision of 3D U-Net for brain tumor MRI images. In conclusion, our extensive experimentation on the BraTS2021 dataset yields Dice scores of 92.50%, 87.49%, and 84.16% for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively. These results indicate a marked improvement in segmentation accuracy over the conventional 3D U-Net. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
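
The atrous spatial pyramid pooling element in the bottleneck is a standard construct; a compact 3D ASPP sketch in PyTorch is shown below. The dilation rates and channel counts are illustrative, and the residual and channel-reduction details of CRRASPP are omitted:

```python
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Parallel dilated 3D convolutions capture context at several receptive-field sizes."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 8, 8, 8)
print(ASPP3D(64, 32)(x).shape)   # torch.Size([1, 32, 8, 8, 8])
```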

10 pages, 895 KiB  
Opinion
Latest News from the “Guardian”: p53 Directly Activates Asymmetric Stem Cell Division Regulators
by Ana Carmena
Int. J. Mol. Sci. 2025, 26(7), 3171; https://doi.org/10.3390/ijms26073171 - 29 Mar 2025
Viewed by 722
Abstract
Since its discovery in 1979, the human tumor suppressor gene TP53—also known as the “guardian of the genome”—has been the subject of intense research. Mutated in most human cancers, TP53 has traditionally been considered a key fighter against stress factors by trans-activating a network of target genes that promote cell cycle arrest, DNA repair, or apoptosis. Intriguingly, over the past years, novel non-canonical functions of p53 in unstressed cells have also emerged, including the mode of stem cell division regulation. However, the mechanisms by which p53 modulates these novel functions remain incompletely understood. In a recent work, we found that Drosophila p53 controls asymmetric stem cell division (ASCD) in neural stem cells by transcriptionally activating core ASCD regulators, such as the conserved cell-fate determinants Numb and Brat (NUMB and TRIM3/TRIM2/TRIM32 in humans, respectively). In this short communication, we comment on this new finding, the mild phenotypes associated with Drosophila p53 mutants in this context, as well as novel avenues for future research. Full article