Search Results (88)

Search Parameters:
Keywords = BraTS challenges

28 pages, 1349 KB  
Article
HAAU-Net: Hybrid Adaptive Attention U-Net Integrated with Context-Aware Morphologically Stable Features for Real-Time MRI Brain Tumor Detection and Segmentation
by Muhammad Adeel Asghar, Sultan Shoaib and Muhammad Zahid
Tomography 2026, 12(4), 44; https://doi.org/10.3390/tomography12040044 - 25 Mar 2026
Abstract
Background: Magnetic Resonance Imaging (MRI)-based tumor segmentation remains a challenging problem in medical imaging due to tumor heterogeneity, unpredictable morphological features, and the computational complexity of clinical implementation, which places it beyond the reach of real-time applications. Although neural networks have significantly improved segmentation performance, they still struggle to capture morphological tumor features while maintaining computational efficiency. This work introduces the Hybrid Adaptive Attention U-Net (HAAU-Net) framework, which combines context-aware morphologically stable features with spatial-channel attention to achieve high-quality tumor segmentation at lower computational cost. Methods: The proposed HAAU-Net framework integrates multi-scale Adaptive Attention Blocks (AAB), a Context-Aware Morphological Feature Module (CAMFM), and a Spatial-Channel Hybrid Attention Mechanism (SCHAM). CAMFM maintains the stability of morphological features through hierarchical aggregation and dynamic normalization. SCHAM enhances feature representation by modelling the channels and spatial regions whose strongest features drive the segmentation. The proposed HAAU-Net is evaluated on the BraTS 2022/2023 data using four modalities: T1, T1GD, T2, and T2-FLAIR sequences. Results: The proposed model achieves 96.8% segmentation accuracy with a Dice coefficient of 0.89 on the whole tumor region, outperforming a baseline U-Net (0.83) and conventional CNN segmentation methods (0.81). The HAAU-Net architecture reduces the computational complexity of standard deep learning models by 43% while still achieving real-time inference (28 FPS on a standard GPU). The hybrid survival-prediction model attains a C-index of 0.91, higher than traditional SVM-based methods (0.72). Conclusions: Combining spatial-channel attention with morphologically stable features yields clinically meaningful interpretability in the attention maps. The proposed framework significantly improves segmentation performance while maintaining computational efficiency. The system shows strong potential for AI-enabled clinical decision support and early prognostic diagnosis in neuro-oncology, with practical deployment capability. Full article
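As a concrete reference for the overlap metrics quoted throughout these results, here is a minimal plain-Python sketch of the Dice coefficient and Jaccard index on flat binary masks (the function name and list representation are illustrative, not from the paper):

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for binary masks given as
    flat lists of 0/1 values of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    # Convention: two empty masks count as a perfect match.
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The whole-tumor Dice of 0.89 reported above is this quantity computed voxel-wise over the predicted and reference segmentation masks.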

46 pages, 3952 KB  
Article
A Hybrid Particle Swarm–Genetic Algorithm Framework for U-Net Hyperparameter Optimization in High-Precision Brain Tumor MRI Segmentation
by Shoffan Saifullah, Rafał Dreżewski, Anton Yudhana, Radius Tanone and Andiko Putro Suryotomo
Appl. Sci. 2026, 16(6), 3041; https://doi.org/10.3390/app16063041 - 21 Mar 2026
Abstract
Accurate and robust brain tumor segmentation remains a critical challenge in medical image analysis due to high inter-patient variability, complex tumor morphology, and modality-specific noise in MRI scans. This study proposes PSO-GA-U-Net, a novel hybrid deep learning framework that integrates Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) to optimize the U-Net architecture, enhancing segmentation performance and generalization. PSO dynamically tunes the learning rate to accommodate modality-specific variations, while the GA adaptively regulates dropout to improve feature diversity and reduce overfitting. The model was evaluated on three benchmark datasets—FBTS, BraTS 2021, and BraTS 2018—using five-fold cross-validation. PSO-GA-U-Net achieves Dice Similarity Coefficients (DSC) of 0.9587, 0.9406, and 0.9480 and Jaccard Index (JI) scores of 0.9209, 0.8881, and 0.9024, respectively, consistently outperforming state-of-the-art models in both overlap accuracy and boundary delineation. Statistical tests confirm that these improvements are significant across folds (p<0.05). Visual heatmaps further illustrate the model’s ability to preserve structural integrity across tumor types and modalities. These results indicate that metaheuristic-guided deep learning offers a promising and clinically applicable solution for automatic tumor segmentation in radiological workflows. Full article
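The paper's hybrid pipeline uses PSO to tune the learning rate; as an illustration of the PSO half only, the following is a minimal one-dimensional particle swarm minimizing a stand-in objective. All names, bounds, and coefficients are assumptions, not the authors' configuration:

```python
import random

def pso_tune(objective, lo, hi, n_particles=8, iters=30, seed=0):
    """Minimal PSO for one scalar hyperparameter: minimize `objective`
    over the bounded interval [lo, hi]."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                        # personal best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5             # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) \
                                + c2 * r2 * (gbest - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest, gbest_f
```

In the actual framework the objective would be a validation metric of a trained U-Net rather than the cheap analytic function used in the test below.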
(This article belongs to the Special Issue Advanced Techniques and Applications in Magnetic Resonance Imaging)

20 pages, 1689 KB  
Article
Optimization-Driven Multimodal Brain Tumor Segmentation Using α-Expansion Graph Cuts
by Roaa Soloh, Bilal Nakhal and Abdallah El Chakik
Computation 2026, 14(3), 70; https://doi.org/10.3390/computation14030070 - 15 Mar 2026
Abstract
Precise segmentation of brain tumors from multimodal MRI scans is essential for accurate neuro-oncological diagnosis and treatment planning. To address this challenge, we propose a label-free optimization-driven segmentation framework based on the α-expansion graph cut algorithm, offering improved computational efficiency and interpretability compared to deep learning alternatives. The method relies on structured optimization and handcrafted features, including local intensity patches, entropy-based texture descriptors, and statistical moments, to compute voxel-wise unary potentials via gradient-boosted decision trees (XGBoost). These are integrated with spatially adaptive pairwise terms within a graph model optimized through α-expansion. Evaluation on 146 BraTS validation volumes demonstrates reliable whole-tumor overlap, with a mean Dice score of 0.855 ± 0.184 and a 95% Hausdorff distance of 18.66 mm. Bootstrap analysis confirms the statistical stability of these results. The low computational overhead and modular design make the method particularly suitable for transparent and resource-constrained clinical deployment scenarios. Full article
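α-expansion graph cuts minimize an energy of the general form sketched below: voxel-wise unary (data) costs plus a Potts pairwise smoothness term over neighboring voxels. The sketch only evaluates such an energy for a given labeling; it is not the α-expansion move algorithm itself, and all names are illustrative:

```python
def energy(labels, unary, edges, lam):
    """Energy of a labeling under unary costs plus a Potts pairwise term.

    labels[i]  : integer label assigned to node i
    unary[i][l]: cost of assigning label l to node i
    edges      : list of (i, j) neighbor pairs
    lam        : smoothness weight paid whenever neighbors disagree
    """
    data = sum(unary[i][labels[i]] for i in range(len(labels)))
    smooth = sum(lam for i, j in edges if labels[i] != labels[j])
    return data + smooth
```

In the paper, the unary costs come from XGBoost probabilities over handcrafted features and the pairwise weights are spatially adaptive rather than a constant lam.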
(This article belongs to the Section Computational Biology)

24 pages, 3704 KB  
Article
Source-Free Active Domain Adaptation for Brain Tumor Segmentation via Mamba and Region-Level Uncertainty
by Haowen Zheng, Che Wang, Yudan Zhou, Congbo Cai and Zhong Chen
Brain Sci. 2026, 16(3), 300; https://doi.org/10.3390/brainsci16030300 - 8 Mar 2026
Abstract
Background/Objectives: Accurate brain tumor segmentation from MRI is crucial for diagnosis but faces challenges like domain shifts across medical centers, data privacy constraints, and high annotation costs. While source-free active domain adaptation (SFADA) has emerged as a promising solution to these issues, existing approaches often overlook the inherent structural complexity in tumor regions. Methods: We propose a novel SFADA framework composed of two major contributions. First, we introduce a Region-level Uncertainty-Guided Sample Selection (RUGS) strategy, enabling the identification of the most informative target-domain samples in a single inference pass. Second, we present the Source-Free Active Domain Adaptation Network (SFADA-Net), a Mamba-driven segmentation model equipped with a dual-path multi-kernel convolution module for enhanced local feature interaction and a structure-aware prompted Mamba module for capturing global spatial relationships. Results: Extensive evaluations across one source domain dataset (BraTS-2021) and three target domain datasets (BraTS-SSA, BraTS-PED, and BraTS-MEN 2023) demonstrate the superior adaptability of the proposed method, achieving consistently high segmentation accuracy across domains. With only a 5% annotation budget, our framework consistently outperforms state-of-the-art segmentation and domain adaptation methods, achieving robust segmentation accuracy across diverse domains and approaching the performance of fully supervised learning. Conclusions: The proposed method achieves superior accuracy in brain tumor region segmentation and precise boundary delineation under a limited annotation budget. It effectively mitigates domain shift while fully complying with data privacy regulations. Consequently, our framework relieves manual annotation bottlenecks and accelerates the cross-center deployment of accurate diagnostic tools, facilitating the clinical application of domain adaptation. Full article
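As a toy illustration of the region-level uncertainty idea (a hypothetical simplification, not the paper's RUGS strategy): score each candidate volume by its mean voxel-wise prediction entropy and spend the annotation budget on the most uncertain ones.

```python
import math

def entropy(p):
    """Entropy of a binary voxel prediction with tumor probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def select_uncertain(volumes, budget):
    """Rank candidate volumes (flat lists of voxel probabilities) by mean
    entropy and return the indices of the `budget` most uncertain ones."""
    scores = [sum(entropy(p) for p in v) / len(v) for v in volumes]
    order = sorted(range(len(volumes)), key=lambda i: scores[i], reverse=True)
    return order[:budget]
```

Confident volumes (probabilities near 0 or 1) score low and are skipped; volumes with many ambiguous voxels are sent for annotation first.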

30 pages, 8409 KB  
Article
SCAG-Net: Automated Brain Tumor Prediction from MRI Using Cuttlefish-Optimized Attention-Based Graph Networks
by Vijay Govindarajan, Ashit Kumar Dutta, Amr Yousef, Mohd Anjum, Ali Elrashidi and Sana Shahab
Diagnostics 2026, 16(4), 565; https://doi.org/10.3390/diagnostics16040565 - 13 Feb 2026
Abstract
Background/Objectives: Earlier, more accurate, and more consistent brain tumor recognition requires automated systems to minimize diagnostic delays and human error. Such systems provide a platform for handling large volumes of medical images, speeding up clinical decision-making. However, existing systems face difficulties due to the high variability in tumor location, size, and shape, which leads to segmentation complexity. In addition, glioma-related tumors infiltrate brain tissue, making it challenging to identify the exact tumor region. Method: These difficulties are addressed by combining Swin-UNet with cuttlefish-optimized attention-based Graph Neural Networks (SCAG-Net), improving overall brain tumor recognition accuracy. This integrated approach addresses infiltrative gliomas, tumor variability, and feature redundancy, improving diagnostic efficiency. First, the collected MRI images are processed with Swin-UNet to localize the tumor region while robustly minimizing prediction error. Features of the identified region are then selected using the cuttlefish algorithm, which removes redundancy and speeds up classification while improving accuracy. The selected features are processed by the attention graph network, which handles structural and heterogeneous information across multiple layers, improving classification accuracy over existing methods. Results: Evaluated on public datasets (BRATS 2018, BRATS 2019, BRATS 2020, and Figshare), the proposed SCAG-Net achieves the highest recognition accuracy: a Dice coefficient of 0.989, an Intersection over Union of 0.969, and a classification accuracy of 0.992. This performance surpasses the most recent benchmark models by margins of 1.0% to 1.8%, with statistically significant differences (p < 0.05). These findings present a statistically validated, computationally efficient, clinically deployable framework. Conclusions: Effective analysis of complex MRI structures benefits medical applications and clinical analysis. The proposed SCAG-Net framework significantly improves brain tumor recognition by addressing tumor heterogeneity and infiltrative gliomas in MRI images. It provides a robust, efficient, and clinically deployable solution for brain tumor recognition, supporting accurate and rapid diagnosis while maintaining expert-level performance. Full article

17 pages, 1286 KB  
Article
Brain Tumor Segmentation with Contextual Transformer-Based U-Net
by Shakhnoza Muksimova, Jushkin Baltaev and Young Im Cho
Electronics 2026, 15(4), 782; https://doi.org/10.3390/electronics15040782 - 12 Feb 2026
Abstract
The segmentation of brain tumors from magnetic resonance imaging (MRI) scans remains a major challenge in medicine, with a strong impact on accurate diagnosis, efficient treatment planning, and patient prognosis. We present the Contextual Transformer U-Net (CT-UNet), a novel deep learning approach that significantly increases the accuracy and speed of brain tumor segmentation. CT-UNet embeds Transformer blocks in a U-Net layout to extract the most important contextual information across different MRI sequences, sharply refining the delineation of tumor regions. We tested CT-UNet on the Brain Tumor Segmentation (BraTS) challenge dataset, which includes a wide variety of tumor types, localizations, and progression stages, evaluating performance with the Dice coefficient, sensitivity, specificity, precision, and Hausdorff distance. Our experiments show that CT-UNet substantially outperforms classical segmentation models: its 0.92 Dice coefficient reflects state-of-the-art tumor localization in both extent and shape. CT-UNet also achieves high sensitivity (0.90) and specificity (0.94), reliably discriminating tumor from non-tumor tissue, and its 7.5 mm Hausdorff distance indicates close replication of tumor boundaries. By employing dynamic modality fusion and incorporating the Transformer mechanism into the established U-Net architecture, we raise the bar for brain tumor segmentation. CT-UNet not only speeds up the workflow of radiologists but also facilitates more targeted therapeutic strategies that may result in better patient care and prognosis. Ultimately, this work aims to provide a basis for future studies on incorporating deep learning methods into routine clinical settings, giving healthcare providers both technical and clinical advantages. Full article
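The Hausdorff distance reported here (and, in 95th-percentile form, in several other abstracts on this page) measures the worst boundary disagreement between two point sets. A plain-Python sketch for small 2-D point sets follows; production pipelines use distance transforms rather than this O(n·m) scan:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty 2-D point sets,
    each given as a list of (x, y) tuples."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        # Farthest that any point of xs is from its nearest point in ys.
        return max(min(d(p, q) for q in ys) for p in xs)
    return max(directed(a, b), directed(b, a))
```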

12 pages, 781 KB  
Proceeding Paper
Bayesian Optimization-Driven U-Net Architecture Tuning for Brain Tumor Segmentation
by Shoffan Saifullah and Rafał Dreżewski
Eng. Proc. 2026, 124(1), 22; https://doi.org/10.3390/engproc2026124022 - 9 Feb 2026
Abstract
Precise brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for clinical diagnosis and treatment planning. However, determining an optimal deep learning architecture for such tasks remains a challenge due to the vast hyperparameter space and structural variations. This paper presents a novel approach that integrates Bayesian Optimization (BO) to automatically tune the U-Net architecture for effective brain tumor segmentation. The proposed BO-UNet framework searches over encoder, bottleneck, and decoder configurations using a Gaussian Process-based surrogate model, guided by a fitness function derived from Dice Similarity Coefficient (DSC) and Jaccard Index (JI). Experiments were conducted on two benchmark datasets: the Figshare Brain Tumor Segmentation (FBTS) dataset and the BraTS 2021 dataset (focused on Whole Tumor segmentation). The best-discovered architecture [64, 64, 64, 256, 64, 128, 256] achieved notable performance: on the FBTS dataset, it reached 0.9503 DSC and 0.9054 JI; on BraTS 2021, it obtained 0.9261 DSC and 0.8631 JI, outperforming several state-of-the-art methods. Convergence and segmentation-map evolution confirm that BO effectively guided the architectural search process. These findings demonstrate the potential of BO-driven deep learning in medical imaging, opening new avenues for architecture-level optimization with minimal manual intervention. Full article
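As a self-contained illustration of the Bayesian Optimization loop (Gaussian Process surrogate plus expected improvement), the sketch below searches a one-dimensional stand-in objective. The RBF kernel, length scale, grid acquisition, and every name here are assumptions for illustration, not the BO-UNet configuration:

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, ls=0.2, noise=1e-6):
    """GP posterior mean and std at query point xq, RBF kernel."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * ls * ls))
    K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kq = [k(x, xq) for x in xs]
    mu = sum(w * kv for w, kv in zip(alpha, kq))
    v = solve(K, kq)
    var = max(1e-12, k(xq, xq) - sum(a * b for a, b in zip(kq, v)))
    return mu, math.sqrt(var)

def bayes_opt(f, lo, hi, n_init=5, iters=10, seed=0):
    """Minimize f on [lo, hi]: GP surrogate + expected-improvement picks."""
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * rng.random() for _ in range(n_init)]
    ys = [f(x) for x in xs]
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    for _ in range(iters):
        best = min(ys)
        cand = [lo + (hi - lo) * i / 60 for i in range(61)]
        def ei(xq):
            mu, sd = gp_posterior(xs, ys, xq)
            z = (best - mu) / sd
            return (best - mu) * Phi(z) + sd * phi(z)
        xn = max(cand, key=ei)     # most promising candidate on a coarse grid
        xs.append(xn)
        ys.append(f(xn))
    i = min(range(len(ys)), key=lambda j: ys[j])
    return xs[i], ys[i]
```

In BO-UNet the function being optimized is a trained network's fitness (DSC/JI based), so each evaluation is expensive; that is exactly why a cheap surrogate guides where to evaluate next.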
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

36 pages, 1319 KB  
Review
A Review of U-Net Based Deep Learning Frameworks for MRI-Based Brain Tumor Segmentation
by Ayse Bastug Koc and Devrim Akgun
Diagnostics 2026, 16(4), 506; https://doi.org/10.3390/diagnostics16040506 - 7 Feb 2026
Abstract
Automated segmentation of brain tumors from Magnetic Resonance Imaging (MRI) images is helpful for clinical diagnosis, surgical planning, and post-treatment monitoring. In recent years, the U-Net architecture has emerged as one of the most popular solutions among deep learning models. This article presents a review of 35 studies published between 2019 and 2025 focusing on U-Net-based brain tumor segmentation. The primary focus of this review is an in-depth analysis of commonly used U-Net architectures. The transformation of original 2D and 3D models into more advanced variants is examined in detail. Results from a wide range of studies are synthesized, and standard evaluation criteria are summarized along with benchmark datasets such as those from the BraTS competition to validate the effectiveness of these models. Additionally, the paper overviews the recent developments in the field, identifies fundamental challenges, and provides insight into future directions, including improving model efficiency and generalization, combining multimodal data, and advancing clinical applications. This review serves as a guide for researchers to examine the impact of the U-Net architecture on brain tumor segmentation. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

20 pages, 2026 KB  
Article
Unified Adult–Pediatric Glioma Segmentation via Synergistic MAE Pretraining and Boundary-Aware Refinement
by Moldir Zharylkassynova, Jaepil Ko and Kyungjoo Cheoi
Electronics 2026, 15(2), 329; https://doi.org/10.3390/electronics15020329 - 12 Jan 2026
Abstract
Accurate brain tumor segmentation in both adult and pediatric populations remains a challenge due to substantial differences in brain anatomy, tumor distribution, and subregion size. This study proposes a unified segmentation framework based on nnU-Net, integrating encoder-level self-supervised pretraining with a lightweight, boundary-aware decoder. The encoder is initialized using a large-scale 3D masked autoencoder pretrained on brain MRI, while the decoder is trained with a hybrid loss function that combines region-overlap and boundary-sensitive terms. A harmonized training and evaluation protocol is applied to both the BraTS-GLI (adult) and BraTS-PED (pediatric) cohorts, enabling fair cross-cohort comparison against baseline and advanced nnU-Net variants. The proposed method improves mean Dice scores from 0.76 to 0.90 for adults and from 0.64 to 0.78 for pediatric cases, while reducing HD95 from 4.42 to 2.24 mm and from 9.03 to 6.23 mm, respectively. These results demonstrate that combining encoder-level pretraining with decoder-side boundary supervision significantly enhances segmentation accuracy across age groups without adding inference-time computational overhead. Full article
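One plausible reading of a hybrid loss that "combines region-overlap and boundary-sensitive terms" is sketched below on hard binary masks; the paper trains on soft network outputs and a proper boundary loss, so this simplification is mine:

```python
def boundary_map(mask):
    """Mark pixels adjacent to the opposite class (4-neighborhood)."""
    H, W = len(mask), len(mask[0])
    out = [[0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and mask[ni][nj] != mask[i][j]:
                    out[i][j] = 1
    return out

def dice(a, b):
    """Dice overlap of two binary 2-D masks."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    inter = sum(x and y for x, y in zip(fa, fb))
    s = sum(fa) + sum(fb)
    return 2 * inter / s if s else 1.0

def hybrid_loss(pred, truth, w=0.5):
    """Region term (1 - Dice on masks) plus a boundary term
    (1 - Dice on boundary maps), weighted by w."""
    region = 1 - dice(pred, truth)
    bound = 1 - dice(boundary_map(pred), boundary_map(truth))
    return region + w * bound
```

The boundary term punishes contour misplacement even when the overall region overlap is high, which is what drives the HD95 improvements reported above.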
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)

33 pages, 5328 KB  
Article
AI-Guided Inference of Morphodynamic Attractor-like States in Glioblastoma
by Simona Ruxandra Volovăț, Diana Ioana Panaite, Mădălina Raluca Ostafe, Călin Gheorghe Buzea, Dragoș Teodor Iancu, Maricel Agop, Lăcrămioara Ochiuz, Dragoș Ioan Rusu and Cristian Constantin Volovăț
Diagnostics 2026, 16(1), 139; https://doi.org/10.3390/diagnostics16010139 - 1 Jan 2026
Abstract
Background/Objectives: Glioblastoma (GBM) exhibits heterogeneous, nonlinear invasion patterns that challenge conventional modeling and radiomic prediction. Most deep learning approaches describe the morphology but rarely capture the dynamical stability of tumor evolution. We propose an AI framework that approximates a latent attractor landscape of GBM morphodynamics—stable basins in a continuous manifold that are consistent with reproducible morphologic regimes. Methods: Multimodal MRI scans from BraTS 2020 (n = 494) were standardized and embedded with a 3D autoencoder to obtain 128-D latent representations. Unsupervised clustering identified latent basins ("attractors"). A neural ordinary differential equation (neural-ODE) approximated latent dynamics. All dynamics were inferred from cross-sectional population variability rather than longitudinal follow-up, serving as a proof-of-concept approximation of morphologic continuity. Voxel-level perturbation quantified local morphodynamic sensitivity, and proof-of-concept control was explored by adding small inputs to the neural-ODE using both a deterministic controller and a reinforcement learning agent based on soft actor–critic (SAC). Survival analyses (Kaplan–Meier, log-rank, ridge-regularized Cox) assessed associations with outcomes. Results: The learned latent manifold was smooth and clinically organized. Three dominant attractor basins were identified with significant survival stratification (χ² = 31.8, p = 1.3 × 10⁻⁷) in the static model. Dynamic attractor basins derived from neural-ODE endpoints showed modest and non-significant survival differences, confirming that these dynamic labels primarily encode the morphodynamic structure rather than fixed prognostic strata. Dynamic basins inferred from neural-ODE flows were not independently prognostic, indicating that the inferred morphodynamic field captures geometric organization rather than additional clinical risk information. The latent stability index showed a weak but borderline significant negative association with survival (ρ = −0.13 [−0.26, −0.01]; p = 0.0499). In multivariable Cox models, age remained the dominant covariate (HR = 1.30 [1.16–1.45]; p = 5 × 10⁻⁶), with overall C-indices of 0.61–0.64. Voxel-level sensitivity maps highlighted enhancing rims and peri-necrotic interfaces as influential regions. In simulation, deterministic control redirected trajectories toward lower-risk basins (≈57% success; ≈96% terminal distance reduction), while a soft actor–critic (SAC) agent produced smoother trajectories and modest additional reductions in terminal distance, albeit without matching the deterministic controller's success rate. The learned attractor classes were internally consistent and clinically distinct. Conclusions: Learning a latent attractor landscape links generative AI, dynamical systems theory, and clinical outcomes in GBM. Although limited by the cross-sectional nature of BraTS and modest prognostic gains beyond age, these results provide a mechanistic, controllable framework for tumor morphology in which inferred dynamic attractor-like flows describe latent organization rather than a clinically predictive temporal model, motivating prospective radiogenomic validation and adaptive therapy studies. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

30 pages, 3535 KB  
Article
PRA-Unet: Parallel Residual Attention U-Net for Real-Time Segmentation of Brain Tumors
by Ali Zakaria Lebani, Medjeded Merati and Saïd Mahmoudi
Information 2026, 17(1), 14; https://doi.org/10.3390/info17010014 - 23 Dec 2025
Abstract
With the increasing prevalence of brain tumors, it becomes crucial to ensure fast and reliable segmentation in MRI scans. Medical professionals struggle with manual tumor segmentation due to its exhausting and time-consuming nature. Automated segmentation speeds up decision-making and diagnosis; however, achieving an optimal balance between accuracy and computational cost remains a significant challenge. In many cases, current methods trade speed for accuracy, or vice versa, consuming substantial computing power and making them difficult to use on devices with limited resources. To address this issue, we present PRA-UNet, a lightweight deep learning model optimized for fast and accurate 2D brain tumor segmentation. Using a single 2D input, the architecture processes four types of MRI scans (FLAIR, T1, T1c, and T2). The encoder uses inverted residual blocks and bottleneck residual blocks to capture features at different scales effectively. The Convolutional Block Attention Module (CBAM) and the Spatial Attention Module (SAM) improve the bridge and skip connections by refining feature maps and making it easier to detect and localize brain tumors. The decoder uses depthwise separable convolutions, which significantly reduce computational costs without degrading accuracy. On the BraTS2020 dataset, PRA-UNet achieves a Dice score of 95.71%, an accuracy of 99.61%, and a processing speed of 60 ms per image, enabling real-time analysis. PRA-UNet outperforms other models in segmentation while requiring less computing power, suggesting it could be suitable for deployment on lightweight edge devices in clinical settings. Its speed and reliability enable radiologists to diagnose tumors quickly and accurately, enhancing practical medical applications. Full article
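The saving from depthwise separable convolutions in the decoder is easy to quantify with a parameter count (bias terms ignored; the layer sizes in the usage note are illustrative, not PRA-UNet's):

```python
def conv_params(cin, cout, k):
    """Weights in a standard k x k convolution: one k x k filter per
    (input channel, output channel) pair."""
    return cin * cout * k * k

def depthwise_separable_params(cin, cout, k):
    """Depthwise k x k (one filter per input channel) followed by a
    pointwise 1 x 1 convolution mixing channels."""
    return cin * k * k + cin * cout
```

For a 64-to-64-channel layer with 3 x 3 kernels this gives 36,864 versus 4,672 weights, roughly a 7.9x reduction, which is the kind of saving that keeps such a decoder edge-deployable.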
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

20 pages, 6322 KB  
Article
MAEM-ResUNet: Accurate Glioma Segmentation in Brain MRI via Symmetric Multi-Directional Mamba and Dual-Attention Modules
by Deguo Yang, Boming Yang and Jie Yan
Symmetry 2026, 18(1), 1; https://doi.org/10.3390/sym18010001 - 19 Dec 2025
Abstract
Gliomas are among the most common and aggressive malignant brain tumors. Their irregular morphology and fuzzy boundaries pose substantial challenges for automatic segmentation in MRI. Accurate delineation of tumor subregions is crucial for treatment planning and outcome assessment. This study proposes MAEM-ResUNet, an extension of the ResUNet architecture that integrates three key modules: a multi-scale adaptive attention module for joint channel–spatial feature selection, a symmetric multi-directional Mamba block for long-range context modeling, and an adaptive edge attention module for boundary refinement. Experimental results on the BraTS2020 and BraTS2021 datasets demonstrate that MAEM-ResUNet outperforms mainstream methods. On BraTS2020, it achieves an average Dice Similarity Coefficient of 91.19% and an average Hausdorff Distance (HD) of 5.27 mm; on BraTS2021, the average Dice coefficient is 89.67% and the average HD is 5.87 mm, both showing improvements compared to other mainstream models. Meanwhile, ablation experiments confirm the synergistic effect of the three modules, which significantly enhances the accuracy of glioma segmentation and the precision of boundary localization. Full article
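As a toy illustration of the channel half of a joint channel–spatial attention module (this is not MAEM-ResUNet's module; the learned gating MLP is collapsed to an identity for brevity, so the gate is just a sigmoid of each channel's global mean):

```python
import math

def channel_attention(fmap):
    """Squeeze-and-excitation style channel gate on a [C][H][W] feature
    map: global average pool per channel, sigmoid, rescale the channel."""
    gated = []
    for ch in fmap:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        g = 1.0 / (1.0 + math.exp(-mean))   # per-channel gate in (0, 1)
        gated.append([[v * g for v in row] for row in ch])
    return gated
```

Channels whose pooled response is strongly positive pass nearly unchanged, while weak or negative channels are suppressed; a spatial branch would apply the analogous gating over pixel positions.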

24 pages, 596 KB  
Article
Deep Learning-Based Fusion of Multimodal MRI Features for Brain Tumor Detection
by Bakhita Salman, Eithar Yassin, Deepak Ganta and Hermes Luna
Appl. Sci. 2025, 15(24), 13155; https://doi.org/10.3390/app152413155 - 15 Dec 2025
Abstract
Despite advances in deep learning, brain tumor detection from MRI continues to face major challenges, including the limited robustness of single-modality models, the computational burden of transformer-based architectures, opaque fusion strategies, and the lack of efficient binary screening tools. To address these issues, we propose a lightweight multimodal CNN framework that integrates T1, T2, and FLAIR MRI sequences using modality-specific encoders and a channel-wise fusion module (concatenation followed by a 1 × 1 convolution). The pipeline incorporates U-Net-based segmentation for tumor-focused patch extraction, improving localization and reducing irrelevant background. Evaluated on the BraTS 2020 dataset (7500 slices; 70/15/15 patient-level split), the proposed model achieves 93.8% accuracy, 94.1% F1-score, and 19 ms inference time. It outperforms all single-modality ablations by up to 5% and achieves competitive or superior performance to transformer-based baselines while using over 98% fewer parameters. Grad-CAM and LIME visualizations further confirm clinically meaningful tumor-region activation. Overall, this efficient and interpretable multimodal framework advances scalable brain tumor screening and supports integration into real-time clinical workflows. Full article
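The fusion step described above (concatenation followed by a 1 x 1 convolution) is, per pixel, just a linear map over the stacked channels. A plain-Python sketch with illustrative shapes (the name `fuse_1x1` and the nested-list layout are mine, not the paper's):

```python
def fuse_1x1(feature_maps, weights):
    """Channel-wise fusion: concatenate per-modality channels at each
    pixel, then apply a 1 x 1 convolution (a linear map over channels).

    feature_maps: one [H][W][C_m] array per modality (same H, W)
    weights     : [C_out][C_total] mixing matrix, C_total = sum of C_m
    """
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    out = []
    for i in range(H):
        row = []
        for j in range(W):
            stacked = [c for fm in feature_maps for c in fm[i][j]]
            row.append([sum(w * x for w, x in zip(wrow, stacked))
                        for wrow in weights])
        out.append(row)
    return out
```

Because the kernel is 1 x 1, no spatial context is mixed at this stage; the layer only learns how to weight the modality-specific channels against each other.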

12 pages, 1677 KB  
Article
MRI Reflects Meningioma Biology and Molecular Risk
by Julian Canisius, Julia Schuler, Maria Goldberg, Olivia Kertels, Marie-Christin Metz, Chiara Negwer, Igor Yakushev, Bernhard Meyer, Stephanie E. Combs, Jan S. Kirschke, Denise Bernhardt, Benedikt Wiestler and Claire Delbridge
Cancers 2025, 17(22), 3665; https://doi.org/10.3390/cancers17223665 - 15 Nov 2025
Viewed by 1006
Abstract
Background/Objectives: Large-scale (epi)genomic studies have substantially advanced our understanding of the molecular landscape of meningiomas, most recently embedded in the cIMPACT-NOW update 8. As a result, molecular data are increasingly integrated into risk-adapted treatment algorithms. However, it remains uncertain to what extent non-invasive MRI can capture underlying molecular variation and risk. Methods: We assembled a large, single-institution cohort of 225 newly diagnosed meningiomas (WHO grades 1–3) with available preoperative MRI, as well as comprehensive epigenome-wide methylation and copy-number profiling. Tumors were segmented into core and edema regions using a state-of-the-art automated pipeline from the BraTS challenge. Radiomic features were extracted and used to train Random Forest classifiers to predict WHO grade, molecular risk, and specific alterations such as 1p loss in a hold-out test set. Results: Our models achieved accuracy above 91% for integrated molecular risk classification, 87.5% for 1p chromosomal status, and 76.8% for WHO grade prediction, with corresponding AUCs of 0.91, 0.90, and 0.89, underscoring the robustness of radiomic features in capturing histopathological and, especially, molecular characteristics. Conclusions: Preoperative MRI effectively captures the underlying molecular biology of meningiomas and may enable rapid molecular assessment to inform decision-making and prioritization of confirmatory testing. However, it is not yet ready for clinical use, showing lower accuracy for current WHO grade classification. Full article
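The pipeline above extracts radiomic features from segmented tumor regions and feeds them to Random Forest classifiers. As a hedged, simplified sketch (function name and feature set are my own, not the study's feature definitions), the first-order statistics that typically start such a feature vector look like this:

```python
# Illustrative sketch only: first-order radiomic features from an ROI's
# voxel intensities, the kind of per-region inputs a downstream
# Random Forest classifier would consume. Feature names are hypothetical.

def first_order_features(intensities):
    """Mean, population variance, and energy of ROI voxel intensities."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    energy = sum(v * v for v in intensities)
    return {"mean": mean, "variance": var, "energy": energy}

roi = [10.0, 12.0, 14.0, 12.0]  # toy intensities from one tumor-core region
print(first_order_features(roi))
# {'mean': 12.0, 'variance': 2.0, 'energy': 584.0}
```

In practice such features are computed separately per region (here, tumor core and edema) and per descriptor family (shape, texture, intensity) before classifier training.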
(This article belongs to the Section Methods and Technologies Development)

24 pages, 3200 KB  
Article
Enhancing Boundary Precision and Long-Range Dependency Modeling in Medical Imaging via Unified Attention Framework
by Yi Zhu, Yawen Zhu, Hongtao Ma, Bin Li, Luyao Xiao, Xiaxu Wu and Manzhou Li
Electronics 2025, 14(21), 4335; https://doi.org/10.3390/electronics14214335 - 5 Nov 2025
Viewed by 1158
Abstract
This study addresses the common challenges in medical image segmentation and recognition, including boundary ambiguity, scale variation, and the difficulty of modeling long-range dependencies, by proposing a unified framework based on a hierarchical attention mechanism. The framework consists of a local detail attention module, a global context attention module, and a cross-scale consistency constraint module, which collectively enable adaptive weighting and collaborative optimization across different feature levels, thereby achieving a balance between detail preservation and global modeling. The framework was systematically validated on multiple public datasets, and the results demonstrated that the proposed method achieved Dice, IoU, Precision, Recall, and F1 scores of 0.886, 0.781, 0.898, 0.875, and 0.886, respectively, on the combined dataset, outperforming traditional models such as U-Net, Mask R-CNN, DeepLabV3+, SegNet, and TransUNet. On the BraTS dataset, the proposed method achieved a Dice score of 0.922, Precision of 0.930, and Recall of 0.915, exhibiting superior boundary modeling capability in complex brain MRI images. On the LIDC-IDRI dataset, the Dice score and Recall were improved from 0.751 and 0.732 to 0.822 and 0.807, respectively, effectively reducing the missed detection rate of small nodules compared to traditional convolutional models. On the ISIC dermoscopy dataset, the proposed framework achieved a Dice score of 0.914 and a Precision of 0.922, significantly improving the accuracy of skin lesion recognition. The ablation study further revealed that local detail attention significantly enhanced boundary and texture modeling, global context attention strengthened long-range dependency capture, and cross-scale consistency constraints ensured the stability and coherence of prediction results. 
From a medical economics perspective, the proposed framework has the potential to reduce diagnostic costs and improve healthcare efficiency by enabling faster and more accurate image-based clinical decision-making. In summary, the hierarchical attention mechanism presented in this work not only provides an innovative breakthrough in mathematical modeling but also demonstrates outstanding performance and generalization ability in experiments, offering new perspectives and technical pathways for intelligent segmentation and recognition in medical imaging. Full article
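The "adaptive weighting" across feature levels described in this abstract can be illustrated, under my own simplifying assumptions rather than the paper's actual modules, by a basic channel-attention step: pool each channel to a score, softmax the scores into weights, and rescale the channels.

```python
# Hedged sketch, not the paper's architecture: channel attention via
# global average pooling followed by a softmax reweighting.
import math

def channel_attention(channels):
    """channels: list of flattened feature maps (one list per channel).
    Returns the channels rescaled by softmax-normalized pooled scores."""
    scores = [sum(c) / len(c) for c in channels]        # global average pool
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable
    total = sum(exps)
    weights = [e / total for e in exps]                 # softmax over channels
    return [[w * v for v in c] for w, c in zip(weights, channels)]

# Two identical channels receive equal weights of 0.5 each.
print(channel_attention([[1.0, 1.0], [1.0, 1.0]]))
```

Spatial attention follows the same pattern with the roles of channels and locations swapped; combining both yields the kind of joint channel-spatial weighting these segmentation frameworks describe.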
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
