Search Results (1,298)

Search Parameters:
Keywords = medical-image segmentation

32 pages, 15103 KB  
Article
3D Printing and Virtual Surgical Planning in Craniofacial and Thoracic Surgery: Applications to Personalised Medicine
by Freddy Patricio Moncayo-Matute, Jhonatan Heriberto Vázquez-Albornoz, Efrén Vázquez-Silva, Ana Julia Hidalgo-Bravo, Paúl Bolívar Torres-Jara and Diana Patricia Moya-Loaiza
J. Pers. Med. 2025, 15(9), 397; https://doi.org/10.3390/jpm15090397 - 25 Aug 2025
Abstract
Background/Objectives: The application of additive manufacturing in medicine, and specifically in personalised medicine, has achieved notable development. This article aims to present the results and benefits of applying a comprehensive methodology to simulate, plan, and manufacture customised three-dimensional medical prosthetic devices for use in surgery to restore bone structures with congenital and acquired malformations. Methods: To digitally reconstruct a bone structure in three dimensions from a medical image, a segmentation process is developed to correlate the anatomical model. Then, this model is filtered using a post-processing step to generate stereolithography (STL) files, which are rendered using specialised software. The segmentation of tomographic images is achieved by the specific intensity selection, facilitating the analysis of compact and soft tissues within the anatomical region of interest. With the help of a thresholding algorithm, a three-dimensional digital model of the anatomical structure is obtained, ready for printing the required structure. Results: The described cases demonstrate that the use of anatomical test models, cutting guides, and customised prostheses reduces surgical time and hospital stay, and achieves better aesthetic and functional results. Using materials such as polylactic acid (PLA) for presurgical models, appropriate resins for cutting guides, and biocompatible materials such as polyether ether ketone (PEEK) or polymethylmethacrylate (PMMA) for prostheses, the described improvements are achieved. Conclusions: The achievements attained demonstrate the feasibility of applying these techniques, their advantages and their accessibility in Ecuador. They also reinforce the ideas of personalised medicine in the search for medical treatments and procedures tailored to the needs of each patient. Full article
(This article belongs to the Section Personalized Critical Care)
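The segmentation step described in the first entry — selecting voxels whose intensity falls in a specific range to isolate compact tissue from a CT volume — is essentially intensity thresholding. A minimal numpy sketch; the Hounsfield-unit window and the synthetic volume are illustrative assumptions, not values from the paper:

```python
import numpy as np

def threshold_segment(volume, lo, hi):
    """Return a binary mask of voxels whose intensity falls in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

# Synthetic CT-like volume: background ~0 HU, a small "bone" block ~1000 HU.
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 1000.0

mask = threshold_segment(vol, 300, 2000)  # assumed compact-bone window
print(mask.sum())  # 8 voxels selected
```

In a real pipeline the resulting binary mask would then be post-processed and meshed into an STL surface, as the abstract outlines.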

0 pages, 922 KB  
Proceeding Paper
FairCXRnet: A Multi-Task Learning Model for Domain Adaptation in Chest X-Ray Classification for Low Resource Settings
by Aminu Musa, Rajesh Prasad, Mohammed Hassan, Mohamed Hamada and Saratu Yusuf Ilu
Eng. Proc. 2025, 107(1), 16; https://doi.org/10.3390/engproc2025107016 (registering DOI) - 22 Aug 2025
Abstract
Medical imaging analysis plays a pivotal role in modern healthcare, with physicians relying heavily on radiologists for disease diagnosis. However, many hospitals face a shortage of radiologists, leading to long queues at radiology centers and delays in diagnosis. Advances in artificial intelligence (AI) have made it possible for AI models to analyze medical images and provide insights similar to those of radiologists. Despite their successes, these models face significant challenges that hinder widespread adoption. One major issue is the inability of AI models to generalize data from new populations, as performance tends to degrade when evaluated on datasets with different or shifted distributions, a problem known as domain shift. Additionally, the large size of these models requires substantial computational resources for training and deployment. In this study, we address these challenges by investigating domain shifts using ChestXray-14 and a Nigerian chest X-ray dataset. We propose a multi-task learning (MTL) approach that jointly trains the model on both datasets for two tasks, classification and segmentation, to minimize the domain gap. Furthermore, we replace traditional convolutional layers in the backbone model (Densenet-201) architecture with depthwise separable convolutions, reducing the model’s number of parameters and computational requirements. Our proposed model demonstrated remarkable improvements in both accuracy and AUC, achieving 93% accuracy and 96% AUC when tested across both datasets, significantly outperforming traditional transfer learning methods. Full article
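The FairCXRnet abstract's replacement of standard convolutions with depthwise separable ones has a well-known parameter arithmetic behind it: a k×k convolution mapping C_in to C_out channels stores k·k·C_in·C_out weights, while the depthwise-plus-pointwise pair stores k·k·C_in + C_in·C_out. A quick check (the layer sizes are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise projection."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)          # 9 * 256 * 256 = 589824
sep = dw_separable_params(3, 256, 256)  # 2304 + 65536 = 67840
print(f"reduction: {std / sep:.1f}x")   # ~8.7x fewer parameters
```

The saving grows with channel count, which is why the swap helps most in the deep, wide layers of a backbone like DenseNet-201.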

26 pages, 5268 KB  
Article
Blurred Lesion Image Segmentation via an Adaptive Scale Thresholding Network
by Qi Chen, Wenmin Wang, Zhibing Wang, Haomei Jia and Minglu Zhao
Appl. Sci. 2025, 15(17), 9259; https://doi.org/10.3390/app15179259 (registering DOI) - 22 Aug 2025
Abstract
Medical image segmentation is crucial for disease diagnosis, as precise results aid clinicians in locating lesion regions. However, lesions often have blurred boundaries and complex shapes, challenging traditional methods in capturing clear edges and impacting accurate localization and complete excision. Small lesions are also critical but prone to detail loss during downsampling, reducing segmentation accuracy. To address these issues, we propose a novel adaptive scale thresholding network (AdSTNet) that acts as a post-processing lightweight network for enhancing sensitivity to lesion edges and cores through a dual-threshold adaptive mechanism. The dual-threshold adaptive mechanism is a key architectural component that includes a main threshold map for core localization and an edge threshold map for more precise boundary detection. AdSTNet is compatible with any segmentation network and introduces only a small computational and parameter cost. Additionally, Spatial Attention and Channel Attention (SACA), the Laplacian operator, and the Fusion Enhancement module are introduced to improve feature processing. SACA enhances spatial and channel attention for core localization; the Laplacian operator retains edge details without added complexity; and the Fusion Enhancement module adapts concatenation operation and Convolutional Gated Linear Unit (ConvGLU) to improve feature intensities to improve edge and small lesion segmentation. Experiments show that AdSTNet achieves notable performance gains on ISIC 2018, BUSI, and Kvasir-SEG datasets. Compared with the original U-Net, our method attains mIoU/mDice of 83.40%/90.24% on ISIC, 71.66%/80.32% on BUSI, and 73.08%/81.91% on Kvasir-SEG. Moreover, similar improvements are observed in the rest of the networks. Full article
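The mIoU/mDice figures reported for AdSTNet are means of the standard overlap metrics |A∩B|/|A∪B| and 2|A∩B|/(|A|+|B|) over classes or images. A minimal sketch on toy binary masks:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def dice(pred, gt):
    """Dice coefficient: 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

pred = np.array([[1, 1, 0, 0]], dtype=bool)
gt   = np.array([[1, 1, 1, 0]], dtype=bool)
print(iou(pred, gt), dice(pred, gt))  # 0.666..., 0.8
```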

15 pages, 622 KB  
Review
Artificial Intelligence in the Diagnosis and Imaging-Based Assessment of Pelvic Organ Prolapse: A Scoping Review
by Marian Botoncea, Călin Molnar, Vlad Olimpiu Butiurca, Cosmin Lucian Nicolescu and Claudiu Molnar-Varlam
Medicina 2025, 61(8), 1497; https://doi.org/10.3390/medicina61081497 - 21 Aug 2025
Abstract
Background and Objectives: Pelvic organ prolapse (POP) is a complex condition affecting the pelvic floor, often requiring imaging for accurate diagnosis and treatment planning. Artificial intelligence (AI), particularly deep learning (DL), is emerging as a powerful tool in medical imaging. This scoping review aims to synthesize current evidence on the use of AI in the imaging-based diagnosis and anatomical evaluation of POP. Materials and Methods: Following the PRISMA-ScR guidelines, a comprehensive search was conducted in PubMed, Scopus, and Web of Science for studies published between January 2020 and April 2025. Studies were included if they applied AI methodologies, such as convolutional neural networks (CNNs), vision transformers (ViTs), or hybrid models, to diagnostic imaging modalities such as ultrasound and magnetic resonance imaging (MRI) to women with POP. Results: Eight studies met the inclusion criteria. In these studies, AI technologies were applied to 2D/3D ultrasound and static or stress MRI for segmentation, anatomical landmark localization, and prolapse classification. CNNs were the most commonly used models, often combined with transfer learning. Some studies used hybrid models of ViTs, demonstrating high diagnostic accuracy. However, all studies relied on internal datasets, with limited model interpretability and no external validation. Moreover, clinical deployment and outcome assessments remain underexplored. Conclusions: AI shows promise in enhancing POP diagnosis through improved image analysis, but current applications are largely exploratory. Future work should prioritize external validation, standardization, explainable AI, and real-world implementation to bridge the gap between experimental models and clinical utility. Full article
(This article belongs to the Section Obstetrics and Gynecology)

24 pages, 2959 KB  
Article
From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images
by Ikram Chourib
J. Imaging 2025, 11(8), 282; https://doi.org/10.3390/jimaging11080282 - 21 Aug 2025
Abstract
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, scarcity of annotated medical data, and computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated as the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form Brain Tumor Detection and Segmentation (BTDS), effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation—including geometric transformations—to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

16 pages, 1422 KB  
Article
Prototype-Guided Promptable Retinal Lesion Segmentation from Coarse Annotations
by Qinji Yu and Xiaowei Ding
Electronics 2025, 14(16), 3252; https://doi.org/10.3390/electronics14163252 - 15 Aug 2025
Abstract
Accurate segmentation of retinal lesions is critical for the diagnosis and management of ophthalmic diseases, but pixel-level annotation is labor-intensive and demanding in clinical scenarios. To address this, we introduce a promptable segmentation approach based on prototype learning that enables precise retinal lesion segmentation from low-cost, coarse annotations. Our framework treats clinician-provided coarse masks (such as ellipses) as prompts to guide the extraction and refinement of lesion and background feature prototypes. A lightweight U-Net backbone fuses image content with spatial priors, while a superpixel-guided prototype weighting module is employed to mitigate background interference within coarse prompts. We simulate coarse prompts from fine-grained masks to train the model, and extensively validate our method across three datasets (IDRiD, DDR, and a private clinical set) with a range of annotation coarseness levels. Experimental results demonstrate that our prototype-based model significantly outperforms fully supervised and non-prototypical promptable baselines, achieving more accurate and robust segmentation, particularly for challenging and variable lesions. The approach exhibits excellent adaptability to unseen data distributions and lesion types, maintaining stable performance even under highly coarse prompts. This work highlights the potential of prompt-driven, prototype-based solutions for efficient and reliable medical image segmentation in practical clinical settings. Full article
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)

13 pages, 3382 KB  
Article
Development of a Personalized and Low-Cost 3D-Printed Liver Model for Preoperative Planning of Hepatic Resections
by Badreddine Labakoum, Amr Farhan, Hamid El malali, Azeddine Mouhsen and Aissam Lyazidi
Appl. Sci. 2025, 15(16), 9033; https://doi.org/10.3390/app15169033 - 15 Aug 2025
Abstract
Three-dimensional (3D) printing offers new opportunities in surgical planning and medical education, yet high costs and technological complexity often limit its widespread use, especially in low-resource settings. This study presents a personalized, cost-effective, and anatomically accurate liver model designed using open-source tools and affordable 3D-printing techniques. Segmentation of hepatic CT images was performed in 3D Slicer using a region-growing method, and the resulting models were optimized and exported as STL files. The external mold was printed with Fused Deposition Modeling (FDM) using PLA+, while internal structures such as vessels and tumors were fabricated via Liquid Crystal Display (LCD) printing using PLA Pro resin. The final assembly was cast in food-grade gelatin to mimic liver tissue texture. The complete model was produced for under USD 50, with an average total production time of under 128 h. An exploratory pedagogical evaluation with five medical trainees yielded high Likert scores for anatomical understanding (4.6), spatial awareness (4.4), planning confidence (4.2), and realism (4.4). This model demonstrated utility in preoperative discussions and training simulations. The proposed workflow enables the fabrication of low-cost, realistic hepatic phantoms suitable for education and surgical rehearsal, promoting the integration of 3D printing into everyday clinical practice. Full article

13 pages, 1445 KB  
Article
Evaluating Simplified IVIM Diffusion Imaging for Breast Cancer Diagnosis and Pathological Correlation
by Abdullah Hussain Abujamea, Salma Abdulrahman Salem, Hend Samir Ibrahim, Manal Ahmed ElRefaei, Areej Saud Aloufi, Abdulmajeed Alotabibi, Salman Mohammed Albeshan and Fatma Eliraqi
Diagnostics 2025, 15(16), 2033; https://doi.org/10.3390/diagnostics15162033 - 14 Aug 2025
Abstract
Background/Objectives: This study aimed to evaluate the diagnostic performance of simplified intravoxel incoherent motion (IVIM) diffusion-weighted imaging (DWI) parameters in distinguishing malignant from benign breast lesions, and to explore their association with clinicopathological features. Methods: This retrospective study included 108 women who underwent breast MRI with multi-b-value DWI (0, 20, 200, 500, 800 s/mm2). Of those 108 women, 73 had pathologically confirmed malignant lesions. IVIM maps (ADC_map, D, D*, and perfusion fraction f) were generated using IB-Diffusion™ software version 21.12. Lesions were manually segmented by radiologists, and clinicopathological data including receptor status, Ki-67 index, cancer type, histologic grade, and molecular subtype were extracted from medical records. Nonparametric tests and ROC analysis were used to assess group differences and diagnostic performance. Additionally, a binary logistic regression model combining D, D*, and f was developed to evaluate their joint diagnostic utility, with ROC analysis applied to the model’s predicted probabilities. Results: Malignant lesions demonstrated significantly lower diffusion parameters compared to benign lesions, including ADC_map (p = 0.004), D (p = 0.009), and D* (p = 0.016), indicating restricted diffusion in cancerous tissue. In contrast, the perfusion fraction (f) did not show a significant difference (p = 0.202). ROC analysis revealed moderate diagnostic accuracy for ADC_map (AUC = 0.671), D (AUC = 0.657), and D* (AUC = 0.644), while f showed poor discrimination (AUC = 0.576, p = 0.186). A combined logistic regression model using D, D*, and f significantly improved diagnostic performance, achieving an AUC of 0.725 (p < 0.001), with 67.1% sensitivity and 74.3% specificity. ADC_map achieved the highest sensitivity (100%) but had low specificity (11.4%). 
Among clinicopathological features, only histologic grade was significantly associated with IVIM metrics, with higher-grade tumors showing lower ADC_map and D* values (p = 0.042 and p = 0.046, respectively). No significant associations were found between IVIM parameters and ER, PR, HER2 status, Ki-67 index, cancer type, or molecular subtype. Conclusions: Simplified IVIM DWI offers moderate accuracy in distinguishing malignant from benign breast lesions, with diffusion-related parameters (ADC_map, D, D*) showing the strongest diagnostic value. Incorporating D, D*, and f into a combined model enhanced diagnostic performance compared to individual IVIM metrics, supporting the potential of multivariate IVIM analysis in breast lesion characterization. Tumor grade was the only clinicopathological feature consistently associated with diffusion metrics, suggesting that IVIM may reflect underlying tumor differentiation but has limited utility for molecular subtype classification. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
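The simplified IVIM parameters in the study above (D, D*, f) come from the standard bi-exponential decay model S(b)/S(0) = f·exp(−b·D*) + (1−f)·exp(−b·D). As a sketch, fitting that model to the study's b-values with scipy — the tissue parameters and starting guesses here are synthetic, and the study itself used IB-Diffusion™ software rather than this code:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM signal decay, normalised so S(0) = 1."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# b-values from the abstract (s/mm^2); tissue parameters are synthetic.
b_vals = np.array([0.0, 20.0, 200.0, 500.0, 800.0])
true_f, true_d_star, true_d = 0.1, 0.02, 0.001
signal = ivim(b_vals, true_f, true_d_star, true_d)

popt, _ = curve_fit(ivim, b_vals, signal, p0=(0.2, 0.01, 0.002),
                    bounds=([0.0, 0.0, 0.0], [1.0, 1.0, 0.01]))
fit_f, fit_d_star, fit_d = popt
print(fit_f, fit_d_star, fit_d)  # should recover ~(0.1, 0.02, 0.001)
```

With real, noisy voxel data the fit is usually segmented (estimating D first from high b-values), which is part of what "simplified IVIM" protocols address.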

21 pages, 5025 KB  
Article
Cascaded Self-Supervision to Advance Cardiac MRI Segmentation in Low-Data Regimes
by Martin Urschler, Elisabeth Rechberger, Franz Thaler and Darko Štern
Bioengineering 2025, 12(8), 872; https://doi.org/10.3390/bioengineering12080872 - 12 Aug 2025
Abstract
Deep learning has shown remarkable success in medical image analysis over the last decade; however, many contributions focused on supervised methods which learn exclusively from labeled training samples. Acquiring expert-level annotations in large quantities is time-consuming and costly, even more so in medical image segmentation, where annotations are required on a pixel level and often in 3D. As a result, available labeled training data and consequently performance is often limited. Frequently, however, additional unlabeled data are available and can be readily integrated into model training, paving the way for semi- or self-supervised learning (SSL). In this work, we investigate popular SSL strategies in more detail, namely Transformation Consistency, Student–Teacher and Pseudo-Labeling, as well as exhaustive combinations thereof. We comprehensively evaluate these methods on two 2D and 3D cardiac Magnetic Resonance datasets (ACDC, MMWHS) for which several different multi-compartment segmentation labels are available. To assess performance in limited dataset scenarios, different setups with a decreasing amount of patients in the labeled dataset are investigated. We identify cascaded Self-Supervision as the best methodology, where we propose to employ Pseudo-Labeling and a self-supervised cascaded Student–Teacher model simultaneously. Our evaluation shows that in all scenarios, all investigated SSL methods outperform the respective low-data supervised baseline as well as state-of-the-art self-supervised approaches. This is most prominent in the very-low-labeled data regime, where for our proposed method we demonstrate 10.17% and 6.72% improvement in Dice Similarity Coefficient (DSC) for ACDC and MMWHS, respectively, compared with the low-data supervised approach, as well as 2.47% and 7.64% DSC improvement, respectively, when compared with related work. 
Moreover, in most experiments, our proposed method is able to greatly decrease the performance gap when compared to the fully supervised scenario, where all available labeled samples are used. We conclude that it is always beneficial to incorporate unlabeled data in cardiac MRI segmentation whenever it is present. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
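The Student–Teacher component named in the abstract above is commonly implemented by keeping the teacher as an exponential moving average (EMA) of the student's weights — an assumption here, since the abstract does not spell out the update rule. A minimal sketch on a toy weight dictionary:

```python
def ema_update(teacher, student, alpha=0.99):
    """Move each teacher weight toward the student's weight by EMA."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

teacher = {"w": 1.0}
student = {"w": 0.0}
for _ in range(3):
    teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # 0.9**3 = 0.729
```

The slowly-updated teacher then produces the pseudo-labels and consistency targets that the student trains against on unlabeled scans.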

24 pages, 94333 KB  
Article
Medical Segmentation of Kidney Whole Slide Images Using Slicing Aided Hyper Inference and Enhanced Syncretic Mask Merging Optimized by Particle Swarm Metaheuristics
by Marko Mihajlovic and Marina Marjanovic
BioMedInformatics 2025, 5(3), 44; https://doi.org/10.3390/biomedinformatics5030044 - 11 Aug 2025
Abstract
Accurate segmentation of kidney microstructures in whole slide images (WSIs) is essential for the diagnosis and monitoring of renal diseases. In this study, an end-to-end instance segmentation pipeline was developed for the detection of glomeruli and blood vessels in hematoxylin and eosin (H&E) stained kidney tissue. A tiling-based strategy was employed using Slicing Aided Hyper Inference (SAHI) to manage the resolution and scale of WSIs and the performance of two segmentation models, YOLOv11 and YOLOv12, was comparatively evaluated. The influence of tile overlap ratios on segmentation quality and inference efficiency was assessed, with configurations identified that balance object continuity and computational cost. To address object fragmentation at tile boundaries, an Enhanced Syncretic Mask Merging algorithm was introduced, incorporating morphological and spatial constraints. The algorithm’s hyperparameters were optimized using Particle Swarm Optimization (PSO), with vessel and glomerulus-specific performance targets. The optimization process revealed key parameters affecting segmentation quality, particularly for vessel structures with fine, elongated morphology. When compared with a baseline without postprocessing, improvements in segmentation precision were observed, notably a 48% average increase for glomeruli and up to 17% for blood vessels. The proposed framework demonstrates a balance between accuracy and efficiency, supporting scalable histopathology analysis and contributing to the Vasculature Common Coordinate Framework (VCCF) and Human Reference Atlas (HRA). Full article
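The SAHI-style tiling described above slides overlapping windows across the whole slide image, and the overlap ratio is exactly the hyperparameter the study tunes. The tile origins along one axis can be sketched as follows (the edge-clamping behaviour is an illustrative assumption, not SAHI's exact implementation):

```python
def tile_origins(size, tile, overlap):
    """Top-left offsets of overlapping tiles covering a 1-D extent of `size` pixels."""
    step = max(1, int(tile * (1 - overlap)))
    xs = list(range(0, max(size - tile, 0) + 1, step))
    if xs[-1] + tile < size:        # clamp a final tile to the image edge
        xs.append(size - tile)
    return xs

print(tile_origins(1000, 512, 0.2))  # [0, 409, 488]
```

Objects cut at a tile boundary then appear in two overlapping tiles, which is why the paper's mask-merging step is needed afterwards.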

24 pages, 948 KB  
Review
A Review on Deep Learning Methods for Glioma Segmentation, Limitations, and Future Perspectives
by Cecilia Diana-Albelda, Álvaro García-Martín and Jesus Bescos
J. Imaging 2025, 11(8), 269; https://doi.org/10.3390/jimaging11080269 - 11 Aug 2025
Abstract
Accurate and automated segmentation of gliomas from Magnetic Resonance Imaging (MRI) is crucial for effective diagnosis, treatment planning, and patient monitoring. However, the aggressive nature and morphological complexity of these tumors pose significant challenges that call for advanced segmentation techniques. This review provides a comprehensive analysis of Deep Learning (DL) methods for glioma segmentation, with a specific focus on bridging the gap between research performance and practical clinical deployment. We evaluate over 80 state-of-the-art models published up to 2025, categorizing them into CNN-based, Pure Transformer, and Hybrid CNN-Transformer architectures. The primary objective of this paper is to critically assess these models not only on their segmentation accuracy but also on their computational efficiency and suitability for real-world medical environments by incorporating hardware resource considerations. We present a comparison of model performance on the BraTS datasets benchmark and introduce a suitability analysis for top-performing models based on their robustness, efficiency, and completeness of tumor region delineation. By identifying current trends, limitations, and key trade-offs, this review offers future research directions aimed at optimizing the balance between technical performance and clinical usability to improve diagnostic outcomes for glioma patients. Full article
(This article belongs to the Section Medical Imaging)

19 pages, 7650 KB  
Article
Lightweight Mamba Model for 3D Tumor Segmentation in Automated Breast Ultrasounds
by JongNam Kim, Jun Kim, Fayaz Ali Dharejo, Zeeshan Abbas and Seung Won Lee
Mathematics 2025, 13(16), 2553; https://doi.org/10.3390/math13162553 - 9 Aug 2025
Abstract
Background: Recently, the adoption of AI-based technologies has been accelerating in the field of medical image analysis. For the early diagnosis and treatment planning of breast cancer, Automated Breast Ultrasound (ABUS) has emerged as a safe and non-invasive imaging method, especially for women with dense breasts. However, the increasing computational cost due to the minute size and complexity of 3D ABUS data remains a major challenge. Methods: In this study, we propose a novel model based on the Mamba state–space model architecture for 3D tumor segmentation in ABUS images. The model uses Mamba blocks to effectively capture the volumetric spatial features of tumors, and integrates a deep spatial pyramid pooling (DASPP) module to extract multiscale contextual information from lesions of different sizes. Results: On the TDSC-2023 ABUS dataset, the proposed model achieved a Dice Similarity Coefficient (DSC) of 0.8062, and Intersection over Union (IoU) of 0.6831, using only 3.08 million parameters. Conclusions: These results show that the proposed model improves the performance of tumor segmentation in ABUS, offering both diagnostic precision and computational efficiency. The reduced computational space suggests a strong potential for real-world medical applications, where accurate early diagnosis can reduce costs and improve patient survival. Full article

19 pages, 7531 KB  
Article
Evaluating the Impact of 2D MRI Slice Orientation and Location on Alzheimer’s Disease Diagnosis Using a Lightweight Convolutional Neural Network
by Nadia A. Mohsin and Mohammed H. Abdulameer
J. Imaging 2025, 11(8), 260; https://doi.org/10.3390/jimaging11080260 - 5 Aug 2025
Abstract
Accurate detection of Alzheimer’s disease (AD) is critical yet challenging for early medical intervention. Deep learning methods, especially convolutional neural networks (CNNs), have shown promising potential for improving diagnostic accuracy using magnetic resonance imaging (MRI). This study aims to identify the most informative combination of MRI slice orientation and anatomical location for AD classification. We propose an automated framework that first selects the most relevant slices using a feature entropy-based method applied to activation maps from a pretrained CNN model. For classification, we employ a lightweight CNN architecture based on depthwise separable convolutions to efficiently analyze the selected 2D MRI slices extracted from preprocessed 3D brain scans. To further interpret model behavior, an attention mechanism is integrated to analyze which feature level contributes the most to the classification process. The model is evaluated on three binary tasks: AD vs. mild cognitive impairment (MCI), AD vs. cognitively normal (CN), and MCI vs. CN. The experimental results show the highest accuracy (97.4%) in distinguishing AD from CN when utilizing the selected slices from the ninth axial segment, followed by the tenth segment of coronal and sagittal orientations. These findings demonstrate the significance of slice location and orientation in MRI-based AD diagnosis and highlight the potential of lightweight CNNs for clinical use. Full article
(This article belongs to the Section AI in Imaging)
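The entropy-based slice selection step described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation (which scores activation maps from a pretrained CNN rather than raw intensities); the function names and toy volume below are invented for illustration:

```python
import numpy as np

def slice_entropy(slice_2d, bins=64):
    # Shannon entropy of the slice's intensity histogram
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_informative_slices(volume, axis=0, k=3):
    # Rank the 2D slices along the given axis by entropy, keep the top-k indices
    n = volume.shape[axis]
    scores = [slice_entropy(np.take(volume, i, axis=axis)) for i in range(n)]
    return np.argsort(scores)[::-1][:k]

# Toy 3D "scan": pure noise, with extra gradient structure in slices 6-9
rng = np.random.default_rng(0)
vol = rng.normal(size=(16, 32, 32))
vol[6:10] += np.linspace(0.0, 4.0, 32)
top = select_informative_slices(vol, axis=0, k=3)
```

In the paper's framework the same ranking idea is applied per orientation (axial, coronal, sagittal) and per anatomical segment, which is how the ninth axial segment emerges as most informative.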
26 pages, 18131 KB  
Article
MINTFormer: Multi-Scale Information Aggregation with CSWin Vision Transformer for Medical Image Segmentation
by Chao Deng and Xiao Qin
Appl. Sci. 2025, 15(15), 8626; https://doi.org/10.3390/app15158626 - 4 Aug 2025
Abstract
Transformers have been extensively utilized as encoders in medical image segmentation; however, the information that a single encoder can capture is inherently limited. In this study, we propose MINTFormer, which introduces a heterogeneous encoder that integrates CSWin and MaxViT to fully exploit the potential of encoders with different encoding methodologies. Additionally, we observed that the encoder output contains substantial redundant information. To address this, we designed a Demodulate Bridge (DB) to filter redundant information out of the feature maps. Furthermore, we developed a Multi-Scale Sampling Decoder (SSD) capable of preserving information about organs of varying sizes during upsampling and accurately restoring their shapes. This study demonstrates the superior performance of MINTFormer across several datasets, including Synapse, ACDC, Kvasir-SEG, and skin lesion segmentation datasets. Full article
(This article belongs to the Special Issue AI-Based Biomedical Signal and Image Processing)
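The two ideas in the abstract — fusing features from heterogeneous encoders and filtering redundancy before decoding — can be sketched roughly as follows. This is a toy illustration, not MINTFormer's architecture: the names are invented, and a simple spatial-variance score stands in for whatever criterion the Demodulate Bridge actually uses:

```python
import numpy as np

def fuse_heterogeneous_features(feat_a, feat_b):
    # Concatenate channel-wise feature maps from two different encoder
    # branches (e.g. CSWin and MaxViT), assuming matching spatial size.
    return np.concatenate([feat_a, feat_b], axis=0)  # (C_a + C_b, H, W)

def demodulate(feats, keep_ratio=0.5):
    # Toy "Demodulate Bridge": score each channel by its spatial variance
    # and zero out the lowest-scoring (least informative) channels.
    c = feats.shape[0]
    scores = feats.reshape(c, -1).var(axis=1)
    k = max(1, int(c * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(c, dtype=bool)
    mask[keep] = True
    return feats * mask[:, None, None]

rng = np.random.default_rng(1)
fa = rng.normal(size=(4, 8, 8))               # informative branch
fb = rng.normal(scale=0.01, size=(4, 8, 8))   # nearly constant: "redundant"
fused = fuse_heterogeneous_features(fa, fb)
out = demodulate(fused, keep_ratio=0.5)
```

The point of the sketch is the pipeline shape: concatenated multi-branch features pass through a filtering bridge before reaching the decoder, so the decoder sees a de-redundant feature map.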
23 pages, 3004 KB  
Article
An Ensemble Learning for Automatic Stroke Lesion Segmentation Using Compressive Sensing and Multi-Resolution U-Net
by Mohammad Emami, Mohammad Ali Tinati, Javad Musevi Niya and Sebelan Danishvar
Biomimetics 2025, 10(8), 509; https://doi.org/10.3390/biomimetics10080509 - 4 Aug 2025
Abstract
A stroke is a critical medical condition and one of the leading causes of death among humans. Segmenting the brain lesions in which blood flow is impeded by coagulation plays a vital role in drug prescription and medical diagnosis, and computed tomography (CT) scans are central to detecting the abnormal tissue. Most existing medical image segmentation methods operate on the raw images without considering the patient's privacy. In this paper, a deep network is proposed that utilizes compressive sensing and ensemble learning to protect patient privacy and segment the images efficiently. A compressed version of the input CT images from the ISLES challenge 2018 dataset is applied to the ensemble part of the proposed network, which consists of two multi-resolution modified U-shaped networks. The evaluation metrics of accuracy, specificity, and Dice coefficient are 92.43%, 91.3%, and 91.83%, respectively. Comparison with state-of-the-art methods confirms the efficiency of the proposed compressive sensing-based ensemble network (CS-Ensemble Net). The compressive sensing part provides information privacy, and the parallel ensemble learning produces better results. Full article
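The compressive sensing front end can be sketched as a random Gaussian projection of each image: the downstream network receives only the measurement vector, never the raw pixels. This is a generic illustration under assumed parameters, not the paper's exact measurement scheme:

```python
import numpy as np

def compress(image, m, seed=0):
    # Compressive sensing front end: project the flattened image onto m
    # random Gaussian measurement vectors (m << n), so only the compressed
    # measurements y = Phi @ x leave the acquisition side.
    x = image.reshape(-1).astype(float)
    n = x.size
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(m, n)) / np.sqrt(m)  # measurement matrix Phi
    return phi @ x

# Toy 64x64 "CT slice" compressed to 512 measurements (12.5% of n = 4096)
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
y = compress(img, m=512)
```

Because the measurement matrix is seeded, the same projection can be reproduced on the training side, while the raw image itself is never shared.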