Search Results (206)

Search Parameters:
Keywords = dice similarity coefficient (DSC)

12 pages, 955 KiB  
Article
Single-Center Preliminary Experience Treating Endometrial Cancer Patients with Fiducial Markers
by Francesca Titone, Eugenia Moretti, Alice Poli, Marika Guernieri, Sarah Bassi, Claudio Foti, Martina Arcieri, Gianluca Vullo, Giuseppe Facondo, Marco Trovò, Pantaleo Greco, Gabriella Macchia, Giuseppe Vizzielli and Stefano Restaino
Life 2025, 15(8), 1218; https://doi.org/10.3390/life15081218 - 1 Aug 2025
Viewed by 186
Abstract
Purpose: To present the findings of our preliminary experience using daily image-guided radiotherapy (IGRT) supported by implanted fiducial markers (FMs) in radiotherapy of the vaginal cuff, in a cohort of post-surgery endometrial cancer patients. Methods: Patients with vaginal cuff cancer requiring adjuvant external-beam radiation were enrolled. Five patients underwent radiation therapy targeting the pelvic disease and positive lymph nodes, with doses of 50.4 Gy in twenty-eight fractions and a subsequent stereotactic boost to the vaginal vault at a dose of 5 Gy in a single fraction. One patient was administered 30 Gy in five fractions to the vaginal vault. These patients underwent external-beam RT following the implantation of three 0.40 × 10 mm gold FMs. Our IGRT strategy involved real-time 2D kV image-based monitoring of the FMs during treatment delivery as a surrogate of the vaginal cuff. To explore the potential role of FMs throughout the treatment process, we analyzed cine movies of the 2D kV-triggered images during delivery, as well as the image registration between pre- and post-treatment CBCT scans and the planning CT (pCT). Each CBCT used to trigger fraction delivery was segmented to define the rectum, bladder, and vaginal cuff, and a standard similarity metric (the Dice index) was calculated between the images. Results: All patients completed radiotherapy with good tolerance and no reported acute or long-term toxicity. We did not observe any loss of FMs before or during treatment. A total of twenty CBCTs were analyzed across ten fractions. The observed trend showed a relatively emptier bladder compared to the simulation phase, with the bladder filling during delivery. This resulted in a final median Dice similarity coefficient (DSC) of 0.90, indicating strong reproducibility. The rectum showed greater variability, negatively affecting the quality of the delivery. In only two patients did the FMs show an intrafractional shift > 5 mm, probably associated with considerable rectal volume changes. Target coverage was preserved thanks to a safe CTV-to-PTV margin (10 mm). Conclusions: In our preliminary study, CBCT combined with fiducial markers to guide delivery proved to be a feasible IGRT method both before and during treatment of post-operative gynecological cancer. In particular, this approach appears promising in selected patients for facilitating the use of SBRT instead of brachytherapy (BRT), thanks to margin reduction and adaptive strategies that optimize dose delivery while minimizing toxicity. A larger patient sample is needed to confirm our results.
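Every entry in these results reports the same overlap metric, so it is worth making concrete: for two binary masks A and B, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the function name and the empty-mask convention are illustrative choices, not taken from any of the listed papers):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / total

# toy example: two overlapping 5x5 squares in a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(f"DSC = {dice_coefficient(a, b):.2f}")  # 2*16 / (25+25) = 0.64
```

Read this way, the median DSC of 0.90 reported above means the shared volume equals 90% of the average of the two contoured volumes.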

14 pages, 1617 KiB  
Article
Multi-Label Conditioned Diffusion for Cardiac MR Image Augmentation and Segmentation
by Jianyang Li, Xin Ma and Yonghong Shi
Bioengineering 2025, 12(8), 812; https://doi.org/10.3390/bioengineering12080812 - 28 Jul 2025
Viewed by 342
Abstract
Accurate segmentation of cardiac MR images using deep neural networks is crucial for cardiac disease diagnosis and treatment planning, as it provides quantitative insights into heart anatomy and function. However, high segmentation accuracy relies heavily on extensive, precisely annotated datasets, which are costly and time-consuming to obtain. This study addresses that challenge with a novel data augmentation framework based on a condition-guided diffusion generative model controlled by multiple cardiac labels, aiming to expand annotated cardiac MR datasets and improve the performance of downstream segmentation tasks. The framework operates in two stages. First, a Label Diffusion Module is trained to generate, from noise, realistic multi-category spatial masks (covering regions such as the left ventricle, interventricular septum, and right ventricle) that conform to anatomical prior probabilities. Second, cardiac MR images are generated conditioned on these semantic masks; a spatially-adaptive normalization (SPADE) module imposes structural constraints during conditional training, ensuring a precise one-to-one mapping between synthetic labels and images. The effectiveness of this augmentation strategy is demonstrated using a U-Net segmentation model on the augmented 2D cardiac image dataset derived from the M&M Challenge. Results indicate that the proposed method effectively enlarges the dataset and significantly improves cardiac segmentation accuracy, achieving a 5% to 10% higher Dice Similarity Coefficient (DSC) than traditional data augmentation methods. Experiments further reveal a strong correlation between image generation quality and augmentation effectiveness. This framework offers a robust solution to data scarcity in cardiac image analysis, directly benefiting clinical applications.
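The SPADE module referenced here is a published mechanism (spatially-adaptive normalization, Park et al., CVPR 2019): the label map predicts per-pixel scale and shift terms that modulate normalized features, which is what keeps each synthetic image pinned to its mask. A minimal PyTorch sketch of that mechanism; the channel sizes and layer depth are illustrative, not the authors' configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive normalization: a segmentation map predicts
    per-pixel scale (gamma) and shift (beta) for normalized features."""
    def __init__(self, feat_ch: int, label_ch: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_ch, affine=False)  # parameter-free normalization
        self.shared = nn.Sequential(
            nn.Conv2d(label_ch, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        seg = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# shape check: batch of 8 feature maps, 4-channel one-hot label map
x = torch.randn(8, 64, 32, 32)
m = torch.randn(8, 4, 128, 128)
print(SPADE(64, 4)(x, m).shape)  # torch.Size([8, 64, 32, 32])
```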

21 pages, 5527 KiB  
Article
SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation
by Haijiao Yun, Qingyu Du, Ziqing Han, Mingjing Li, Le Yang, Xinyang Liu, Chao Wang and Weitian Ma
Sensors 2025, 25(15), 4652; https://doi.org/10.3390/s25154652 - 27 Jul 2025
Viewed by 317
Abstract
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts such as hair. Conventional deep learning methods, typically based on UNet or Transformer architectures, often fail to fully exploit lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network integrating a hybrid CNN–Mamba framework for robust skin lesion segmentation. SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial- and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps that provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet's superior performance, with average improvements of 3.30% in mean Intersection over Union (mIoU) and 1.77% in Dice Similarity Coefficient (DSC) over state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet's accuracy and robust generalization for computer-aided dermatological diagnosis.
(This article belongs to the Section Biomedical Sensors)
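The two headline metrics above are tightly coupled: DSC = 2·IoU / (1 + IoU), so either can be derived from the other. A per-class NumPy sketch for integer label maps (the convention of skipping classes absent from both masks is an assumption; papers differ on it):

```python
import numpy as np

def per_class_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int):
    """IoU per class for integer-labeled masks; mIoU averages the classes
    that appear in at least one of the two masks."""
    ious = {}
    for c in range(num_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks: skipped, not scored
        ious[c] = np.logical_and(p, t).sum() / union
    return ious, float(np.mean(list(ious.values())))

# Dice follows directly from each class IoU: dsc = 2 * iou / (1 + iou)
```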

27 pages, 3888 KiB  
Article
Deep Learning-Based Algorithm for the Classification of Left Ventricle Segments by Hypertrophy Severity
by Wafa Baccouch, Bilel Hasnaoui, Narjes Benameur, Abderrazak Jemai, Dhaker Lahidheb and Salam Labidi
J. Imaging 2025, 11(7), 244; https://doi.org/10.3390/jimaging11070244 - 20 Jul 2025
Viewed by 375
Abstract
In clinical practice, left ventricular hypertrophy (LVH) continues to pose a considerable diagnostic challenge, highlighting the need for more reliable approaches. This study proposes an automated, deep learning-based framework for quantifying LVH extent and classifying myocardial segments by hypertrophy severity. The proposed method was validated on 133 subjects, including both healthy individuals and patients with LVH. The process starts with automatic LV segmentation using U-Net and segmentation of the left ventricle cavity according to American Heart Association (AHA) standards, followed by the division of each segment into three equal sub-segments. Regional wall thickness (RWT) is then quantified automatically. Finally, a convolutional neural network (CNN) classifies each myocardial sub-segment by hypertrophy severity. The approach demonstrates strong contour segmentation performance, achieving a Dice Similarity Coefficient (DSC) of 98.47% and a Hausdorff Distance (HD) of 6.345 ± 3.5 mm. For thickness quantification, it reaches a mean absolute error (MAE) of 1.01 ± 1.16. For segment classification, it achieves performance competitive with state-of-the-art methods: an accuracy of 98.19%, a precision of 98.27%, a recall of 99.13%, and an F1-score of 98.7%. These results confirm the high performance of the proposed method and highlight its clinical utility in accurately assessing and classifying cardiac hypertrophy, providing insights that can guide clinical decision-making and improve patient management strategies.
(This article belongs to the Section Medical Imaging)

18 pages, 1995 KiB  
Article
A U-Shaped Architecture Based on Hybrid CNN and Mamba for Medical Image Segmentation
by Xiaoxuan Ma, Yingao Du and Dong Sui
Appl. Sci. 2025, 15(14), 7821; https://doi.org/10.3390/app15147821 - 11 Jul 2025
Viewed by 479
Abstract
Accurate medical image segmentation plays a critical role in clinical diagnosis, treatment planning, and a wide range of healthcare applications. Although U-shaped CNNs and Transformer-based architectures have shown promise, CNNs struggle to capture long-range dependencies, whereas Transformers suffer from quadratic growth in computational cost as image resolution increases. To address these issues, we propose HCMUNet, a novel medical image segmentation model that innovatively combines the local feature extraction capabilities of CNNs with the efficient long-range dependency modeling of Mamba, enhancing feature representation while reducing computational cost. In addition, HCMUNet features a redesigned skip connection and a novel attention module that integrates multi-scale features to recover spatial details lost during down-sampling and to promote richer cross-dimensional interactions. HCMUNet achieves Dice Similarity Coefficients (DSC) of 90.32%, 81.52%, and 92.11% on the ISIC 2018, Synapse multi-organ, and ACDC datasets, respectively, outperforming baseline methods by 0.65%, 1.05%, and 1.39%. Furthermore, HCMUNet consistently outperforms U-Net and Swin-UNet, achieving average Dice score improvements of approximately 5% and 2% across the evaluated datasets. These results collectively affirm the effectiveness and reliability of the proposed model across different segmentation tasks.
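Networks in this family are usually trained with a differentiable "soft" Dice objective rather than only scoring Dice at test time. A typical binary formulation as a sketch (a common convention, not this paper's documented loss; the epsilon guards empty masks):

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """1 - soft Dice for binary segmentation.
    logits: (N, 1, H, W) raw scores; target: (N, 1, H, W) in {0, 1}."""
    probs = torch.sigmoid(logits)              # soft foreground probabilities
    dims = (1, 2, 3)                           # reduce over channel + spatial dims
    intersection = (probs * target).sum(dims)
    total = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (total + eps)
    return 1 - dice.mean()                     # average over the batch
```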

19 pages, 6704 KiB  
Article
AI-Assisted Image Recognition of Cervical Spine Vertebrae in Dynamic X-Ray Recordings
by Esther van Santbrink, Valérie Schuermans, Esmée Cerfonteijn, Marcel Breeuwer, Anouk Smeets, Henk van Santbrink and Toon Boselie
Bioengineering 2025, 12(7), 679; https://doi.org/10.3390/bioengineering12070679 - 20 Jun 2025
Viewed by 565
Abstract
Background: Qualitative motion analysis has revealed that the cervical spine moves according to a consistent pattern. The analysis computes the relative rotation between vertebral segments to determine the sequence in which they contribute to extension, with a mean sensitivity of 90% and specificity of 85%. However, the extensive analysis time required limits its applicability. This study investigated the feasibility of using a deep-learning model to perform qualitative cervical motion analysis. Methods: A U-Net architecture was implemented as 2D and 2D+t models. The Dice similarity coefficient (DSC) and Intersection over Union (IoU) were used to assess model performance, and the Intraclass Correlation Coefficient (ICC) was used to compare the relative rotation of individual vertebrae and segments against the ground truth. Results: IoU ranged from 0.37 to 0.74 and DSC from 0.53 to 0.80. ICC scores for relative rotation ranged from 0.62 to 0.96 for individual vertebrae and from 0.28 to 0.72 for vertebral segments. For segments, the 2D+t models yielded higher ICC scores than the 2D models. Conclusions: This study demonstrated the feasibility of deep-learning models for qualitative cervical motion analysis in dynamic X-ray recordings. Future research should focus on improving segmentation by enhancing recording contrast and applying post-processing methods. Improved segmentation accuracy would enable routine motion-pattern analysis in clinical research, where the absence or presence of a motion pattern, or the identification of new patterns, has the potential to aid clinical decision-making.
(This article belongs to the Special Issue Spine Biomechanics)

13 pages, 1178 KiB  
Article
Retrospective Evaluation of Baseline Amino Acid PET for Identifying Future Regions of Tumor Recurrence in High-Grade Glioma Patients
by Dylan Henssen, Michael Rullmann, Andreas Schildan, Stephan Striepe, Matti Schürer, Paola Feraco, Cordula Scherlach, Katja Jähne, Ruth Stassart, Osama Sabri, Clemens Seidel and Swen Hesse
Cancers 2025, 17(12), 1986; https://doi.org/10.3390/cancers17121986 - 14 Jun 2025
Viewed by 445
Abstract
Background/Objectives: Positron emission tomography (PET) with radiolabeled amino acids is increasingly used in glioma patients for biopsy planning, tumor delineation, prognostication, and therapy response assessment. This study investigated whether baseline amino acid PET can identify regions at risk of future tumor recurrence. Methods: This retrospective case series included 14 patients with high-grade glioma. Contrast-enhanced magnetic resonance imaging (MRI) of the tumor recurrence was co-registered with baseline imaging (PET-MRI). Volumes of interest (VOIs) were derived from contrast-enhanced MRI at baseline and follow-up and from amino acid PET at baseline, and the Dice similarity coefficient (DSC) was used to assess their overlap. Dynamic and static PET parameters were additionally compared between the follow-up contrast-enhanced MRI VOIs and the baseline region of increased amino acid transport. Results: Regions of tumor recurrence overlapped significantly more with baseline regions of increased amino acid transport on PET than with regions of contrast enhancement on baseline MRI (p < 0.001). However, static and dynamic PET statistics did not differentiate regions that would later develop recurrence from other areas of increased amino acid transport at baseline. Conclusions: These findings reaffirm the ability of amino acid PET to visualize infiltrative glioma components not detected by contrast-enhanced MRI, supporting its role in visualizing glioma infiltration beyond the MRI-visible tumor, but they also indicate that accurately predicting the specific regions of recurrence from baseline PET remains limited.

15 pages, 2843 KiB  
Article
Improving the Precision of Deep-Learning-Based Head and Neck Target Auto-Segmentation by Leveraging Radiology Reports Using a Large Language Model
by Libing Zhu, Jean-Claude M. Rwigema, Xue Feng, Bilaal Ansari, Jingwei Duan, Yi Rong and Quan Chen
Cancers 2025, 17(12), 1935; https://doi.org/10.3390/cancers17121935 - 10 Jun 2025
Viewed by 563
Abstract
Background/Objectives: The accurate delineation of primary tumors (GTVp) and metastatic lymph nodes (GTVn) in head and neck (HN) cancers is essential for effective radiation treatment planning, yet remains a challenging and laborious task. This study aimed to develop a deep-learning-based auto-segmentation (DLAS) model trained on external datasets, with false positives eliminated using clinical diagnosis reports. Methods: The DLAS model was trained on a multi-institutional public dataset of 882 cases. Forty-four institutional cases were randomly selected as the external testing dataset. DLAS-generated GTVp and GTVn contours were validated against clinical diagnosis reports to identify false-positive and false-negative segmentation errors using two large language models, ChatGPT-4 and Llama-3. False positives were ruled out by matching the centroids of AI-generated contours with the slice locations or anatomical regions described in the reports. Performance was evaluated using the Dice similarity coefficient (DSC), the 95th-percentile Hausdorff distance (HD95), and tumor detection precision. Results: ChatGPT-4 outperformed Llama-3 in accurately extracting tumor locations from the diagnostic reports. False-positive contours were identified in 15 of 44 cases. After the ruling-out process, the mean DSC of the DLAS contours increased from 0.68 to 0.75 for GTVp and from 0.69 to 0.75 for GTVn; notably, the average HD95 for GTVn decreased from 18.81 mm to 5.2 mm. Post ruling out, the model achieved 100% precision for GTVp and GTVn when compared with physician-determined contours. Conclusions: The false-positive ruling-out approach based on diagnostic reports effectively enhances the precision of DLAS in the HN region; the model accurately identifies tumor locations and detects all false-negative errors.
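HD95, reported alongside DSC here, is the 95th percentile of surface-to-surface distances, which damps the outlier sensitivity of the plain Hausdorff maximum. A brute-force sketch over boundary-point coordinates (adequate for contour-sized point sets; distance transforms are the usual choice for full 3D volumes, and voxel spacing is ignored here):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets,
    e.g. boundary voxel coordinates obtained via np.argwhere(surface_mask)."""
    d = cdist(points_a, points_b)   # all pairwise Euclidean distances
    a_to_b = d.min(axis=1)          # nearest neighbour in B for each point of A
    b_to_a = d.min(axis=0)          # nearest neighbour in A for each point of B
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))
```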

17 pages, 9400 KiB  
Article
MRCA-UNet: A Multiscale Recombined Channel Attention U-Net Model for Medical Image Segmentation
by Lei Liu, Xiang Li, Shuai Wang, Jun Wang and Silas N. Melo
Symmetry 2025, 17(6), 892; https://doi.org/10.3390/sym17060892 - 6 Jun 2025
Viewed by 586
Abstract
Deep learning techniques play a crucial role in medical image segmentation for diagnostic purposes, with traditional convolutional neural networks (CNNs) and emerging transformers both achieving satisfactory results. CNN-based methods focus on extracting local image features, which helps in handling image details and texture; however, their receptive fields are relatively small, so they perform poorly on images with long-range dependencies. Conversely, transformer-based methods handle global information effectively but suffer from the significant computational complexity of building long-range dependencies, and they lack the ability to perceive image details and exploit channel features. These problems can result in unclear segmentations and blurred boundaries. Accordingly, this study proposes a multiscale recombined channel attention (MRCA) module, which extracts both global and local features and explores channel features during feature fusion. Specifically, MRCA first employs multibranch extraction of image features, applying operations such as blocking, shifting, and aggregation at different scales; this enables the model to recognize multiscale information both locally and globally. Feature selection is then performed to enhance the predictive capability of the model. Finally, features from different branches are concatenated and recombined across channels to complete the feature fusion. Building on this module, an MRCA-based U-Net (MRCA-UNet) framework is proposed for medical image segmentation. Experiments on the Synapse multi-organ segmentation (Synapse) dataset and the International Skin Imaging Collaboration (ISIC-2018) dataset demonstrate competitive segmentation performance, with an average Dice Similarity Coefficient (DSC) of 81.61% and a Hausdorff Distance (HD) of 23.36 on Synapse, and an accuracy of 95.94% on ISIC-2018.

16 pages, 2032 KiB  
Article
Auto-Segmentation and Auto-Planning in Automated Radiotherapy for Prostate Cancer
by Sijuan Huang, Jingheng Wu, Xi Lin, Guangyu Wang, Ting Song, Li Chen, Lecheng Jia, Qian Cao, Ruiqi Liu, Yang Liu, Xin Yang, Xiaoyan Huang and Liru He
Bioengineering 2025, 12(6), 620; https://doi.org/10.3390/bioengineering12060620 - 6 Jun 2025
Viewed by 612
Abstract
Objective: To develop and assess the clinical feasibility of auto-segmentation and auto-planning methods for automated radiotherapy of prostate cancer. Methods: A total of 166 patients were used to train a 3D U-Net model to segment the gross tumor volume (GTV), clinical target volume (CTV), nodal CTV (CTVnd), and organs at risk (OARs). Performance was assessed by the Dice similarity coefficient (DSC), Recall, Precision, Volume Ratio (VR), 95% Hausdorff distance (HD95%), and volumetric revision degree (VRD). An auto-planning network, also based on a 3D U-Net, was trained on 77 treatment plans derived from the 166 patients. Dosimetric differences and the clinical acceptability of the auto-plans were studied, and the effect of OAR editing on dosimetry was also evaluated. Results: On an independent set of 50 cases, auto-segmentation took 1 min 20 s per case. DSCs for the GTV, CTV, and CTVnd were 0.87, 0.88, and 0.82, respectively, with VRDs ranging from 0.09 to 0.14. OAR segmentation was highly accurate (DSC ≥ 0.83, Recall/Precision ≈ 1.0). Auto-planning required one, two, or three optimization iterations in 50%, 40%, and 10% of cases, respectively, and exhibited significantly better conformity (p ≤ 0.01) and OAR sparing (p ≤ 0.03) while maintaining comparable target coverage. Only 6.7% of auto-plans were deemed unacceptable, compared to 20% of manual plans, and 75% of auto-plans were considered superior. Notably, editing the OARs had no significant impact on doses. Conclusions: Auto-segmentation accuracy is comparable to that of manual segmentation, and auto-planning offers equivalent or better OAR protection, meeting the requirements of online automated radiotherapy and facilitating its clinical application.
(This article belongs to the Special Issue Novel Imaging Techniques in Radiotherapy)

15 pages, 7136 KiB  
Article
Source-Free Domain Adaptation for Cross-Modality Abdominal Multi-Organ Segmentation Challenges
by Xiyu Zhang, Xu Chen, Yang Wang, Dongliang Liu and Yifeng Hong
Information 2025, 16(6), 460; https://doi.org/10.3390/info16060460 - 29 May 2025
Viewed by 430
Abstract
Abdominal organ segmentation in CT images is crucial for accurate diagnosis, treatment planning, and condition monitoring. However, the annotation process is often hindered by challenges such as low contrast, artifacts, and complex organ structures. While unsupervised domain adaptation (UDA) has shown promise in addressing these issues by transferring knowledge from a different modality (source domain), its reliance on both source and target data during training presents a practical challenge in many clinical settings due to data privacy concerns. This study aims to develop a cross-modality abdominal multi-organ segmentation model for label-free CT (target domain) data, leveraging knowledge solely from a pre-trained source domain (MRI) model without accessing the source data. To achieve this, we generate source-like images from target-domain images using a one-way image translation approach with the pre-trained model. These synthesized images preserve the anatomical structure of the target, enabling segmentation predictions from the pre-trained model. To further enhance segmentation accuracy, particularly for organ boundaries and small contours, we introduce an auxiliary translation module with an image decoder and multi-level discriminator. The results demonstrate significant improvements across several performance metrics, including the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD), highlighting the effectiveness of the proposed method.
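ASSD, the second metric reported here, averages the distance from every surface voxel of one mask to the nearest surface of the other, in both directions. A SciPy sketch assuming isotropic voxels (anisotropic spacing would go into the `sampling` argument of the distance transform):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    """Boolean map of boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def assd(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Average symmetric surface distance between two binary masks."""
    surf_a, surf_b = surface(mask_a), surface(mask_b)
    dist_to_b = distance_transform_edt(~surf_b)  # each voxel's distance to B's surface
    dist_to_a = distance_transform_edt(~surf_a)
    return float(np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]).mean())
```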

32 pages, 2404 KiB  
Review
Bio-Inspired Metaheuristics in Deep Learning for Brain Tumor Segmentation: A Decade of Advances and Future Directions
by Shoffan Saifullah, Rafał Dreżewski, Anton Yudhana, Wahyu Caesarendra and Nurul Huda
Information 2025, 16(6), 456; https://doi.org/10.3390/info16060456 - 29 May 2025
Cited by 1 | Viewed by 900
Abstract
Accurate segmentation of brain tumors in magnetic resonance imaging (MRI) remains a challenging task due to heterogeneous tumor structures, varying intensities across modalities, and limited annotated data. Deep learning has significantly advanced segmentation accuracy; however, it often suffers from sensitivity to hyperparameter settings and limited generalization. To overcome these challenges, bio-inspired metaheuristic algorithms have been increasingly employed to optimize various stages of the deep learning pipeline, including hyperparameter tuning, preprocessing, architectural design, and attention modulation. This review systematically examines developments from 2015 to 2025, focusing on the integration of nature-inspired optimization methods such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Differential Evolution (DE), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and novel hybrids including CJHBA and BioSwarmNet into deep learning-based brain tumor segmentation frameworks. A structured multi-query search strategy was executed using Publish or Perish across Google Scholar and Scopus databases. Following PRISMA guidelines, 3895 records were screened through automated filtering and manual eligibility checks, yielding a curated set of 106 primary studies. Through bibliometric mapping, methodological synthesis, and performance analysis, we highlight trends in algorithm usage, application domains (e.g., preprocessing, architecture search), and segmentation outcomes measured by metrics such as Dice Similarity Coefficient (DSC), Jaccard Index (JI), Hausdorff Distance (HD), and ASSD. Our findings demonstrate that bio-inspired optimization significantly enhances segmentation accuracy and robustness, particularly in multimodal settings involving FLAIR and T1CE modalities. The review concludes by identifying emerging research directions in hybrid optimization, real-time clinical applicability, and explainable AI, providing a roadmap for future exploration in this interdisciplinary domain.
(This article belongs to the Section Review)
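To make one of these optimizers concrete: in hyperparameter tuning, each PSO particle is a candidate setting (say, a learning rate and a dropout rate) scored by a short validation run. A minimal sketch where a toy surrogate stands in for that expensive evaluation; all constants and the two tuned parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params: np.ndarray) -> float:
    """Stand-in for a validation score such as 1 - mean DSC after a short
    training run; minimum at lr = 1e-3, dropout = 0.2 by construction."""
    lr, dropout = params
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.2) ** 2

lo, hi = np.array([1e-5, 0.0]), np.array([1e-1, 0.5])   # search bounds
n, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5             # swarm size and PSO constants

pos = rng.uniform(lo, hi, size=(n, 2))                  # particle positions
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()                # swarm-wide best

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))                      # stochastic pulls
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (lr, dropout):", gbest)
```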

15 pages, 1196 KiB  
Article
Bone Segmentation in Low-Field Knee MRI Using a Three-Dimensional Convolutional Neural Network
by Ciro Listone, Diego Romano and Marco Lapegna
Big Data Cogn. Comput. 2025, 9(6), 146; https://doi.org/10.3390/bdcc9060146 - 28 May 2025
Viewed by 692
Abstract
Bone segmentation in magnetic resonance imaging (MRI) is crucial for clinical and research applications, including diagnosis, surgical planning, and treatment monitoring. However, it remains challenging due to anatomical variability and complex bone morphology. Manual segmentation is time-consuming and operator-dependent, fostering interest in automated methods. This study proposes an automated segmentation method based on a 3D U-Net convolutional neural network to segment the femur, tibia, and patella from low-field MRI scans. Low-field MRI offers advantages in cost, patient comfort, and accessibility but presents challenges related to lower signal quality. Our method achieved a Dice Similarity Coefficient (DSC) of 0.9838, Intersection over Union (IoU) of 0.9682, and Average Hausdorff Distance (AHD) of 0.0223, with an inference time of approximately 3.96 s per volume on a GPU. Although post-processing had minimal impact on metrics, it significantly enhanced the visual smoothness of bone surfaces, which is crucial for clinical use. The final segmentations enabled the creation of clean, 3D-printable bone models, beneficial for preoperative planning. These results demonstrate that the model achieves accurate segmentation with a high degree of overlap compared to manually segmented reference data. This accuracy results from meticulous fine-tuning of the network, along with the application of advanced data augmentation and post-processing techniques.
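The post-processing steps are not itemized in the abstract; one step that typically yields the clean, 3D-printable surfaces described is keeping only the largest connected component per bone label, sketched below as an assumption about the pipeline rather than a documented detail:

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Retain only the largest 3D connected component of a binary mask,
    discarding small spurious islands before surface extraction."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask.astype(bool)       # nothing segmented
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```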

13 pages, 503 KiB  
Article
Deep Learning for Adrenal Gland Segmentation: Comparing Accuracy and Efficiency Across Three Convolutional Neural Network Models
by Vlad-Octavian Bolocan, Oana Nicu-Canareica, Alexandru Mitoi, Maria Glencora Costache, Loredana Sabina Cornelia Manolescu, Cosmin Medar and Viorel Jinga
Appl. Sci. 2025, 15(10), 5388; https://doi.org/10.3390/app15105388 - 12 May 2025
Viewed by 512
Abstract
Adrenal glands are vital endocrine organs whose accurate segmentation on CT imaging presents significant challenges due to their small size and variable morphology. This study evaluates the efficacy of deep learning approaches for automatic adrenal gland segmentation from multiphase CT scans. We implemented three convolutional neural network architectures (U-Net, SegNet, and NablaNet) and assessed their performance on a dataset comprising 868 adrenal glands from contrast-enhanced abdominal CT scans. Performance was evaluated using the Dice similarity coefficient (DSC), alongside practical implementation metrics including training and deployment time. U-Net demonstrated superior segmentation performance (DSC: 0.630 ± 0.05 for right, 0.660 ± 0.06 for left adrenal glands) compared to NablaNet (DSC: 0.552 ± 0.08 for right, 0.550 ± 0.07 for left) and SegNet (DSC: 0.320 ± 0.10 for right, 0.335 ± 0.09 for left). While all models achieved high specificity, boundary delineation accuracy remained challenging. Our findings demonstrate the feasibility of deep learning-based adrenal gland segmentation while highlighting the persistent challenges in achieving the segmentation quality observed with larger abdominal organs. U-Net provides the optimal balance between accuracy and computational requirements, establishing a foundation for further refinement of AI-assisted adrenal imaging tools.

19 pages, 4766 KiB  
Article
Research on Soil Pore Segmentation of CT Images Based on MMLFR-UNet Hybrid Network
by Changfeng Qin, Jie Zhang, Yu Duan, Chenyang Li, Shanzhi Dong, Feng Mu, Chengquan Chi and Ying Han
Agronomy 2025, 15(5), 1170; https://doi.org/10.3390/agronomy15051170 - 11 May 2025
Viewed by 568
Abstract
Accurate segmentation of soil pore structure is crucial for studying soil water migration, nutrient cycling, and gas exchange. However, the low contrast and high noise of CT images from complex soil environments leave traditional segmentation methods with clear deficiencies in accuracy and robustness. This paper proposes a hybrid model, MMLFR-UNet, combining a Multi-Modal Low-Frequency Reconstruction algorithm (MMLFR) with UNet. MMLFR enhances key feature expression by extracting low-frequency image signals and suppressing noise through multi-scale spectral decomposition, whereas UNet excels at restoring segmentation detail and handling complex boundaries thanks to its encoding-decoding structure and skip connections. An undisturbed soil column, classified as Ferralsols (FAO/UNESCO), was collected in Hainan Province, China; CT scanning provided high-resolution images, and preprocessing operations such as fixed-layer sampling, cropping, and enhancement produced high-quality datasets suitable for deep learning. The results show that MMLFR-UNet outperforms UNet and traditional methods (e.g., Otsu and Fuzzy C-Means (FCM)) in Intersection over Union (IoU), Dice Similarity Coefficient (DSC), Pixel Accuracy (PA), and boundary similarity. Notably, the model is exceptionally robust and precise on segmentation tasks involving complex pore structures and low-contrast images.
(This article belongs to the Section Precision and Digital Agriculture)
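Otsu, one of the classical baselines the hybrid model is compared against, reduces segmentation to a single global threshold. A one-function scikit-image sketch (assuming pores are the darker class in these CT slices):

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_pore_mask(ct_slice: np.ndarray) -> np.ndarray:
    """Classical Otsu baseline: threshold the intensity histogram once and
    take the low-intensity side as pore space."""
    return ct_slice < threshold_otsu(ct_slice)
```

Its weakness on low-contrast, high-noise slices is exactly what motivates the learned low-frequency reconstruction in MMLFR-UNet.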
