Search Results (1,362)

Search Parameters:
Keywords = medical-image segmentation

14 pages, 555 KB  
Article
A Symmetric Multiscale Detail-Guided Attention Network for Cardiac MR Image Semantic Segmentation
by Hengqi Hu, Bin Fang, Bin Duo, Xuekai Wei, Jielu Yan, Weizhi Xian and Dongfen Li
Symmetry 2025, 17(11), 1807; https://doi.org/10.3390/sym17111807 (registering DOI) - 27 Oct 2025
Abstract
Cardiac medical image segmentation can advance healthcare and embedded vision systems. In this paper, a symmetric semantic segmentation architecture for cardiac magnetic resonance (MR) images based on a symmetric multiscale detail-guided attention network is presented. Detailed information and multiscale attention maps can be exploited more efficiently in this model. A symmetric encoder and decoder are used to generate high-dimensional semantic feature maps and segmentation masks, respectively. First, a series of densely connected residual blocks is introduced for extracting high-dimensional semantic features. Second, an asymmetric detail-guided module is proposed. In this module, a feature pyramid is used to extract detailed information and generate detailed feature maps as part of the detail guidance of the model during the training phase, which are used to extract deep features of multiscale information and calculate a detail loss with specific encoder semantic features. Third, a series of multiscale upsampling attention blocks symmetrical to the encoder is introduced in the decoder of the model. For each upsampling attention block, feature fusion is first performed on the previous-level low-resolution features and the symmetric skip connections of the same layer, and then spatial and channel attention are used to enhance the features. Image gradients of the input images are also introduced at the end of the decoder. Finally, the predicted segmentation masks are obtained by calculating a detail loss and a segmentation loss. Our method demonstrates outstanding performance on the public cardiac MR image dataset, which can achieve significant results for endocardial and epicardial segmentation of the left ventricle (LV). Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
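The training objective above combines a detail loss on encoder features with a segmentation loss on the predicted masks. A minimal NumPy sketch of such a combined objective — the soft-Dice segmentation term, L2 detail term, and weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (1 - Dice coefficient)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def detail_loss(encoder_feat, detail_feat):
    """L2 distance between encoder features and detail-guidance features."""
    return float(np.mean((encoder_feat - detail_feat) ** 2))

def total_loss(pred, target, enc_feat, det_feat, lam=0.5):
    """Weighted sum of segmentation (Dice) and detail losses; lam is illustrative."""
    return dice_loss(pred, target) + lam * detail_loss(enc_feat, det_feat)

# Toy check: a perfect prediction gives (essentially) zero Dice loss.
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(round(dice_loss(mask, mask), 6))
```

At training time the detail branch supplies `det_feat` only; at inference only the encoder–decoder path is needed, which is why the detail guidance adds no runtime cost.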
Show Figures

Figure 1

13 pages, 5261 KB  
Article
Atypical Presentations and Molecular Diagnosis of Ocular Bartonellosis
by Munirah Alafaleq and Christine Fardeau
Int. J. Mol. Sci. 2025, 26(21), 10421; https://doi.org/10.3390/ijms262110421 (registering DOI) - 27 Oct 2025
Abstract
This study describes unusual findings and management of neuroretinitis in patients with cat scratch disease (CSD); the patients' functional outcome after case-oriented treatment was analysed, and the current literature was reviewed. The design was a retrospective monocentric case series, with review of medical records and multimodal imaging, combined with a literature review. Five patients (four females and one male) with a mean age of 29.75 years (range: 11–71 years) had unusual findings of ocular bartonellosis, including inner retinitis, focal choroiditis, retinal microaneurysms, and bilateral sectorial optic nerve swelling. Bartonella-related ocular infections were not limited to the posterior segment of the eye. Elevated IgG titers supported the diagnosis, and molecular tests such as polymerase chain reaction (PCR) were positive in the aqueous humour of one patient. Intravitreal treatment proved useful in one of the cases. Case-oriented management was associated with improvement in visual acuity and in retinal and choroidal lesions. The range of ocular signs of Bartonella infection could be extended. Molecular tests such as PCR are useful approaches in the diagnosis of posterior uveitis, and treatment could require intravitreal antibiotic injections in unusual ocular bartonellosis. Full article
(This article belongs to the Special Issue Molecular Research in Ocular Inflammation and Infection)

24 pages, 1806 KB  
Article
Preoperative MRI-Based 3D Segmentation and Quantitative Modeling of Glandular and Adipose Tissues in Male Gynecomastia: A Retrospective Study
by Ziang Shi and Minqiang Xin
J. Clin. Med. 2025, 14(21), 7601; https://doi.org/10.3390/jcm14217601 (registering DOI) - 27 Oct 2025
Abstract
Background: This study aimed to explore the application value of magnetic resonance imaging (MRI)-based three-dimensional segmentation and reconstruction technology for spatial structural identification and volumetric quantification of glandular and adipose tissues in bilateral gynecomastia (GM) patients undergoing surgical treatment, hoping to provide precise imaging data to support clinical surgical decision-making. Methods: A retrospective analysis was performed on preoperative MRI images and general clinical data of 52 patients with bilateral gynecomastia at the patient level (bilateral totals, N = 52) who underwent surgical treatment in the Department of Aesthetic and Reconstructive Breast Surgery, Plastic Surgery Hospital of Chinese Academy of Medical Sciences, from March 2023 to September 2024. All images were acquired using a SIEMENS Aera 1.5 T MRI scanner with T1-weighted three-dimensional fat-suppressed sequence (t1_fl3d_tra_spair). Semi-automatic segmentation and active contour modeling (Snake model) using ITK-SNAP 4.0 software were employed to independently identify glandular and adipose tissues, reconstruct accurate three-dimensional anatomical models, and quantitatively analyze tissue volumes. Results: The MRI-based three-dimensional segmentation and reconstruction method accurately distinguished glandular and adipose tissues in male breasts, establishing precise three-dimensional anatomical models with excellent reproducibility and operational consistency. Among the 52 patients with bilateral gynecomastia, glandular tissue volume exhibited a markedly non-normal distribution, with a median of 6.11 cm3 (IQR, 3.03–12.98 cm3). Adipose tissue volume followed a normal distribution with a mean of 1348.84 ± 494.97 cm3. The total breast tissue volume also showed a normal distribution, with a mean of 1361.97 ± 496.83 cm3. 
The proportion of glandular tissue in total breast volume was non-normally distributed with a median of 0.50% (IQR, 0.27–1.21%), while the proportion of adipose tissue was also non-normally distributed with a median of 99.50% (IQR, 98.79–99.73%). Conclusions: MRI combined with computer-assisted three-dimensional segmentation and reconstruction technology efficiently and accurately achieves spatial identification, three-dimensional modeling, and volumetric quantification of glandular and adipose tissues in patients with bilateral gynecomastia. It objectively reveals the spatial compositional characteristics of male breast tissues. This approach provides precise, quantitative data for clinical decision-making regarding surgical treatment of gynecomastia, featuring robust standardization and strong clinical applicability. Full article

20 pages, 13884 KB  
Article
Prototype-Guided Zero-Shot Medical Image Segmentation with Large Vision-Language Models
by Huong Pham and Samuel Cheng
Appl. Sci. 2025, 15(21), 11441; https://doi.org/10.3390/app152111441 - 26 Oct 2025
Abstract
Building on advances in promptable segmentation models, this work introduces a framework that integrates Large Vision-Language Model (LVLM) bounding box priors with prototype-based region of interest (ROI) selection to improve zero-shot medical image segmentation. Unlike prior methods such as SaLIP, which often misidentify regions due to reliance on text–image CLIP similarity, the proposed approach leverages visual prototypes to mitigate language bias and enhance ROI ranking, resulting in more accurate segmentation. Bounding box estimation is further strengthened through systematic prompt engineering to optimize LVLM performance across diverse datasets and imaging modalities. Evaluation was conducted on three publicly available benchmark datasets—CC359 (brain MRI), HC18 (fetal head ultrasound), and CXRMAL (chest X-ray)—without any task-specific fine-tuning. The proposed method achieved substantial improvements over prior approaches. On CC359, it reached a Dice score of 0.95 ± 0.06 and a mean Intersection-over-Union (mIoU) of 0.91 ± 0.10. On HC18, it attained a Dice score of 0.82 ± 0.20 and mIoU of 0.74 ± 0.22. On CXRMAL, the model achieved a Dice score of 0.90 ± 0.08 and mIoU of 0.83 ± 0.12. These standard deviations reflect variability across test images within each dataset, indicating the robustness of the proposed zero-shot framework. These results demonstrate that integrating LVLM-derived bounding box priors with prototype-based selection substantially advances zero-shot medical image segmentation. Full article
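The Dice and mIoU figures quoted above are per-image overlap metrics between predicted and ground-truth masks. A minimal sketch of their computation for binary masks (NumPy; the toy masks are illustrative):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, gt):
    """Intersection-over-Union between two binary masks: |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, gt))  # 2*2/(3+3) ≈ 0.667
print(iou_score(pred, gt))   # 2/4 = 0.5
```

The reported dataset-level means and standard deviations come from averaging these per-image scores over each test set.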

23 pages, 2069 KB  
Article
Early Lung Cancer Detection via AI-Enhanced CT Image Processing Software
by Joel Silos-Sánchez, Jorge A. Ruiz-Vanoye, Francisco R. Trejo-Macotela, Marco A. Márquez-Vera, Ocotlán Diaz-Parra, Josué R. Martínez-Mireles, Miguel A. Ruiz-Jaimes and Marco A. Vera-Jiménez
Diagnostics 2025, 15(21), 2691; https://doi.org/10.3390/diagnostics15212691 - 24 Oct 2025
Abstract
Background/Objectives: Lung cancer remains the leading cause of cancer-related mortality worldwide among both men and women. Early and accurate detection is essential to improve patient outcomes. This study explores the use of artificial intelligence (AI)-based software for the diagnosis of lung cancer through the analysis of medical images in DICOM format, aiming to enhance image visualization, preprocessing, and diagnostic precision in chest computed tomography (CT) scans. Methods: The proposed system processes DICOM medical images converted to standard formats (JPG or PNG) for preprocessing and analysis. An ensemble of classical machine learning algorithms—including Random Forest, Gradient Boosting, Support Vector Machine, and K-Nearest Neighbors—was implemented to classify pulmonary images and predict the likelihood of malignancy. Image normalization, denoising, segmentation, and feature extraction were performed to improve model reliability and reproducibility. Results: The AI-enhanced system demonstrated substantial improvements in diagnostic accuracy and robustness compared with individual classifiers. The ensemble model achieved a classification accuracy exceeding 90%, highlighting its effectiveness in identifying malignant and non-malignant lung nodules. Conclusions: The findings indicate that AI-assisted CT image processing can significantly contribute to the early detection of lung cancer. The proposed methodology enhances diagnostic confidence, supports clinical decision-making, and represents a viable step toward integrating AI into radiological workflows for early cancer screening. Full article
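A minimal scikit-learn sketch of the kind of soft-voting ensemble described (Random Forest, Gradient Boosting, SVM, K-Nearest Neighbors); the synthetic data and hyperparameters are illustrative stand-ins for the study's extracted CT features, not its actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for features extracted from preprocessed CT images.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),  # probability=True enables soft voting
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",  # average predicted class probabilities across the four models
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.2f}")
```

Soft voting averages each model's predicted malignancy probability, which typically smooths out the errors of any single classifier — the mechanism behind the ensemble's gain over individual models reported above.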

17 pages, 3456 KB  
Article
CT-Based Radiomic Models in Biopsy-Proven Liver Fibrosis Staging: Direct Comparison of Segmentation Types and Organ Inclusion
by Andreea Mihaela Morariu-Barb, Tudor Drugan, Mihai Adrian Socaciu, Horia Stefanescu, Andrei Demirel Morariu and Monica Lupsor-Platon
Diagnostics 2025, 15(21), 2671; https://doi.org/10.3390/diagnostics15212671 - 23 Oct 2025
Abstract
Background and Objectives: Liver fibrosis is the key prognostic factor in patients with chronic liver diseases (CLD). Computed tomography (CT) is widely used in clinical practice, but it has limited value in assessing liver fibrosis in precirrhotic stages. Quantitative CT analysis based on radiomics can provide additional information by extracting hidden image patterns, but the optimal approach remains to be determined. The aims of this study were to evaluate automated CT-based radiomic models for predicting biopsy-proven liver fibrosis, to compare different segmentation strategies and organ inclusion approaches, and to assess their performance against vibration-controlled transient elastography (VCTE). We also examined whether these models could predict liver steatosis. Methods: In this retrospective study, 58 patients with biopsy-proven CLD and 9 controls underwent VCTE and contrast-enhanced abdominal CT within three months of biopsy. Radiomic features were extracted from portal-venous-phase images using both two-dimensional (2D) and three-dimensional (3D) segmentations of the liver, spleen, and combined liver–spleen. Multilayer perceptron (MLP) neural networks were trained to predict fibrosis staging (≥F1, ≥F2, ≥F3, and F4) and steatosis grading (≥S1, ≥S2, and S3). Model performance was assessed using area under the receiver operating characteristic curve (AUROC) and accuracy. Results: The 3D radiomic models outperformed 2D models in predicting liver fibrosis stages. In the 3D radiomic model category, the combined 3D liver–spleen model achieved very good to excellent performance (AUROCs 0.974, 0.929, 0.928, and 0.898, respectively, for ≥F1, ≥F2, ≥F3, and F4), with comparable results to VCTE (AUROCs 0.921, 0.957, 0.968, and 0.909, respectively, for ≥F1, ≥F2, ≥F3, and F4). Radiomic models showed poor predictive ability for steatosis grades (AUROCs 0.44–0.69) compared to controlled attenuation parameter (CAP) (AUROCs 0.798–0.917). 
Conclusions: CT-based radiomic models showed potential for predicting liver fibrosis stage. The 3D model of liver and spleen had the highest performance, comparable to VCTE. This approach could be valuable in clinical settings where elastography is unavailable or inconclusive and for opportunistic screening in patients already undergoing CT for other medical indications. In contrast, portal-venous-phase radiomics lacked predictive value for steatosis assessment. Larger, multicenter studies are required to validate these results. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)

18 pages, 4081 KB  
Article
DAFSF: A Defect-Aware Fine Segmentation Framework Based on Hybrid Encoder and Adaptive Optimization for Image Analysis
by Xiaoyi Liu, Jianyu Zhu, Zhanyu Zhu and Jianjun He
Appl. Sci. 2025, 15(21), 11351; https://doi.org/10.3390/app152111351 - 23 Oct 2025
Abstract
Accurate image segmentation is a fundamental requirement for fine-grained image analysis, providing critical support for applications such as medical diagnostics, remote sensing, and industrial fault detection. However, in complex industrial environments, conventional deep learning-based methods often struggle with noisy backgrounds, blurred boundaries, and highly imbalanced class distributions, which make fine-grained fault localization particularly challenging. To address these issues, this paper proposes a Defect-Aware Fine Segmentation Framework (DAFSF) that integrates three complementary components. First, a multi-scale hybrid encoder combines convolutional neural networks for capturing local texture details with Transformer-based modules for modeling global contextual dependencies. Second, a boundary-aware refinement module explicitly learns edge features to improve segmentation accuracy in damaged or ambiguous fault regions. Third, a defect-aware adaptive loss function jointly considers boundary weighting, hard-sample reweighting, and class balance, which enables the model to focus on challenging pixels while alleviating class imbalance. The proposed framework is evaluated on public benchmarks including Aeroscapes, Magnetic Tile Defect, and MVTec AD. The proposed DAFSF achieves mF1 scores of 85.3%, 85.9%, and 87.2%, and pixel accuracy (PA) of 91.5%, 91.8%, and 92.0% on the respective datasets. These findings highlight the effectiveness of the proposed framework for advancing fine-grained fault localization in industrial applications. Full article

22 pages, 2486 KB  
Review
Radiomics in Action: Multimodal Synergies for Imaging Biomarkers
by Everton Flaiban, Kaan Orhan, Bianca Costa Gonçalves, Sérgio Lúcio Pereira de Castro Lopes and Andre Luiz Ferreira Costa
Bioengineering 2025, 12(11), 1139; https://doi.org/10.3390/bioengineering12111139 - 22 Oct 2025
Abstract
Radiomics has recently emerged as a transformative approach in medical imaging, shifting radiology from qualitative description to quantitative analysis. By extracting high-throughput features from CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET/CT (Positron Emission Tomography/Computed Tomography), and CBCT (Cone Beam Computed Tomography), radiomics enables the characterization of tissue heterogeneity and the development of imaging biomarkers with diagnostic, prognostic, and predictive values. This narrative review explores the historical evolution of radiomics and its methodological foundations, including acquisition, segmentation, feature extraction and modeling, and platforms supporting these workflows. Clinical applications are highlighted in oncology, cardiology, neurology, and musculoskeletal and dentomaxillofacial imaging. Despite being promising, radiomics faces challenges related to standardization, reproducibility, PACS/RIS (Picture Archiving and Communication System/Radiology Information System) integration, and interpretability. Professional initiatives, such as the Image Biomarker Standardization Initiative (IBSI) and guidelines from radiological societies, are addressing these barriers by promoting harmonization and clinical translation. The ultimate vision is a radiomics-augmented radiology report in which validated biomarkers and predictive signatures complement conventional findings, thus enhancing objectivity and reproducibility and advancing precision medicine. Full article
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)

36 pages, 1971 KB  
Review
Segmentation Algorithms in Fundus Images: A Review of Digital Image Analysis Techniques
by Laura Johana González Zazueta, Betsaida Lariza López Covarrubias, Christian Xavier Navarro Cota, Mabel Vázquez Briseño, Juan Iván Nieto Hipólito and Gener José Avilés Rodríguez
Appl. Sci. 2025, 15(21), 11324; https://doi.org/10.3390/app152111324 - 22 Oct 2025
Abstract
This study presents a comprehensive and critical review of segmentation algorithms applied to digital fundus images, aiming to identify computational strategies that balance diagnostic accuracy with practical feasibility in clinical environments. A systematic search following PRISMA guidelines was conducted for studies published between 2014 and 2025, encompassing deep learning, classical machine learning, hybrid, and semi-supervised approaches. The review examines how each methodological family performs in segmenting key anatomical structures such as blood vessels, the optic disc, and the fovea, considering both algorithmic and clinical metrics. Findings reveal that advanced deep learning models—particularly U-Net and CNN-based architectures—achieve superior accuracy in delineating complex and low-contrast structures but demand high computational resources. In contrast, traditional and hybrid methods offer efficient alternatives for real-time or low-resource settings, maintaining acceptable precision while minimizing cost. Importantly, the analysis underscores the persistent gap between methodological innovation and clinical translation, emphasizing the need for lightweight, clinically interpretable models that integrate algorithmic performance with medical relevance. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)

16 pages, 2216 KB  
Article
Modeling of Severity Classification Algorithm Using Abdominal Aortic Aneurysm Computed Tomography Image Segmentation Based on U-Net with Improved Noise Reduction Performance
by Sewon Lim, Hajin Kim, Kang-Hyeon Seo and Youngjin Lee
Sensors 2025, 25(21), 6509; https://doi.org/10.3390/s25216509 - 22 Oct 2025
Abstract
Accurate segmentation of abdominal aortic aneurysm (AAA) from computed tomography (CT) images is critical for early diagnosis and treatment planning of vascular diseases. However, noise in CT images obscures vessel boundaries, reducing segmentation accuracy. U-Net is widely used for medical image segmentation, where noise removal is critical. This study applied various denoising filters for U-Net segmentation and classified the severity of segmented AAA images to evaluate accuracy. Poisson–Gaussian noise was added to AAA CT images, and then average, median, Wiener, and median-modified Wiener filters (MMWF) were applied. U-Net-based segmentation was performed, and the segmentation accuracy of the output images obtained per filter was quantitatively assessed. Furthermore, the Hough circle algorithm was applied to the segmented images for diameter measurement, enabling severity classification and evaluation of classification accuracy. MMWF application improved the Matthews correlation coefficient, Dice score, Jaccard coefficient, and mean surface distance by 31.09%, 34.25%, 53.99%, and 3.70%, respectively, compared with images with added noise. Moreover, classification based on the output images obtained after MMWF application demonstrated the highest accuracy, with sensitivity, precision, and accuracy reaching 100%. Thus, U-Net-based segmentation yields more accurate results when images are processed with the MMWF and analyzed using the Hough circle algorithm. Full article
(This article belongs to the Collection Biomedical Imaging and Sensing)
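Three of the four denoising filters named above are available directly in SciPy; a sketch on a synthetic noisy disk follows. The median-modified Wiener filter (MMWF) itself is not a SciPy built-in and is omitted here, and the noise levels are illustrative, not the study's:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter
from scipy.signal import wiener

rng = np.random.default_rng(0)

# Synthetic "vessel cross-section": a bright disk on a dark background.
yy, xx = np.mgrid[:64, :64]
clean = ((yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2).astype(float)

# Approximate Poisson-Gaussian corruption (illustrative intensity scaling).
noisy = rng.poisson(clean * 50) / 50.0 + rng.normal(0, 0.1, clean.shape)

denoised = {
    "average": uniform_filter(noisy, size=3),  # local mean
    "median": median_filter(noisy, size=3),    # local median
    "wiener": wiener(noisy, mysize=3),         # local adaptive Wiener estimate
}
for name, img in denoised.items():
    mse = float(np.mean((img - clean) ** 2))
    print(f"{name:7s} MSE vs clean: {mse:.4f}")
```

In the study's pipeline, the denoised slices would then be segmented with U-Net and the aneurysm diameter measured with a Hough circle transform (e.g., OpenCV's `cv2.HoughCircles`) to classify severity.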

18 pages, 2025 KB  
Article
A Priori Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Using Deep Features from Pre-Treatment MRI and CT
by Deok Hyun Jang, Laurentius O. Osapoetra, Lakshmanan Sannachi, Belinda Curpen, Ana Pejović-Milić and Gregory J. Czarnota
Cancers 2025, 17(20), 3394; https://doi.org/10.3390/cancers17203394 - 21 Oct 2025
Abstract
Background: Response to neoadjuvant chemotherapy (NAC) is a key prognostic indicator in breast cancer, yet current assessment relies on postoperative pathology. This study investigated the use of deep features derived from pre-treatment MRI and CT scans, in conjunction with clinical variables, to predict treatment response a priori. Methods: Two response endpoints were analyzed: pathologic complete response (pCR) versus non-pCR, and responders versus non-responders, with response defined as a reduction in tumor size of at least 30%. Intratumoral and peritumoral segmentations were generated on contrast-enhanced T1-weighted (CE-T1) and T2-weighted MRI, as well as contrast-enhanced CT images of tumors. Deep features were extracted from these regions using ResNet10, ResNet18, ResNet34, and ResNet50 architectures pre-trained with MedicalNet. Handcrafted radiomic features were also extracted for comparison. Feature selection was conducted with minimum redundancy maximum relevance (mRMR) followed by recursive feature elimination (RFE), and classification was performed using XGBoost across ten independent data partitions. Results: A total of 177 patients were analyzed in this study. ResNet34-derived features achieved the highest overall classification performance under both criteria, outperforming handcrafted features and deep features from other ResNet architectures. For distinguishing pCR from non-pCR, ResNet34 achieved a balanced accuracy of 81.6%, whereas handcrafted radiomics achieved 77.9%. For distinguishing responders from non-responders, ResNet34 achieved a balanced accuracy of 73.5%, compared with 70.2% for handcrafted radiomics. Conclusions: Deep features extracted from routinely acquired MRI and CT, when combined with clinical information, improve the prediction of NAC response in breast cancer. 
This multimodal framework demonstrates the value of deep learning-based approaches as a complement to handcrafted radiomics and provides a basis for more individualized treatment strategies. Full article
(This article belongs to the Special Issue CT/MRI/PET in Cancer)
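A sketch of the feature-selection-then-classification step using scikit-learn's recursive feature elimination (RFE). The mRMR pre-filtering stage is omitted and XGBoost is replaced by sklearn's GradientBoostingClassifier, so this is a simplified stand-in for the paper's pipeline, run on synthetic features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for deep features extracted from MRI/CT tumor regions.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)

# Recursively drop the 5 least important features per round until 8 remain.
selector = RFE(GradientBoostingClassifier(random_state=0),
               n_features_to_select=8, step=5)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape)  # (200, 8)

# Classify on the selected features, as the study does with XGBoost.
clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X_sel, y, cv=5)
print(f"mean CV accuracy on selected features: {scores.mean():.2f}")
```

RFE ranks features by the estimator's importances at each round, which suits gradient-boosted trees; the study additionally repeats this over ten independent data partitions to stabilize the estimate.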

18 pages, 11753 KB  
Article
SemiSeg-CAW: Semi-Supervised Segmentation of Ultrasound Images by Leveraging Class-Level Information and an Adaptive Multi-Loss Function
by Somayeh Barzegar and Naimul Khan
Mach. Learn. Knowl. Extr. 2025, 7(4), 124; https://doi.org/10.3390/make7040124 - 20 Oct 2025
Abstract
The limited availability of pixel-level annotated medical images complicates training supervised segmentation models, as these models require large datasets. To deal with this issue, SemiSeg-CAW, a semi-supervised segmentation framework that leverages class-level information and an adaptive multi-loss function, is proposed to reduce dependency on extensive annotations. The model combines segmentation and classification tasks in a multitask architecture that includes segmentation, classification, weight generation, and ClassElevateSeg modules. In this framework, the ClassElevateSeg module is initially pre-trained and then fine-tuned jointly with the main model to produce auxiliary feature maps that support the main model, while the adaptive weighting strategy computes a dynamic combination of classification and segmentation losses using trainable weights. The proposed approach enables effective use of both labeled and unlabeled images with class-level information by compensating for the shortage of pixel-level labels. Experimental evaluation on two public ultrasound datasets demonstrates that SemiSeg-CAW consistently outperforms fully supervised segmentation models when trained with equal or fewer labeled samples. The results suggest that incorporating class-level information with adaptive loss weighting provides an effective strategy for semi-supervised medical image segmentation and can improve the segmentation performance in situations with limited annotations. Full article
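The adaptive weighting of segmentation and classification losses with trainable weights can be illustrated with the common homoscedastic-uncertainty formulation for multi-task losses; this is a generic stand-in, not SemiSeg-CAW's exact weighting scheme:

```python
import numpy as np

def adaptive_multi_loss(seg_loss, cls_loss, log_vars):
    """Combine two task losses with trainable log-variance weights.

    Each loss is scaled by exp(-log_var) (a learned precision) plus a
    log_var regularizer that stops the weights collapsing to zero.
    Illustrative uncertainty-style weighting, not the paper's formula.
    """
    total = 0.0
    for loss, log_var in zip((seg_loss, cls_loss), log_vars):
        total += np.exp(-log_var) * loss + log_var
    return total

# With log_var = 0 for both tasks, this reduces to a plain sum of losses.
print(round(adaptive_multi_loss(0.8, 0.3, [0.0, 0.0]), 6))  # 1.1
```

In training, the `log_vars` would be optimized jointly with the network parameters, letting the model down-weight whichever task is currently noisier — the same goal as the trainable weights described above.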

21 pages, 2977 KB  
Article
Dataset-Aware Preprocessing for Hippocampal Segmentation: Insights from Ablation and Transfer Learning
by Faizaan Fazal Khan, Jun-Hyung Kim, Ji-In Kim and Goo-Rak Kwon
Mathematics 2025, 13(20), 3309; https://doi.org/10.3390/math13203309 - 16 Oct 2025
Abstract
Accurate hippocampal segmentation in 3D MRI is essential for neurodegenerative disease research and diagnosis. Preprocessing pipelines can strongly influence segmentation accuracy, yet their impact across datasets and in transfer learning scenarios remains underexplored. This study systematically compares a No Preprocessing (NP) pipeline and a Full Preprocessing (FP) pipeline for hippocampal segmentation on the EADC-ADNI HarP clinical dataset and the multi-site MSD dataset using a 3D U-Net with residual connections and dropout regularization. Evaluations employed standard overlap metrics, Hausdorff Distance (HD), and Wilcoxon signed-rank tests, complemented by qualitative analysis. Results show that NP consistently outperformed FP in Dice, Jaccard, and F1 metrics on HarP (e.g., Dice 0.8876 vs. 0.8753, p < 0.05), while FP achieved superior HD, indicating better boundary precision. Similar trends emerged in transfer learning from MSD to HarP, with NP improving overlap measures and FP maintaining lower HD. To test whether the findings generalize across architectures, experiments on the HarP dataset were also repeated with a 3D V-Net backbone, which reproduced the same trend. Comparative analysis with recent studies confirmed the competitiveness of the proposed approach despite lower input resolution and reduced model complexity. These findings highlight that preprocessing choice should be tailored to dataset characteristics and the target evaluation metric. The results provide practical guidance for selecting segmentation workflows in clinical and multi-center neuroimaging applications. Full article
(This article belongs to the Special Issue The Application of Deep Neural Networks in Image Processing)
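The overlap metrics compared in the abstract above have simple closed forms. A minimal sketch (not the study's code; masks are flattened to 0/1 lists and the example values are illustrative):

```python
# Minimal sketch (not the study's code): Dice and Jaccard overlap between a
# predicted binary mask and a ground-truth mask, here flattened to 0/1 lists.

def dice(pred, gt):
    # Dice = 2*|A ∩ B| / (|A| + |B|)
    inter = sum(p and g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, gt):
    # Jaccard = |A ∩ B| / |A ∪ B|; related to Dice by J = D / (2 - D)
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

pred = [1, 1, 0, 1, 0, 0]
gt   = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, gt), 4))     # 0.6667
print(round(jaccard(pred, gt), 4))  # 0.5
```

Because Dice and Jaccard are monotonically related, a pipeline that wins on one typically wins on the other, which is why boundary-sensitive Hausdorff Distance is reported separately.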

31 pages, 3812 KB  
Review
Generative Adversarial Networks in Dermatology: A Narrative Review of Current Applications, Challenges, and Future Perspectives
by Rosa Maria Izu-Belloso, Rafael Ibarrola-Altuna and Alex Rodriguez-Alonso
Bioengineering 2025, 12(10), 1113; https://doi.org/10.3390/bioengineering12101113 - 16 Oct 2025
Abstract
Generative Adversarial Networks (GANs) have emerged as powerful tools in artificial intelligence (AI) with growing relevance in medical imaging. In dermatology, GANs are revolutionizing image analysis, enabling synthetic image generation, data augmentation, color standardization, and improved diagnostic model training. This narrative review explores the landscape of GAN applications in dermatology, systematically analyzing 27 key studies and identifying 11 main clinical use cases. These range from the synthesis of under-represented skin phenotypes to segmentation, denoising, and super-resolution imaging. The review also examines the commercial implementations of GAN-based solutions relevant to practicing dermatologists. We present a comparative summary of GAN architectures, including DCGAN, cGAN, StyleGAN, CycleGAN, and advanced hybrids. We analyze technical metrics used to evaluate performance, such as Fréchet Inception Distance (FID), SSIM, Inception Score, and Dice Coefficient, and discuss challenges like data imbalance, overfitting, and the lack of clinical validation. Additionally, we review ethical concerns and regulatory limitations. Our findings highlight the transformative potential of GANs in dermatology while emphasizing the need for standardized protocols and rigorous validation. While early results are promising, few models have yet reached real-world clinical integration. The democratization of AI tools and open-access datasets are pivotal to ensure equitable dermatologic care across diverse populations. This review serves as a comprehensive resource for dermatologists, researchers, and developers interested in applying GANs in dermatological practice and research. Future directions include multimodal integration, clinical trials, and explainable GANs to facilitate adoption in daily clinical workflows.
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)
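Of the evaluation metrics named in the review above, FID has a closed form over Gaussian feature statistics. A sketch assuming NumPy and SciPy are available and that feature means and covariances have already been extracted from an Inception network (the example statistics are synthetic, not from any study):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    """Fréchet Inception Distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2*(cov1 @ cov2)^(1/2))."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2 - 2.0 * covmean))

# Identical statistics give FID 0; shifting one mean by a unit vector gives 1.
mu, cov = np.zeros(2), np.eye(2)
print(fid(mu, cov, mu, cov))                          # ≈ 0.0
print(fid(mu, cov, mu + np.array([1.0, 0.0]), cov))   # ≈ 1.0
```

Lower FID means the synthetic image distribution is closer to the real one, which is why it is the dominant metric for judging GAN-generated dermoscopic images.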

18 pages, 2949 KB  
Article
UNETR++ with Voxel-Focused Attention: Efficient 3D Medical Image Segmentation with Linear-Complexity Transformers
by Sithembiso Ntanzi and Serestina Viriri
Appl. Sci. 2025, 15(20), 11034; https://doi.org/10.3390/app152011034 - 14 Oct 2025
Abstract
There have been significant breakthroughs in developing models for segmenting 3D medical images, with many promising results attributed to the incorporation of Vision Transformers (ViT). However, the fundamental mechanism of transformers, known as self-attention, has quadratic complexity, which significantly increases computational requirements, especially in the case of 3D medical images. In this paper, we investigate the UNETR++ model and propose a voxel-focused attention mechanism inspired by TransNeXt pixel-focused attention. The core component of UNETR++ is the Efficient Paired Attention (EPA) block, which learns from two interdependent branches: spatial and channel attention. For spatial attention, we incorporated the voxel-focused attention mechanism, which has linear complexity with respect to input sequence length, rather than projecting the keys and values into lower dimensions. The deficiency of UNETR++ lies in its reliance on dimensionality reduction for spatial attention, which improves efficiency but risks information loss. Our contribution is to replace this with a voxel-focused attention design that achieves linear complexity without low-dimensional projection, thereby reducing parameters while preserving representational power. This effectively reduces the model’s parameter count while maintaining competitive performance and inference speed. On the Synapse dataset, the enhanced UNETR++ model contains 21.42 M parameters, a 50% reduction from the original 42.96 M, while achieving a competitive Dice score of 86.72%.
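The quadratic-versus-linear complexity argument above can be made concrete with a back-of-envelope count of attention scores per head. The neighborhood size k below is a hypothetical choice for illustration, not a parameter from the paper:

```python
# Sketch of why quadratic self-attention is costly on 3D volumes: count the
# attention-matrix entries per head for a full N x N attention versus a
# linear-complexity scheme where each voxel attends to a fixed k-voxel
# neighborhood (k = 27, i.e. a 3x3x3 window, is an illustrative assumption).

def quadratic_entries(d, h, w):
    n = d * h * w          # sequence length = number of voxels
    return n * n           # full N x N attention matrix: O(N^2)

def linear_entries(d, h, w, k=27):
    n = d * h * w
    return n * k           # k scores per voxel: O(N)

full = quadratic_entries(64, 64, 64)   # 262144^2 = 68,719,476,736 entries
lin = linear_entries(64, 64, 64)       # 262144 * 27 = 7,077,888 entries
print(full // lin)                     # 9709x fewer attention scores
```

The gap widens with resolution, since N grows with the cube of the side length, which is why quadratic attention becomes the bottleneck for volumetric inputs in particular.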
