Search Results (326)

Search Parameters:
Keywords = ultrasound image segmentation

37 pages, 4259 KB  
Article
Image-Based Segmentation of Hydrogen Bubbles in Alkaline Electrolysis: A Comparison Between Ilastik and U-Net
by José Pereira, Reinaldo Souza, Arthur Normand and Ana Moita
Algorithms 2026, 19(1), 77; https://doi.org/10.3390/a19010077 - 16 Jan 2026
Abstract
This study aims to enhance the efficiency of hydrogen production through alkaline water electrolysis by analyzing hydrogen bubble dynamics using high-speed image processing and machine learning algorithms. The experiments were conducted to evaluate the effects of electrical current and ultrasound oscillations on the system performance. The bubble formation and detachment process was recorded and analyzed using two segmentation models: Ilastik, a GUI-based tool, and U-Net, a deep learning convolutional network implemented in PyTorch v. 2.9.0. Both models were trained on a dataset of 24 images under varying experimental conditions. The evaluation metrics included Intersection over Union (IoU), Root Mean Square Error (RMSE), and bubble diameter distribution. Ilastik achieved better accuracy and lower RMSE, while U-Net offered higher scalability and integration flexibility within Python environments. Both models faced challenges when detecting small bubbles and under complex lighting conditions. Improvements such as expanding the training dataset, increasing image resolution, and adopting patch-based processing were proposed. Overall, the results demonstrate that automated image segmentation can provide reliable bubble characterization, contributing to the optimization of electrolysis-based hydrogen production. Full article
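
The evaluation metrics named above (IoU and RMSE) are standard pixel-wise measures. The paper's evaluation code is not shown here, so the following is a minimal NumPy sketch of how these two metrics are commonly computed for a pair of binary masks; the array names and random test masks are illustrative placeholders, not the authors' data.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root Mean Square Error, treating the masks as 0/1 images."""
    diff = pred.astype(np.float64) - target.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative usage with random masks standing in for model output vs. ground truth.
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=(256, 256))
pr = rng.integers(0, 2, size=(256, 256))
print(f"IoU:  {iou(pr, gt):.3f}")
print(f"RMSE: {rmse(pr, gt):.3f}")
```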

14 pages, 623 KB  
Article
Improved Multisource Image-Based Diagnostic for Thyroid Cancer Detection: ANTHEM National Complementary Plan Research Project
by Domenico Parmeggiani, Alessio Cece, Massimo Agresti, Francesco Miele, Pasquale Luongo, Giancarlo Moccia, Francesco Torelli, Rossella Sperlongano, Paola Bassi, Mehrdad Savabi Far, Shima Tajabadi, Agostino Fernicola, Marina Di Domenico, Federica Colapietra, Paola Della Monica, Stefano Avenia and Ludovico Docimo
Appl. Sci. 2026, 16(2), 830; https://doi.org/10.3390/app16020830 - 13 Jan 2026
Abstract
Thyroid nodule evaluation relies heavily on ultrasound imaging, yet it suffers from significant inter-operator variability. To address this, we present a preliminary validation of the Synergy-Net platform, an AI-driven Computer-Aided Diagnosis (CAD) system designed to standardize acquisition and improve diagnostic accuracy. The system integrates a U-Net architecture for anatomical segmentation and a ResNet-50 classifier for lesion characterization within a Human-in-the-Loop (HITL) workflow. The study enrolled 110 patients (71 benign, 39 malignant) undergoing surgery. Performance was evaluated against histopathological ground truth. The system achieved an Accuracy of 90.35% (95% CI: 88.2–92.5%), Sensitivity of 90.64% (95% CI: 87.9–93.4%), and an AUC of 0.90. Furthermore, the framework introduces a multimodal approach, performing late fusion of imaging features with genomic profiles (TruSight One panel). While current results validate the 2D diagnostic pipeline, the discussion outlines the transition to the ANTHEM framework, incorporating future 3D volumetric analysis and digital pathology integration. These findings suggest that AI-assisted standardization can significantly enhance diagnostic precision, though multi-center validation remains necessary. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
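
The reported accuracy, sensitivity and AUC are standard binary-classification metrics computed against the histopathological ground truth. The Synergy-Net code is not available here, so the sketch below only illustrates how such metrics are typically obtained with scikit-learn; the label and score arrays are hypothetical stand-ins.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Hypothetical data: 1 = malignant, 0 = benign (histopathology is the ground truth).
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=110)                              # 110 patients, as in the study
y_score = np.clip(y_true * 0.7 + rng.normal(0.3, 0.25, size=110), 0, 1)
y_pred = (y_score >= 0.5).astype(int)                              # threshold the classifier output

accuracy = accuracy_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)                         # recall on the malignant class
auc = roc_auc_score(y_true, y_score)

print(f"Accuracy:    {accuracy:.3f}")
print(f"Sensitivity: {sensitivity:.3f}")
print(f"AUC:         {auc:.3f}")
```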

23 pages, 1308 KB  
Article
MFA-Net: Multiscale Feature Attention Network for Medical Image Segmentation
by Jia Zhao, Han Tao, Song Liu, Meilin Li and Huilong Jin
Electronics 2026, 15(2), 330; https://doi.org/10.3390/electronics15020330 - 12 Jan 2026
Abstract
Medical image segmentation acts as a foundational element of medical image analysis. Yet its accuracy is frequently limited by the scale fluctuations of anatomical targets and the intricate contextual traits inherent in medical images—including vaguely defined structural boundaries and irregular shape distributions. To tackle these constraints, we design a multi-scale feature attention network (MFA-Net), customized specifically for thyroid nodule, skin lesion, and breast lesion segmentation tasks. This network framework integrates three core components: a Bidirectional Feature Pyramid Network (Bi-FPN), a Slim-neck structure, and the Convolutional Block Attention Module (CBAM). CBAM steers the model to prioritize boundary regions while filtering out irrelevant information, which in turn enhances segmentation precision. Bi-FPN facilitates more robust fusion of multi-scale features via iterative integration of top-down and bottom-up feature maps, supported by lateral and vertical connection pathways. The Slim-neck design is constructed to simplify the network’s architecture while effectively merging multi-scale representations of both target and background areas, thus enhancing the model’s overall performance. Validation across four public datasets covering thyroid ultrasound (TNUI-2021, TN-SCUI 2020), dermoscopy (ISIC 2016), and breast ultrasound (BUSI) shows that our method outperforms state-of-the-art segmentation approaches, achieving Dice similarity coefficients of 0.955, 0.971, 0.976, and 0.846, respectively. Additionally, the model maintains a compact parameter count of just 3.05 million and delivers an extremely fast inference latency of 1.9 milliseconds—metrics that significantly outperform those of current leading segmentation techniques. In summary, the proposed framework demonstrates strong performance in thyroid, skin, and breast lesion segmentation, delivering an optimal trade-off between high accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)
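
CBAM (channel attention followed by spatial attention) is a published building block rather than something introduced by this paper, so a generic PyTorch rendering can illustrate what the module does inside MFA-Net; the reduction ratio and 7x7 spatial kernel below are conventional defaults assumed for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = torch.mean(x, dim=(2, 3), keepdim=True)
        max_pool = torch.amax(x, dim=(2, 3), keepdim=True)
        channel_att = torch.sigmoid(self.mlp(avg_pool) + self.mlp(max_pool))
        x = x * channel_att
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        spatial_att = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * spatial_att

# Illustrative usage on a dummy feature map.
feat = torch.randn(1, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```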

12 pages, 1495 KB  
Case Report
A Case of Misdiagnosed Hepatic Sarcoidosis: Evaluating Ultrasound Resolution Microscopy for Differentiating Hepatic Sarcoidosis from Hepatocellular Carcinoma
by Jie Zhang, Kazushi Numata, Jintian Zhang, Wenbin Zhang and Feiqian Wang
Diagnostics 2026, 16(2), 238; https://doi.org/10.3390/diagnostics16020238 - 12 Jan 2026
Abstract
Background and Clinical Significance: Hepatic sarcoidosis is a benign lesion of unknown etiology. The gold standard for diagnosing hepatic sarcoidosis is histopathological examination. The symptoms and imaging findings of patients with hepatic sarcoidosis are often atypical, leading to misdiagnosis as hepatocellular carcinoma (HCC). Ultrasound resolution microscopy (URM) can overcome the diffraction limit, enabling fine visualization and quantitative analysis of the microvascular networks. This study aimed to provide new evidence for the differential diagnosis of these two diseases by comparing the URM parameters of hepatic sarcoidosis initially misdiagnosed as HCC with those of HCC. Case Presentation: A 67-year-old woman was admitted to the hospital due to upper abdominal pain for two weeks. Ultrasonography revealed a liver mass. The lesion was located in segment IV of the left hepatic lobe, was approximately 18 × 10 mm in size, and appeared hypoechoic. Contrast-enhanced ultrasound and enhanced magnetic resonance imaging both showed a “fast-in, fast-out” pattern, strongly suggesting HCC. The tumor markers were within the normal range. The patient underwent a laparoscopic left hepatic lobectomy. The histopathological diagnosis of the resected specimen was “hepatic sarcoidosis”. URM examination was performed during the preoperative diagnostic process. Subsequently, the URM parameters of the patient’s lesion were analyzed and compared with those of HCC. The results showed differences in multiple URM parameters, including microvascular flow velocity, diameter, microvascular density ratio, and vascular distribution, between this case of hepatic sarcoidosis and HCC. Conclusions: URM can quantitatively and multidimensionally evaluate the microvasculature of liver lesions, providing new reference data for the diagnosis and differential diagnosis of hepatic sarcoidosis. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)

28 pages, 5526 KB  
Article
Symmetry-Aware SwinUNet with Integrated Attention for Transformer-Based Segmentation of Thyroid Ultrasound Images
by Ammar Oad, Imtiaz Hussain Koondhar, Feng Dong, Weibing Liu, Beiji Zou, Weichun Liu, Yun Chen and Yaoqun Wu
Symmetry 2026, 18(1), 141; https://doi.org/10.3390/sym18010141 - 10 Jan 2026
Abstract
Accurate segmentation of thyroid nodules in ultrasound images remains challenging due to low contrast, speckle noise, and inter-patient variability that disrupt the inherent spatial symmetry of thyroid anatomy. This study proposes a symmetry-aware SwinUNet framework with integrated spatial attention for thyroid nodule segmentation. The hierarchical window-based Swin Transformer encoder preserves spatial symmetry and scale consistency while capturing both global contextual information and fine-grained local features. Attention modules in the decoder emphasize symmetry consistent anatomical regions and asymmetric nodule boundaries, effectively suppressing irrelevant background responses. The proposed method was evaluated on the publicly available TN3K thyroid ultrasound dataset. Experimental results demonstrate strong performance, achieving a Dice Similarity Coefficient of 85.51%, precision of 87.05%, recall of 89.13%, an IoU of 78.00%, accuracy of 97.02%, and an AUC of 99.02%. Compared with the baseline model, the proposed approach improves the IoU and Dice score by 15.38% and 12.05%, respectively, confirming its ability to capture symmetry-preserving nodule morphology and boundary asymmetry. These findings indicate that the proposed symmetry-aware SwinUNet provides a robust and clinically promising solution for thyroid ultrasound image analysis and computer-aided diagnosis. Full article

23 pages, 1306 KB  
Systematic Review
From Testis to Retroperitoneum: The Role of Radiomics and Artificial Intelligence for Primary Tumors and Nodal Disease in Testicular Cancer: A Systematic Review
by Teodora Telecan, Vlad Cristian Munteanu, Adriana Ioana Gaia-Oltean, Carmen-Bianca Crivii and Roxana-Denisa Capraș
Medicina 2026, 62(1), 125; https://doi.org/10.3390/medicina62010125 - 7 Jan 2026
Abstract
Background and Objectives: Radiomics and artificial intelligence (AI) offer emerging quantitative tools for enhancing the diagnostic evaluation of testicular cancer. Conventional imaging—ultrasound (US), magnetic resonance imaging (MRI), and computed tomography (CT)—remains central to management but has limited ability to characterize tumor subtypes, detect occult nodal disease, or differentiate necrosis, teratoma, and viable tumor in post-chemotherapy residual masses. This systematic review summarizes current advances in radiomics and AI for both primary tumors and retroperitoneal disease. Materials and Methods: A systematic search of PubMed, Scopus, and Web of Science identified studies applying radiomics or AI to testicular tumors, retroperitoneal lymph nodes and post-chemotherapy residual masses. Eligible studies included quantitative imaging analyses performed on ultrasound, MRI, and CT, with optional integration of clinical or molecular biomarkers. Eighteen studies met inclusion criteria and were evaluated with respect to methodological design, diagnostic performance, and translational readiness. Results: Across modalities, radiomics demonstrated encouraging discriminatory capacity, with accuracies of 74–82% for ultrasound, 80.7–97.9% for MRI, and 71.7–85.3% for CT. CT-based radiomics for post-chemotherapy residual masses showed moderate ability to distinguish necrosis/fibrosis, teratoma, and viable germ-cell tumor, though heterogeneous methodologies and limited external validation constrained generalizability. The strongest performance was observed in multimodal approaches: integrating radiomics with clinical variables or circulating microRNAs improved accuracy by up to 12% and 15%, respectively, mirroring gains reported in other oncologic radiomics applications. Persisting variability in segmentation practices, acquisition protocols, feature extraction, and machine-learning methods highlights ongoing barriers to reproducibility. Conclusions: Radiomics and AI-enhanced frameworks represent promising adjuncts for improving the noninvasive evaluation of testicular cancer, particularly when combined with clinical or molecular biomarkers. Future progress will depend on standardized imaging protocols, harmonized radiomics pipelines, and multicenter prospective validation. With continued methodological refinement and clinical integration, radiomics may support more precise risk stratification and reduce unnecessary interventions in testicular cancer. Full article
(This article belongs to the Special Issue Medical Imaging in the Detection of Urological Malignancies)

14 pages, 817 KB  
Article
Deep Learning-Based Segmentation of the Ulnar Nerve in Ultrasound Images
by Matthew Bailey Webster, Ko Eun Kim, Yong Jae Na, Joonnyong Lee and Beom Suk Kim
Medicina 2026, 62(1), 113; https://doi.org/10.3390/medicina62010113 - 5 Jan 2026
Abstract
Background and Objectives: We evaluate deep learning-based segmentation methods for detecting the ulnar nerve in ultrasound (US) images, leveraging the first-ever large US dataset of the ulnar nerve. We compare several widely used segmentation models, analyze their performance, and evaluate several common data augmentation techniques for US imaging. Materials and Methods: Our analysis is conducted on a large dataset of 4789 US images from 545 patients, with expert-annotated ground-truth segmentations of the ulnar nerve, and uses six segmentation models with several backbone architectures. Further, we analyze the statistical significance of five common data augmentation techniques on segmentation performance: flipping, rotation, shearing, contrast and brightness adjustments, and resizing. Results: In this study, the shear, rotate, and resize augmentations consistently improved segmentation performance across multiple runs, with p-values < 0.05 in a paired t-test relative to the no-augmentation baseline. Furthermore, we showed that newer architectures do not provide any metric improvements over traditional U-Net models, which achieved a Dice score of 0.88 and an IoU of 0.81. Conclusions: Through our systematic analysis of segmentation models and data augmentation strategies, we provide key insights into optimizing deep learning approaches for ulnar nerve segmentation and other US-based nerve segmentation tasks. Full article
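
The augmentation analysis compares per-run Dice scores with and without each transform using a paired t-test. The study's pipeline is not reproduced here; the sketch below shows, under stated assumptions, the kind of torchvision transforms described (flip, rotation, shear, brightness/contrast, resize) and how a paired t-test is run with SciPy on hypothetical per-run scores. All parameter values and Dice numbers are placeholders.

```python
import numpy as np
from scipy.stats import ttest_rel
from torchvision import transforms

# Geometric and photometric augmentations of the kind evaluated in the study
# (parameter ranges here are illustrative placeholders, not the paper's settings).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=15, shear=10),          # rotation + shear
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # brightness/contrast adjustment
    transforms.Resize((512, 512)),                          # resizing
])

# Paired t-test: Dice scores from repeated runs with vs. without an augmentation.
dice_baseline = np.array([0.870, 0.865, 0.872, 0.868, 0.871])    # hypothetical runs
dice_with_shear = np.array([0.878, 0.874, 0.880, 0.876, 0.879])  # hypothetical runs
t_stat, p_value = ttest_rel(dice_with_shear, dice_baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would mirror the reported significance
```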

23 pages, 2023 KB  
Review
Enhanced Imaging of Ocular Surface Lesions
by Wisam O. Najdawi, William R. Herskowitz, Diego E. Alba, Omar Badla, Pragat J. Muthu, Anat Galor and Carol L. Karp
J. Clin. Med. 2026, 15(1), 289; https://doi.org/10.3390/jcm15010289 - 30 Dec 2025
Abstract
Ocular surface lesions represent a diverse group of pathologies which may be challenging to diagnose clinically. Anterior segment imaging—including anterior segment optical coherence tomography (AS-OCT), optical coherence tomography angiography (OCTA), ultrasound biomicroscopy (UBM), and in vivo confocal microscopy (IVCM)—provides valuable adjunct information for the diagnosis, management, and monitoring of these lesions. The present review aims to provide an update on the principles, current clinical applications, advantages, limitations, and recent advancements in the imaging modalities used for the evaluation of ocular surface lesions. Notable recent advancements include the application of artificial intelligence in the interpretation of AS-OCT, intraoperative use of AS-OCT, the development of three-dimensional UBM, and expanded applications of each modality for a variety of ocular surface lesions. Full article

26 pages, 6899 KB  
Article
When RNN Meets CNN and ViT: The Development of a Hybrid U-Net for Medical Image Segmentation
by Ziru Wang and Ziyang Wang
Fractal Fract. 2026, 10(1), 18; https://doi.org/10.3390/fractalfract10010018 - 28 Dec 2025
Abstract
Deep learning for semantic segmentation has made significant advances in recent years, achieving state-of-the-art performance. Medical image segmentation, as a key component of healthcare systems, plays a vital role in the diagnosis and treatment planning of diseases. Due to the fractal and scale-invariant nature of biological structures, effective medical image segmentation requires models capable of capturing hierarchical and self-similar representations across multiple spatial scales. In this paper, a Recurrent Neural Network (RNN) is explored within the Convolutional Neural Network (CNN) and Vision Transformer (ViT)-based hybrid U-shape network, named RCV-UNet. First, the ViT-based layer was developed in the bottleneck to effectively capture the global context of an image and establish long-range dependencies through the self-attention mechanism. Second, recurrent residual convolutional blocks (RRCBs) were introduced in both the encoder and decoder to enhance the ability to capture local features and preserve fine details. Third, by integrating the global feature extraction capability of ViT with the local feature enhancement strength of RRCBs, RCV-UNet achieved promising global consistency and boundary refinement, addressing key challenges in medical image segmentation. From a fractal–fractional perspective, the multi-scale encoder–decoder hierarchy and attention-driven aggregation in RCV-UNet naturally accommodate fractal-like, scale-invariant regularity, while the recurrent and residual connections approximate fractional-order dynamics in feature propagation, enabling continuous and memory-aware representation learning. The proposed RCV-UNet was evaluated on four different modalities of images, including CT, MRI, Dermoscopy, and ultrasound, using the Synapse, ACDC, ISIC 2018, and BUSI datasets. Experimental results demonstrate that RCV-UNet outperforms other popular baseline methods, achieving strong performance across different segmentation tasks. The code of the proposed method will be made publicly available. Full article
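
Recurrent residual convolutional blocks of the kind named here are familiar from R2U-Net-style designs. The sketch below is a generic PyTorch interpretation under that assumption (a recurrent convolution applied a fixed number of times, wrapped in a residual connection), not the released RCV-UNet code; the block names and the recurrence depth t are illustrative.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Applies the same convolution t times, feeding back the previous activation."""
    def __init__(self, channels: int, t: int = 2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)  # recurrent refinement of the feature map
        return out

class RRCB(nn.Module):
    """Recurrent residual convolutional block: 1x1 projection plus a residual over two recurrent convs."""
    def __init__(self, in_channels: int, out_channels: int, t: int = 2):
        super().__init__()
        self.project = nn.Conv2d(in_channels, out_channels, 1)
        self.body = nn.Sequential(RecurrentConv(out_channels, t), RecurrentConv(out_channels, t))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.project(x)
        return x + self.body(x)  # residual connection

print(RRCB(1, 64)(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```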

25 pages, 633 KB  
Review
Diagnosis and Surgical Management for Advanced Pancreatic Cancer Requiring Vascular Resection
by Symeou Solonas, Lolis D. Evangelos and Glantzounis K. Georgios
Diagnostics 2026, 16(1), 102; https://doi.org/10.3390/diagnostics16010102 - 28 Dec 2025
Abstract
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most aggressive malignancies, with overall survival outcomes that have improved only modestly in recent years. Careful preoperative evaluation is essential for defining resectability and planning surgery. Modern imaging modalities, including high-resolution, contrast-enhanced CT, MRI and endoscopic ultrasound, provide a detailed assessment of vascular involvement and allow accurate staging according to various international criteria and consensus statements. In borderline and locally advanced cases, neoadjuvant therapy can aid in downsizing the tumor and increasing the likelihood of achieving negative margin resection (R0), offering long-term survival along with quality of life. When vascular invasion limits resectability, venous resection and reconstruction may permit an R0 resection in patients with borderline resectable disease that is both technically operable and physiologically tolerable for the patient. Arterial resection, however, remains controversial and is rarely justified because of its limited perioperative and survival benefits. Arterial divestment has emerged as an interesting alternative, allowing tumor clearance while avoiding full arterial reconstruction. Vascular reconstructions can be achieved through venorrhaphy, end-to-end anastomosis, or segmental replacement using either autologous or synthetic grafts. With the advances in neoadjuvant treatment, the appropriate selection of candidates for vascular resection significantly increases the resectability rate, offering long-term survival along with satisfactory quality of life. This review examines in detail the best strategies for the diagnosis and surgical management of patients with borderline resectable and locally advanced pancreatic cancer requiring vascular resection. Full article
(This article belongs to the Special Issue Current Diagnosis and Treatment in Surgical Oncology)

18 pages, 2081 KB  
Article
Breast Ultrasound Image Segmentation Integrating Mamba-CNN and Feature Interaction
by Guoliang Yang, Yuyu Zhang and Hao Yang
Sensors 2026, 26(1), 105; https://doi.org/10.3390/s26010105 - 23 Dec 2025
Abstract
The large scale and shape variation in breast lesions make their segmentation extremely challenging. A breast ultrasound image segmentation model integrating Mamba-CNN and feature interaction is proposed for breast ultrasound images with a large amount of speckle noise and multiple artifacts. The model first uses the visual state space model (VSS) as an encoder for feature extraction to better capture its long-range dependencies. Second, a hybrid attention enhancement mechanism (HAEM) is designed at the bottleneck between the encoder and the decoder to provide fine-grained control of the feature map in both the channel and spatial dimensions, so that the network captures key features and regions more comprehensively. The decoder uses transposed convolution to upsample the feature map, gradually increasing the resolution and recovering its spatial information. Finally, the cross-fusion module (CFM) is constructed to simultaneously focus on the spatial information of the shallow feature map as well as the deep semantic information, which effectively reduces the interference of noise and artifacts. Experiments are carried out on BUSI and UDIAT datasets, and the Dice similarity coefficient and HD95 indexes reach 76.04% and 20.28 mm, respectively, which show that the algorithm can effectively solve the problems of noise and artifacts in ultrasound image segmentation, and the segmentation performance is improved compared with the existing algorithms. Full article
(This article belongs to the Section Sensing and Imaging)
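
The two reported metrics, the Dice coefficient and HD95 (the 95th-percentile Hausdorff distance), can be computed directly from a pair of binary masks. The authors' evaluation code is not given, so the following NumPy/SciPy sketch shows one common way to do it (boundary extraction by binary erosion, nearest-neighbour distances via a KD-tree); the pixel-spacing handling and the toy masks are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def boundary_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of the boundary pixels of a binary mask."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary)

def hd95(pred: np.ndarray, target: np.ndarray, spacing_mm: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance between two binary masks."""
    p, t = boundary_points(pred), boundary_points(target)
    if len(p) == 0 or len(t) == 0:
        return float("nan")
    d_pt, _ = cKDTree(t).query(p)  # pred boundary -> nearest target boundary
    d_tp, _ = cKDTree(p).query(t)  # target boundary -> nearest pred boundary
    return float(np.percentile(np.concatenate([d_pt, d_tp]), 95)) * spacing_mm

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    return 2.0 * np.logical_and(pred, target).sum() / (pred.sum() + target.sum())

# Toy example: two overlapping squares standing in for predicted and reference lesions.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(f"Dice = {dice(a, b):.3f}, HD95 = {hd95(a, b):.2f} px")
```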

16 pages, 1560 KB  
Article
Performance Comparison of U-Net and Its Variants for Carotid Intima–Media Segmentation in Ultrasound Images
by Seungju Jeong, Minjeong Park, Sumin Jeong and Dong Chan Park
Diagnostics 2026, 16(1), 2; https://doi.org/10.3390/diagnostics16010002 - 19 Dec 2025
Abstract
Background/Objectives: This study systematically compared the performance of U-Net and variants for automatic analysis of carotid intima-media thickness (CIMT) in ultrasound images, focusing on segmentation accuracy and real-time efficiency. Methods: Ten models were trained and evaluated using a publicly available Carotid Ultrasound Boundary Study (CUBS) dataset (2176 images from 1088 subjects). Images were preprocessed using histogram-based smoothing and resized to a resolution of 256 × 256 pixels. Model training was conducted using identical hyperparameters (50 epochs, batch size 8, Adam optimizer with a learning rate of 1 × 10−4, and binary cross-entropy loss). Segmentation accuracy was assessed using Dice, Intersection over Union (IoU), Precision, Recall, and Accuracy metrics, while real-time performance was evaluated based on training/inference times and the model parameter counts. Results: All models achieved high accuracy, with Dice/IoU scores above 0.80/0.67. Attention U-Net achieved the highest segmentation accuracy, while UNeXt demonstrated the fastest training/inference speeds (approximately 420,000 parameters). Qualitatively, UNet++ produced smooth and natural boundaries, highlighting its strength in boundary reconstruction. Additionally, the relationship between the model parameter count and Dice performance was visualized to illustrate the tradeoff between accuracy and efficiency. Conclusions: This study provides a quantitative/qualitative evaluation of the accuracy, efficiency, and boundary reconstruction characteristics of U-Net-based models for CIMT segmentation, offering guidance for model selection according to clinical requirements (accuracy vs. real-time performance). Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
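
The training protocol is fully specified in the abstract (50 epochs, batch size 8, Adam with a learning rate of 1e-4, binary cross-entropy loss, 256 x 256 inputs). The sketch below wires those hyperparameters into a minimal PyTorch loop; the tiny placeholder model and random tensors stand in for the ten U-Net variants and the CUBS images, which are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data standing in for the U-Net variants and the CUBS images.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
images = torch.randn(32, 1, 256, 256)                     # preprocessed 256 x 256 inputs
masks = torch.randint(0, 2, (32, 1, 256, 256)).float()    # binary intima-media masks
train_loader = DataLoader(TensorDataset(images, masks), batch_size=8, shuffle=True)

# Hyperparameters as reported: Adam, lr 1e-4, binary cross-entropy loss, 50 epochs, batch size 8.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy applied to raw logits

for epoch in range(50):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```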

36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and the development of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to train pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using the Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for testing various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3 and GoogLeNet) were optimized for multiclass classification. DeepLabV3 outperformed other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on the training, validation and test sets, respectively. ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71) and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to other models. However, the proposed ResNet18:FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score (a measure combining precision and recall) of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and an accuracy of 0.83. Overall, we demonstrate a comprehensive approach that uses transfer learning models to automate the segmentation and multiclass classification of breast ultrasound images as benign, malignant or normal on an imbalanced ultrasound image dataset. Full article
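
The workflow combines pretrained encoder-decoder segmentation models with Optuna for hyperparameter search. The authors' code and implementation library are not stated, so the sketch below assumes the segmentation_models_pytorch package for the encoder-decoder combinations and leaves the training routine as a placeholder; the encoder names, search ranges and the Dice objective are illustrative assumptions.

```python
import optuna
import segmentation_models_pytorch as smp

DECODERS = {"unet": smp.Unet, "unetplusplus": smp.UnetPlusPlus, "deeplabv3": smp.DeepLabV3}

def train_and_validate(model, lr: float) -> float:
    """Placeholder: train on BUSI and return validation Dice (not implemented here)."""
    raise NotImplementedError

def objective(trial: optuna.Trial) -> float:
    # Search over encoder backbone, decoder architecture and learning rate.
    encoder = trial.suggest_categorical("encoder", ["resnet18", "efficientnet-b0", "mobilenet_v2"])
    decoder = trial.suggest_categorical("decoder", list(DECODERS))
    lr = trial.suggest_float("lr", 1e-5, 1e-3, log=True)
    model = DECODERS[decoder](
        encoder_name=encoder,
        encoder_weights="imagenet",  # transfer learning from ImageNet
        in_channels=1,               # grayscale ultrasound
        classes=1,                   # binary lesion mask
    )
    return train_and_validate(model, lr)  # Optuna maximizes the returned validation Dice

study = optuna.create_study(direction="maximize")
# study.optimize(objective, n_trials=50)  # commented out: requires the BUSI data pipeline
```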

16 pages, 700 KB  
Review
Artificial Intelligence in Thermal Ablation: Current Applications and Future Directions in Microwave Technologies
by Kealan Westby, Daniel Westby, Kevin McKevitt and Brian M. Moloney
Biomimetics 2025, 10(12), 818; https://doi.org/10.3390/biomimetics10120818 - 5 Dec 2025
Abstract
Artificial intelligence (AI) is increasingly shaping interventional oncology, with growing interest in its application across thermal ablation modalities such as radiofrequency ablation (RFA), cryoablation, high-intensity focused ultrasound (HIFU), and microwave ablation (MWA). This review characterises the current landscape of AI-enhanced thermal ablation, with particular emphasis on emerging opportunities within MWA technologies. We examine how AI-driven methods—convolutional neural networks, radiomics, and reinforcement learning—are being applied to optimise patient selection, automate image segmentation, predict treatment response, and support real-time procedural guidance. Comparative insights are provided across ablation modalities to contextualise the unique challenges and opportunities presented by microwave systems. Emphasis is placed on integrating AI into clinical workflows, ensuring safety, improving consistency, and advancing personalised therapy. Tables summarising AI methods and applications, a conceptual workflow figure, and a research gap analysis for MWA are included to guide future work. While existing applications remain largely investigational, the convergence of AI with advanced imaging and energy delivery holds significant promise for precision oncology. We conclude with a roadmap for research and clinical translation, highlighting the need for prospective validation, regulatory clarity, and interdisciplinary collaboration to support the adoption of AI-enabled thermal ablation into routine practice. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Biomedical Engineering: 2nd Edition)

24 pages, 3036 KB  
Article
MPG-SwinUMamba: High-Precision Segmentation and Automated Measurement of Eye Muscle Area in Live Sheep Based on Deep Learning
by Zhou Zhang, Yaojing Yue, Fuzhong Li, Leifeng Guo and Svitlana Pavlova
Animals 2025, 15(24), 3509; https://doi.org/10.3390/ani15243509 - 5 Dec 2025
Abstract
Accurate eye muscle area (EMA) assessment in live sheep is crucial for genetic breeding and production management within the meat sheep industry. However, the segmentation accuracy and reliability of existing automated methods are limited by challenges inherent to B-mode ultrasound images, such as low contrast and noise interference. To address these challenges, we present MPG-SwinUMamba, a novel deep learning-based segmentation network. This model uniquely combines the state-space model with a U-Net architecture. It also integrates an edge-enhancement multi-scale attention module (MSEE) and a pyramid attention refinement module (PARM) to improve the detection of indistinct boundaries and better capture global context. The global context aggregation decoder (GCAD) is employed to precisely reconstruct the segmentation mask, enabling automated measurement of the EMA. Compared to 12 other leading segmentation models, MPG-SwinUMamba achieved superior performance, with an intersection-over-union of 91.62% and a Dice similarity coefficient of 95.54%. Additionally, automated measurements show excellent agreement with expert manual assessments (correlation coefficient r = 0.9637), with a mean absolute percentage error of only 4.05%. This method offers a non-invasive, efficient, and objective evaluation of carcass performance in live sheep, with the potential to reduce measurement costs and enhance breeding efficiency. Full article
(This article belongs to the Section Animal System and Management)
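
Given a segmentation mask, the eye muscle area follows from the pixel count and the ultrasound pixel spacing, and the agreement statistics quoted above (Pearson r and mean absolute percentage error) are simple to compute. The sketch below illustrates this with NumPy/SciPy; the pixel spacing and the paired measurement arrays are made-up placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def area_cm2(mask: np.ndarray, pixel_spacing_mm: float) -> float:
    """Area of a binary mask given isotropic pixel spacing in millimetres."""
    return mask.astype(bool).sum() * (pixel_spacing_mm ** 2) / 100.0  # mm^2 -> cm^2

# Hypothetical paired measurements: expert manual EMA vs. automated EMA (cm^2).
manual = np.array([14.2, 15.8, 13.5, 16.1, 14.9, 15.2])
automated = np.array([14.6, 15.3, 13.9, 16.7, 14.4, 15.8])

r, _ = pearsonr(manual, automated)                         # agreement (correlation coefficient)
mape = np.mean(np.abs(automated - manual) / manual) * 100  # mean absolute percentage error
print(f"r = {r:.4f}, MAPE = {mape:.2f}%")
```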