Search Results (2,650)

Search Parameters:
Keywords = image segmentation technique

17 pages, 2004 KB  
Article
MRI-Based Bladder Cancer Staging via YOLOv11 Segmentation and Deep Learning Classification
by Phisit Katongtung, Kanokwatt Shiangjen, Watcharaporn Cholamjiak and Krittin Naravejsakul
Diseases 2026, 14(2), 45; https://doi.org/10.3390/diseases14020045 - 28 Jan 2026
Abstract
Accurate staging of bladder cancer is critical for guiding clinical management, particularly the distinction between non–muscle-invasive (T1) and muscle-invasive (T2–T4) disease. Although MRI offers superior soft-tissue contrast, image interpretation remains operator-dependent and subject to inter-observer variability. This study proposes an automated deep learning framework for MRI-based bladder cancer staging to support standardized radiological interpretation. A sequential AI-based pipeline was developed, integrating hybrid tumor segmentation using YOLOv11 for lesion detection and DeepLabV3 for boundary refinement, followed by three deep learning classifiers (VGG19, ResNet50, and Vision Transformer) for MRI-based stage prediction. A total of 416 T2-weighted MRI images with radiology-derived stage labels (T1–T4) were included, with data augmentation applied during training. Model performance was evaluated using accuracy, precision, recall, F1-score, and multi-class AUC. Performance uncertainty was characterized using patient-level bootstrap confidence intervals under a fixed training and evaluation pipeline. All evaluated models demonstrated high and broadly comparable discriminative performance for MRI-based bladder cancer staging within the present dataset, with high point estimates of accuracy and AUC, particularly for differentiating non–muscle-invasive from muscle-invasive disease. Calibration analysis characterized the probabilistic behavior of predicted stage probabilities under the current experimental setting. The proposed framework demonstrates the feasibility of automated MRI-based bladder cancer staging derived from radiological reference labels and supports the potential of deep learning for standardizing and reproducing MRI-based staging procedures. Rather than serving as an independent clinical decision-support system, the framework is intended as a methodological and workflow-oriented tool for automated staging consistency. Further validation using multi-center datasets, patient-level data splitting prior to augmentation, pathology-confirmed reference standards, and explainable AI techniques is required to establish generalizability and clinical relevance.
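
Since only the abstract is available here, the sketch below shows the general shape of such a segment-then-classify pipeline, assuming the Ultralytics YOLO API and a torchvision ResNet-50 classifier. The weight files, the four-stage (T1–T4) head, and the crop-based hand-off are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a sequential segmentation-then-classification pipeline
# (illustrative; not the paper's code). Assumes ultralytics and torchvision.
import torch
from torchvision import models, transforms
from ultralytics import YOLO
from PIL import Image

seg_model = YOLO("yolo11n-seg.pt")                 # lesion detection/segmentation
clf = models.resnet50(weights=None)
clf.fc = torch.nn.Linear(clf.fc.in_features, 4)    # hypothetical T1..T4 head
clf.load_state_dict(torch.load("stage_classifier.pt"))  # hypothetical checkpoint
clf.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

img = Image.open("bladder_t2w.png").convert("RGB")  # hypothetical T2-weighted slice
result = seg_model(img)[0]
if result.boxes is not None and len(result.boxes) > 0:
    # Crop the highest-confidence lesion region and classify its T stage.
    x1, y1, x2, y2 = result.boxes.xyxy[0].int().tolist()
    crop = img.crop((x1, y1, x2, y2))
    with torch.no_grad():
        logits = clf(prep(crop).unsqueeze(0))
    print("predicted stage:", ["T1", "T2", "T3", "T4"][logits.argmax(1).item()])
```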
21 pages, 1574 KB  
Article
Watershed Encoder–Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images
by Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi and Attipoe David Sena
Bioengineering 2026, 13(2), 154; https://doi.org/10.3390/bioengineering13020154 - 28 Jan 2026
Abstract
Recently, deep learning methods have seen major advancements and are preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment. Consequently, segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features in the histology images. These challenges include poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity. These challenges also limit the number of images available for analysis. The U-Net model was introduced and gained significant traction for its ability to produce high-accuracy results with very few input images. Many modifications of the U-Net architecture exist. Building on this line of work, this study proposes the watershed encoder–decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of supervised breast histology images via augmentation is introduced to increase the dataset size. The augmented dataset is further enhanced and segmented into the region of interest. Data enhancement methods such as thresholding, opening, dilation, and distance transform are used to highlight foreground and background pixels while removing unwanted parts from the image. Consequently, further segmentation via the connected component analysis method is used to combine image pixel components with similar intensity values and assign them their respective labeled binary masks. The watershed filling method is then applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). This resultant image information is sent to the WEDN model network for feature extraction and learning via training and testing. Residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), which is the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image–watershed mask pairs; the model was trained on 2400 images and tested on 600. The proposed method achieved 98.53% validation accuracy, a 96.98% validation Dice coefficient, and a 97.84% validation intersection over union (IoU) score.
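
The marker-preparation steps named in the abstract (thresholding, opening, dilation, distance transform, connected components, watershed filling) follow the standard OpenCV marker-based watershed recipe. A minimal sketch of that recipe, not the WEDN pipeline itself:

```python
# Standard OpenCV marker-based watershed (the pre-processing steps the
# abstract lists; not the authors' WEDN implementation).
import cv2
import numpy as np

img = cv2.imread("histology_patch.png")                  # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu threshold to separate foreground (nuclei) from background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opened, kernel, iterations=3)        # certain background

# Distance transform: pixels far from background are certain foreground.
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label connected components, then let watershed resolve touching boundaries.
n, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                                     # background = 1
markers[unknown == 255] = 0                               # unknown = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)                          # boundaries in red
```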
17 pages, 3814 KB  
Article
Advanced Digital Workflow for Lateral Orbitotomy in Orbital Dermoid Cysts: Integration of Point-of-Care Manufacturing and Intraoperative Navigation
by Gonzalo Ruiz-de-Leon, Manuel Tousidonis, Jose-Ignacio Salmeron, Ruben Perez-Mañanes, Sara Alvarez-Mokthari, Marta Benito-Anguita, Borja Gonzalez-Moure, Diego Fernandez-Acosta, Susana Gomez de los Infantes-Peña, Myriam Rodriguez-Rodriguez, Carlota Ortiz-Garcia, Ismael Nieva-Pascual, Pilar Cifuentes-Canorea, Jose-Luis Urcelay and Santiago Ochandiano
J. Clin. Med. 2026, 15(3), 937; https://doi.org/10.3390/jcm15030937 - 23 Jan 2026
Abstract
Background: Orbital dermoid cysts are common benign lesions; however, deep-seated or recurrent lesions near the orbital apex pose major surgical challenges due to their proximity to critical neurovascular structures. Lateral orbitotomy remains the reference approach, but accurate osteotomies and stable reconstruction can be difficult to achieve using conventional techniques. This study reports our initial experience using a fully digital, hospital-based point-of-care (POC) workflow to enhance precision and safety in complex orbital dermoid cyst surgery. Methods: We present a case series of three patients with orbital dermoid cysts treated at a tertiary center (2024–2025) using a comprehensive digital workflow. Preoperative assessment included CT and/or MRI followed by virtual surgical planning (VSP) with orbit–tumor segmentation and 3D modeling. Cutting guides and patient-specific implants (PSIs) were manufactured in-house under a certified hospital-based POC protocol. Surgical strategies were tailored to each lesion and included piezoelectric osteotomy, intraoperative navigation, intraoperative CT, and structured-light scanning when indicated. Results: Complete en bloc resection was achieved in all cases without capsular rupture or optic nerve injury. Intraoperative CT confirmed complete lesion removal and accurate PSI positioning and fitting. Structured-light scanning enabled radiation-free postoperative monitoring when used. All patients preserved full ocular motility, visual acuity, and facial symmetry, with no complications or recurrences during follow-up. Conclusions: The integration of VSP, in-house POC manufacturing, and image-guided surgery within a lateral orbitotomy approach provides a reproducible and fully integrated workflow. This strategy appears to improve surgical precision and safety while supporting optimal long-term functional and aesthetic outcomes in challenging orbital dermoid cyst cases.
45 pages, 2071 KB  
Systematic Review
Artificial Intelligence Techniques for Thyroid Cancer Classification: A Systematic Review
by Yanche Ari Kustiawan, Khairil Imran Ghauth, Sakina Ghauth, Liew Yew Toong and Sien Hui Tan
Mach. Learn. Knowl. Extr. 2026, 8(2), 27; https://doi.org/10.3390/make8020027 - 23 Jan 2026
Abstract
Artificial intelligence (AI), particularly machine learning and deep learning architectures, has been widely applied to support thyroid cancer diagnosis, but existing evidence on its performance and limitations remains scattered across techniques, tasks, and data types. This systematic review synthesizes recent work on knowledge extraction from heterogeneous imaging and clinical data for thyroid cancer diagnosis and detection published between 2021 and 2025. We searched eight major databases, applied predefined inclusion and exclusion criteria, and assessed study quality using the Newcastle–Ottawa Scale. A total of 150 primary studies were included and analyzed with respect to AI techniques, diagnostic tasks, imaging and non-imaging modalities, model generalization, explainable AI, and recommended future directions. We found that deep learning, particularly convolutional neural networks, U-Net variants, and transformer-based models, dominated recent work, mainly for ultrasound-based benign–malignant classification, nodule detection, and segmentation, while classical machine learning, ensembles, and advanced paradigms remained important in specific structured-data settings. Ultrasound was the primary modality, complemented by cytology, histopathology, cross-sectional imaging, molecular data, and multimodal combinations. Key limitations included diagnostic ambiguity, small and imbalanced datasets, limited external validation, gaps in model generalization, and the use of largely non-interpretable black-box models with only partial use of explainable AI techniques. This review provides a structured, machine learning-oriented evidence map that highlights opportunities for more robust representation learning, workflow-ready automation, and trustworthy AI systems for thyroid oncology.
(This article belongs to the Section Thematic Reviews)
23 pages, 13473 KB  
Article
Automatic Threshold Selection Guided by Maximizing Homologous Isomeric Similarity Under Unified Transformation Toward Unimodal Distribution
by Yaobin Zou, Wenli Yu and Qingqing Huang
Electronics 2026, 15(2), 451; https://doi.org/10.3390/electronics15020451 - 20 Jan 2026
Abstract
Traditional thresholding methods are often tailored to specific histogram patterns, making it difficult to achieve robust segmentation across diverse images exhibiting non-modal, unimodal, bimodal, or multimodal distributions. To address this limitation, this paper proposes an automatic thresholding method guided by maximizing homologous isomeric similarity under a unified transformation toward unimodal distribution. The primary objective is to establish a generalized selection criterion that functions independently of the input histogram’s pattern. The methodology employs bilateral filtering, non-maximum suppression, and Sobel operators to transform diverse histogram patterns into a unified, right-skewed unimodal distribution. Subsequently, the optimal threshold is determined by maximizing the normalized Renyi mutual information between the transformed edge image and binary contour images extracted at varying levels. Experimental validation on both synthetic and real-world images demonstrates that the proposed method offers greater adaptability and higher accuracy compared to representative thresholding and non-thresholding techniques. The results show a significant reduction in misclassification errors and improved correlation metrics, confirming the method’s effectiveness as a unified thresholding solution for images with non-modal, unimodal, bimodal, or multimodal histogram patterns.
(This article belongs to the Special Issue Image Processing and Pattern Recognition)
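
The general shape of such a criterion-driven threshold search, binarize at each candidate level, extract the contour image, score it against a Sobel edge map, keep the argmax, can be sketched as below. The correlation score here is a simple stand-in, not the paper's normalized Renyi mutual information.

```python
# Generic threshold search over a similarity criterion (sketch only; the
# correlation score stands in for the paper's Renyi mutual information).
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input

# Reference edge-strength map (Sobel magnitude, as in the method's front end).
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
edge = cv2.magnitude(gx, gy)

def contour_image(binary):
    # 1-pixel contours of the thresholded image (morphological gradient).
    k = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, k).astype(np.float32)

best_t, best_score = 0, -np.inf
for t in range(1, 255):
    binary = (gray > t).astype(np.uint8) * 255
    c = contour_image(binary)
    if c.std() == 0:
        continue
    # Stand-in similarity: normalized correlation between edges and contours.
    score = float(np.corrcoef(edge.ravel(), c.ravel())[0, 1])
    if score > best_score:
        best_t, best_score = t, score
print("selected threshold:", best_t)
```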
18 pages, 2295 KB  
Article
Automatic Retinal Nerve Fiber Segmentation and the Influence of Intersubject Variability in Ocular Parameters on the Mapping of Retinal Sites to the Pointwise Orientation Angles
by Diego Luján Villarreal and Adriana Leticia Vera-Tizatl
J. Imaging 2026, 12(1), 47; https://doi.org/10.3390/jimaging12010047 - 19 Jan 2026
Abstract
The current study investigates the influence of intersubject variability in ocular characteristics on the mapping of visual field (VF) sites to the pointwise directional angles in retinal nerve fiber layer (RNFL) bundle traces. In addition, the performance of the mapping of VF sites to the optic nerve head (ONH) was compared against ground-truth baselines. Fundus photographs of 546 eyes of 546 healthy subjects (with no history of ocular disease or diabetic retinopathy) were enhanced digitally and RNFL bundle traces were segmented based on the Personalized Estimated Segmentation (PES) algorithm’s core technique. A 24-2 VF grid pattern was overlaid onto the photographs in order to relate VF test points to intersecting RNFL bundles. The PES algorithm effectively traced RNFL bundles in fundus images, achieving an average accuracy of 97.6% relative to the Jansonius map through the application of 10th-order Bezier curves. The PES algorithm assembled an average of 4726 RNFL bundles per fundus image based on 4975 sampling points, obtaining a total of 2,580,505 RNFL bundles based on 2,716,321 sampling points. The influence of ocular parameters could be evaluated for 34 out of 52 VF locations. The ONH-fovea angle and the ONH position in relation to the fovea were the most prominent predictors for variations in the mapping of retinal locations to the pointwise directional angle (p < 0.001). The variation explained by the model (R2 value) ranges from 27.6% for visual field location 15 to 77.8% for location 22, with a mean of 56%. Significant individual variability was found in the mapping of VF sites to the ONH, with a mean standard deviation (95% limit) of 16.55° (median 17.68°) for 50 out of 52 VF locations, ranging from less than 1° to 44.05°. The mean entry angles differed from previous baselines by a range of less than 1° to 23.9° (average difference of 10.6° ± 5.53°), and an RMSE of 11.94.
(This article belongs to the Section Medical Imaging)
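
Evaluating the 10th-order Bezier curves the abstract uses to trace RNFL bundles amounts to repeated linear interpolation of 11 control points (de Casteljau's algorithm). A self-contained sketch with illustrative control points:

```python
# Evaluating an nth-order Bezier curve via de Casteljau's algorithm -- the
# kind of 10th-order curve used for RNFL bundle traces. Control points are
# illustrative, not from the paper.
import numpy as np

def bezier(control_points, ts):
    """Evaluate a Bezier curve at parameters ts in [0, 1].

    control_points: (n+1, 2) array -> curve of order n.
    """
    pts = np.asarray(control_points, dtype=float)
    out = []
    for t in ts:
        p = pts.copy()
        # Repeated linear interpolation; numerically stable even for n = 10.
        while len(p) > 1:
            p = (1 - t) * p[:-1] + t * p[1:]
        out.append(p[0])
    return np.array(out)

# 11 control points -> a 10th-order curve.
ctrl = np.random.default_rng(0).uniform(0, 100, size=(11, 2))
trace = bezier(ctrl, np.linspace(0, 1, 200))   # 200 sampled (x, y) positions
```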
14 pages, 3133 KB  
Article
Three-Dimensional Modeling of Full-Diameter Micro–Nano Digital Rock Core Based on CT Scanning
by Changyuan Xia, Jingfu Shan, Yueli Li, Guowen Liu, Huanshan Shi, Penghui Zhao and Zhixue Sun
Processes 2026, 14(2), 337; https://doi.org/10.3390/pr14020337 - 18 Jan 2026
Abstract
Characterizing tight reservoirs is challenging due to the complex pore structure and strong heterogeneity at various scales. Current digital rock physics often struggles to reconcile high-resolution imaging with representative sample sizes, and 3D digital cores are frequently used primarily as visualization tools rather than predictive, computable platforms. Thus, a clear methodological gap persists: high-resolution models typically lack macroscopic geological features, while existing 3D digital models are seldom leveraged for quantitative, predictive analysis. This study, based on a full-diameter core sample of a single lithology (gray-black shale), aims to bridge this gap by developing an integrated workflow to construct a high-fidelity, computable 3D model that connects the micro–nano to the macroscopic scale. The core was scanned using high-resolution X-ray computed tomography (CT) at 0.4 μm resolution. The raw CT images were processed through a dedicated pipeline to mitigate artifacts and noise, followed by segmentation using Otsu’s algorithm and region-growing techniques in Avizo 9.0 to isolate minerals, pores, and the matrix. The segmented model was converted into an unstructured tetrahedral finite element mesh within ANSYS 2024 Workbench, with quality control (aspect ratio ≤ 3; skewness ≤ 0.4), enabling mechanical property assignment and simulation. The digital core model was rigorously validated against physical laboratory measurements, showing excellent agreement with relative errors below 5% for key properties, including porosity (4.52% vs. 4.615%), permeability (0.0186 mD vs. 0.0192 mD), and elastic modulus (38.2 GPa vs. 39.5 GPa). Pore network analysis quantified the poor connectivity of the tight reservoir, revealing an average coordination number of 2.8 and a pore throat radius distribution of 0.05–0.32 μm. The presented workflow successfully creates a quantitatively validated “digital twin” of a full-diameter core. It provides a tangible solution to the scale-representativeness trade-off and transitions digital core analysis from a visualization tool to a computable platform for predicting key reservoir properties, such as permeability and elastic modulus, through numerical simulation, offering a robust technical means for the accurate evaluation of tight reservoirs.
(This article belongs to the Section Energy Systems)
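
The segmentation-then-validation step the abstract describes (Otsu thresholding of the CT volume, then a porosity check against the lab value) can be sketched as follows; the data path and volume are hypothetical, and the lab porosity is the abstract's reported 4.615%:

```python
# Sketch: Otsu-threshold a CT volume into pore vs. matrix and compare the
# computed porosity against the laboratory value (< 5% relative-error check).
import numpy as np
from skimage.filters import threshold_otsu

ct = np.load("core_ct.npy")            # hypothetical (z, y, x) grayscale volume
t = threshold_otsu(ct)
pores = ct < t                          # low-attenuation voxels = pore space

porosity = pores.mean()                 # pore voxels / total voxels
lab_porosity = 0.04615                  # laboratory measurement (4.615%)
rel_err = abs(porosity - lab_porosity) / lab_porosity
print(f"digital porosity = {porosity:.4%}, relative error = {rel_err:.2%}")
```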
14 pages, 1068 KB  
Systematic Review
Use of CAD/CAM Workflow and Patient-Specific Implants for Maxillary Reconstruction: A Systematic Review
by Diana D’Alpaos, Giovanni Badiali, Francesco Ceccariglia, Ali Nosrati and Achille Tarsitano
J. Clin. Med. 2026, 15(2), 647; https://doi.org/10.3390/jcm15020647 - 13 Jan 2026
Abstract
Background: Reconstruction of the maxilla and midface remains one of the most demanding challenges in craniofacial surgery, requiring precise planning and a clear understanding of defect geometry to achieve functional and esthetic restoration. Advances in computer-assisted surgery (CAS) and virtual surgical planning (VSP), based on 3D segmentation of radiologic imaging, have significantly improved the management of maxillary deformities, allowing for further knowledge of patient-specific information, including anatomy, pathology, surgical planning, and reconstructive issues. The integration of computer-aided design and manufacturing (CAD/CAM) and 3D printing has further transformed reconstruction through customized titanium meshes, implants, and surgical guides. Methods: This systematic review, conducted following PRISMA 2020 guidelines, synthesizes evidence from clinical studies on CAD/CAM-assisted reconstruction of maxillary and midfacial defects of congenital, acquired, or post-resection origin. It highlights the advantages and drawbacks of maxillary reconstruction with patient-specific implants (PSIs). Primary outcomes were accuracy of VSP reproduction, while secondary outcomes included esthetic results, function, and assessment of complications. Results: Of the 44 identified articles, 10 met inclusion criteria, with a time frame from April 2013 to July 2022. The outcomes of 24 treated patients are reported. CAD/CAM-guided techniques seemed to improve osteotomy accuracy, flap contouring, and implant adaptation. Conclusions: Although current data support the efficacy and safety of CAD/CAM-based approaches, limitations persist, including high costs, technological dependency, and variable long-term outcome data. This article critically evaluates the role of PSIs in maxillofacial reconstruction and outlines future directions for their standardization and broader adoption in clinical practice.
(This article belongs to the Special Issue Innovations in Head and Neck Surgery)
28 pages, 3553 KB  
Article
GCN-Embedding Swin–Unet for Forest Remote Sensing Image Semantic Segmentation
by Pingbo Liu, Gui Zhang and Jianzhong Li
Remote Sens. 2026, 18(2), 242; https://doi.org/10.3390/rs18020242 - 12 Jan 2026
Abstract
Forest resources are among the most important ecosystems on Earth. The semantic segmentation and accurate positioning of ground objects in forest remote sensing (RS) imagery are crucial to the emergency treatment of forest natural disasters, especially forest fires. Currently, most existing methods for image semantic segmentation are built upon convolutional neural networks (CNNs). Nevertheless, these techniques face difficulties in directly accessing global contextual information and accurately detecting geometric transformations within the image’s target regions. This limitation stems from the inherent locality of convolution operations, which are restricted to processing data structured in Euclidean space and confined to square-shaped regions. Inspired by the graph convolution network (GCN) with robust capabilities in processing irregular and complex targets, as well as Swin Transformers renowned for exceptional global context modeling, we present a hybrid semantic segmentation framework for forest RS imagery termed GSwin–Unet. This framework embeds the GCN model into the Swin–Unet architecture to address the issue of low semantic segmentation accuracy of RS imagery in forest scenarios, which is caused by the complex texture features, diverse shapes, and unclear boundaries of land objects. GSwin–Unet features a parallel dual-encoder architecture of GCN and Swin Transformer. First, we integrate the Zero-DCE (Zero-Reference Deep Curve Estimation) algorithm into GSwin–Unet to enhance forest RS image feature representation. Second, a feature aggregation module (FAM) is proposed to bridge the dual encoders by fusing GCN-derived local aggregated features with Swin Transformer-extracted features. Our study demonstrates that, compared with the baseline models TransUnet, Swin–Unet, Unet, and DeepLab V3+, the GSwin–Unet achieves improvements of 7.07%, 5.12%, 8.94%, and 2.69% in the mean Intersection over Union (MIoU) and 3.19%, 1.72%, 4.3%, and 3.69% in the average F1 score (Ave.F1), respectively, on the RGB forest RS dataset. On the NIRGB forest RS dataset, the improvements in MIoU are 5.75%, 3.38%, 6.79%, and 2.44%, and the improvements in Ave.F1 are 4.02%, 2.38%, 4.72%, and 1.67%, respectively. Meanwhile, GSwin–Unet shows excellent adaptability on the selected GID dataset with high forest coverage, where the MIoU and Ave.F1 reach 72.92% and 84.3%, respectively.
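
The abstract's FAM bridges two encoders by fusing feature maps from different branches. A generic concat-and-project fusion block of that kind, in PyTorch, is sketched below; the channel sizes are illustrative and this is not the published FAM design:

```python
# Generic dual-branch feature fusion (illustrative stand-in for a FAM):
# merge GCN-derived features with Swin features at the same spatial size.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, gcn_ch, swin_ch, out_ch):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(gcn_ch + swin_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, gcn_feat, swin_feat):
        # Both inputs: (B, C, H, W) at the same resolution.
        return self.proj(torch.cat([gcn_feat, swin_feat], dim=1))

fuse = FusionBlock(gcn_ch=64, swin_ch=96, out_ch=128)
merged = fuse(torch.randn(2, 64, 56, 56), torch.randn(2, 96, 56, 56))
print(merged.shape)  # torch.Size([2, 128, 56, 56])
```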
23 pages, 1308 KB  
Article
MFA-Net: Multiscale Feature Attention Network for Medical Image Segmentation
by Jia Zhao, Han Tao, Song Liu, Meilin Li and Huilong Jin
Electronics 2026, 15(2), 330; https://doi.org/10.3390/electronics15020330 - 12 Jan 2026
Abstract
Medical image segmentation acts as a foundational element of medical image analysis. Yet its accuracy is frequently limited by the scale fluctuations of anatomical targets and the intricate contextual traits inherent in medical images—including vaguely defined structural boundaries and irregular shape distributions. To tackle these constraints, we design a multi-scale feature attention network (MFA-Net), customized specifically for thyroid nodule, skin lesion, and breast lesion segmentation tasks. This network framework integrates three core components: a Bidirectional Feature Pyramid Network (Bi-FPN), a Slim-neck structure, and the Convolutional Block Attention Module (CBAM). CBAM steers the model to prioritize boundary regions while filtering out irrelevant information, which in turn enhances segmentation precision. Bi-FPN facilitates more robust fusion of multi-scale features via iterative integration of top-down and bottom-up feature maps, supported by lateral and vertical connection pathways. The Slim-neck design is constructed to simplify the network’s architecture while effectively merging multi-scale representations of both target and background areas, thus enhancing the model’s overall performance. Validation across four public datasets covering thyroid ultrasound (TNUI-2021, TN-SCUI 2020), dermoscopy (ISIC 2016), and breast ultrasound (BUSI) shows that our method outperforms state-of-the-art segmentation approaches, achieving Dice similarity coefficients of 0.955, 0.971, 0.976, and 0.846, respectively. Additionally, the model maintains a compact parameter count of just 3.05 million and delivers an extremely fast inference latency of 1.9 milliseconds—metrics that significantly outperform those of current leading segmentation techniques. In summary, the proposed framework demonstrates strong performance in thyroid, skin, and breast lesion segmentation, delivering an optimal trade-off between high accuracy and computational efficiency.
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)
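
CBAM, one of MFA-Net's three components, is a well-documented module (Woo et al., 2018): channel attention from globally pooled features followed by spatial attention from channel-wise statistics. A standard re-implementation, not the authors' exact code:

```python
# CBAM: channel attention (shared MLP over avg- and max-pooled descriptors)
# followed by spatial attention (7x7 conv over channel-wise avg/max maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: (B, C, 1, 1) gate from global avg and max pooling.
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention: (B, 1, H, W) gate from channel-wise avg and max.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```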
18 pages, 3160 KB  
Article
Unleashing the Power of Dense Uncertainty Embeddings for More Efficient and Accurate Iris Recognition
by Haoyan Jiang, Siqi Guo, Yunlong Wang and Caiyong Wang
Electronics 2026, 15(2), 328; https://doi.org/10.3390/electronics15020328 - 12 Jan 2026
Abstract
Pixelwise dense representations, also known as iris templates or IrisCodes, are prevalent in the field of iris recognition. Almost all previous works of this kind are deterministic. To be specific, pixel-level representations are exclusively derived from certain point-by-point modeling, including filter responses, phase correlations, and ordinal relations. Moreover, the binary mask indicating valid iris regions is solely determined by a fixed threshold or the output of standalone segmentation and localization algorithms. Uncertainty in acquisition factors in the process of iris imagery formation is not considered. In this paper, we propose a simple yet effective plug-and-play building block termed dual dense uncertainty embedding (D2UE), which can be seamlessly incorporated into deep learning (DL) frameworks that extract dense representations for iris recognition. D2UE has two pathways wherein both take dense feature maps of the backbone network as input. One pathway of D2UE predicts a variance-scaling map (VSM) and then applies it to an adaptive threshold-masking operation on the iris image. The dynamic threshold for each pixel in this manner is dependent on not only the intensity distribution of the iris image but also each pixel’s low-level uncertainty. The other pathway of D2UE adopts an over-parameterization technique and extracts uncertainty-embedded dense representations (UEDRs) by modeling each pixel’s contextual uncertainty. Extensive experiments on several iris datasets demonstrate that recognition performance under both within-database and cross-database settings can be significantly improved by incorporating D2UE into the baseline method. By integrating D2UE into various deep learning frameworks and evaluating their performance across multiple datasets, the results demonstrate that D2UE can be seamlessly incorporated into diverse architectures and can significantly enhance their recognition capabilities. D2UE incurs only slight computational overhead while surpassing several SOTA methods that use a large backbone network and a much larger training budget.
(This article belongs to the Special Issue Biometric Recognition: Latest Advances and Prospects, 2nd Edition)
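
The adaptive threshold-masking idea, a per-pixel threshold that scales a global intensity statistic by a predicted variance-scaling map, can be illustrated with a tiny sketch. Everything here is illustrative; in the paper the VSM comes from a learned network head, not random data:

```python
# Sketch of uncertainty-aware adaptive threshold masking (illustrative only).
import numpy as np

def adaptive_mask(iris, vsm, base_quantile=0.5):
    """iris: (H, W) normalized image; vsm: (H, W) positive scaling map."""
    base = np.quantile(iris, base_quantile)   # global intensity statistic
    per_pixel_threshold = base * vsm          # uncertainty-scaled threshold
    return iris > per_pixel_threshold         # True = valid iris pixel

rng = np.random.default_rng(1)
iris = rng.uniform(0, 1, (64, 512))           # hypothetical unrolled iris
vsm = 1.0 + 0.2 * rng.standard_normal((64, 512)).clip(-1, 1)
mask = adaptive_mask(iris, vsm)
print("valid-pixel fraction:", mask.mean())
```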
31 pages, 3167 KB  
Article
A Blockchain-Based Framework for Secure Healthcare Data Transfer and Disease Diagnosis Using FHM C-Means and LCK-CMS Neural Network
by Obada Al-Khatib, Ghalia Nassreddine, Amal El Arid, Abeer Elkhouly and Mohamad Nassereddine
Sci 2026, 8(1), 13; https://doi.org/10.3390/sci8010013 - 9 Jan 2026
Abstract
IoT-based blockchain technology has improved the healthcare system to ensure the privacy and security of healthcare data. A Blockchain Bridge (BB) is a tool that enables multiple blockchain networks to communicate with each other. The existing approach combining the classical and quantum blockchain models failed to secure the data transmission during cross-chain communication. Thus, this study proposes a new BB verification for secure healthcare data transfer. Additionally, a brain tumor analysis framework is developed based on segmentation and neural networks. After the patient’s registration on the blockchain network, Brain Magnetic Resonance Imaging (MRI) data is encrypted using Hash-Keyed Quantum Cryptography and verified using a Peer-to-Peer Exchange model. The Brain MRI is preprocessed for brain tumor detection using the Fuzzy HaMan C-Means (FHMCM) segmentation technique. The features are extracted from the segmented image and classified using the LeCun Kaiming-based Convolutional ModSwish Neural Network (LCK-CMSNN) classifier. Subsequently, the brain tumor diagnosis report is securely transferred to the patient via a smart contract. The proposed model verified BB with a Verification Time (VT) of 12,541 ms, secured the input with a Security level (SL) of 98.23%, and classified the brain tumor with 99.15% accuracy, thus showing better performance than the existing models.
(This article belongs to the Section Computer Sciences, Mathematics and AI)
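
The FHMCM segmentation builds on fuzzy c-means clustering. For reference, a standard fuzzy c-means in NumPy is sketched below (the base algorithm only, not the proposed FHMCM variant; the intensity data is synthetic):

```python
# Standard fuzzy c-means (the family the paper's FHMCM segmentation extends).
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, iters=50, seed=0):
    """x: (N, D) samples -> (centers (c, D), memberships (N, c))."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))           # rows sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)[:, None]     # fuzzy-weighted means
        d = np.linalg.norm(x[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = d[:, :, None] / d[:, None, :]            # (N, c, c)
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, u

# Cluster synthetic MRI voxel intensities into 3 tissue classes.
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(mu, 10, 1000) for mu in (60, 120, 200)])
centers, u = fuzzy_c_means(voxels.reshape(-1, 1), c=3)
labels = u.argmax(axis=1)                                # hard segmentation
```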
40 pages, 16360 KB  
Review
Artificial Intelligence Meets Nail Diagnostics: Emerging Image-Based Sensing Platforms for Non-Invasive Disease Detection
by Tejrao Panjabrao Marode, Vikas K. Bhangdiya, Shon Nemane, Dhiraj Tulaskar, Vaishnavi M. Sarad, K. Sankar, Sonam Chopade, Ankita Avthankar, Manish Bhaiyya and Madhusudan B. Kulkarni
Bioengineering 2026, 13(1), 75; https://doi.org/10.3390/bioengineering13010075 - 8 Jan 2026
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming medical diagnostics, but the human nail, an easily accessible and information-rich biological substrate, remains underexploited in digital health. Nail pathologies serve as easily observed, non-invasive biomarkers of disease, including systemic conditions such as anemia and diabetes as well as psoriasis, melanoma, and fungal infections. This review presents the first broad synthesis of AI/ML-based image analysis of nail lesions for diagnostic purposes. Where dermatological reviews to date have been wide-ranging in scope, this review focuses specifically on nail-related diagnosis and screening. We present the imaging modalities involved (smartphone imaging, dermoscopy, optical coherence tomography), together with image-processing techniques (color correction, segmentation, region-of-interest cropping) and models ranging from classical methods to deep learning, with annotated descriptions of each. We also describe AI applications for specific diseases and analytically discuss real-world impediments to clinical application, including scarcity of data, variation in skin type, annotation errors, and other barriers to clinical adoption. Emerging solutions are highlighted, including explainable AI (XAI), federated learning, and smartphone-based diagnostic platforms. Bridging clinical dermatology, artificial intelligence, and mobile health, this review consolidates existing knowledge and charts a path toward scalable, equitable, and trustworthy nail-based diagnostic techniques. Our findings advocate for interdisciplinary innovation to bring AI-enabled nail analysis from lab prototypes to routine healthcare and global screening initiatives.
(This article belongs to the Special Issue Bioengineering in a Generative AI World)
33 pages, 4122 KB  
Article
Empirical Evaluation of UNet for Segmentation of Applicable Surfaces for Seismic Sensor Installation
by Mikhail Uzdiaev, Marina Astapova, Andrey Ronzhin and Aleksandra Figurek
J. Imaging 2026, 12(1), 34; https://doi.org/10.3390/jimaging12010034 - 8 Jan 2026
Abstract
The deployment of wireless seismic nodal systems necessitates the efficient identification of optimal locations for sensor installation, considering factors such as ground stability and the absence of interference. Semantic segmentation of satellite imagery has advanced significantly, but its application to this specific task remains unexplored. This work presents a baseline empirical evaluation of the U-Net architecture for the semantic segmentation of surfaces applicable for seismic sensor installation. We utilize a novel dataset of Sentinel-2 multispectral images, specifically labeled for this purpose. The study investigates the impact of pretrained encoders (EfficientNetB2, Cross-Stage Partial Darknet53—CSPDarknet53, and Multi-Axis Vision Transformer—MAxViT), different combinations of Sentinel-2 spectral bands (Red, Green, Blue (RGB), RGB+Near Infrared (NIR), 10-bands with 10 and 20 m/pix spatial resolution, full 13-band), and a technique for improving small object segmentation by modifying the input convolutional layer stride. Experimental results demonstrate that the CSPDarknet53 encoder generally outperforms the others (IoU = 0.534, Precision = 0.716, Recall = 0.635). The combination of RGB and Near-Infrared bands (10 m/pixel resolution) yielded the most robust performance across most configurations. Reducing the input stride from 2 to 1 proved beneficial for segmenting small linear objects like roads. The findings establish a baseline for this novel task and provide practical insights for optimizing deep learning models in the context of automated seismic nodal network installation planning.
(This article belongs to the Special Issue Image Segmentation: Trends and Challenges)
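
The small-object tweak the abstract reports, reducing the input convolution's stride from 2 to 1 so early feature maps keep full resolution, is easy to demonstrate. The sketch below shows it on a torchvision ResNet stem for illustration; the study applies the same idea inside its U-Net encoders:

```python
# Stride tweak for small-object segmentation: change the first conv's stride
# from 2 to 1 so early feature maps keep full spatial resolution.
import torch
from torchvision.models import resnet50

enc = resnet50(weights=None)
x = torch.randn(1, 3, 256, 256)
print(enc.conv1(x).shape)        # stride 2 -> torch.Size([1, 64, 128, 128])

enc.conv1.stride = (1, 1)        # keep resolution for small linear objects
print(enc.conv1(x).shape)        # stride 1 -> torch.Size([1, 64, 256, 256])
```

Note that halving the stem's downsampling doubles every subsequent feature-map size, so decoder and skip-connection shapes must be adjusted accordingly.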
27 pages, 712 KB  
Review
Segmentation and Classification of Lung Cancer Images Using Deep Learning
by Xiaoli Yang, Angchao Duan, Ziyan Jiang, Xiao Li, Chenchen Wang, Jiawen Wang and Jiayi Zhou
Appl. Sci. 2026, 16(2), 628; https://doi.org/10.3390/app16020628 - 7 Jan 2026
Abstract
Lung cancer ranks among the world’s most prevalent and deadly diseases. Early detection is crucial for improving patient survival rates. Computed tomography (CT) is a common method for lung cancer screening and diagnosis. With the advancement of computer-aided diagnosis (CAD) systems, deep learning (DL) technologies have been extensively explored to aid in interpreting CT images for lung cancer identification. Therefore, this review aims to comprehensively examine DL techniques developed for lung cancer screening and diagnosis. It explores various datasets that play a crucial role in lung cancer CT image segmentation and classification tasks, analyzing their differences in aspects such as scale. Next, various evaluation metrics for measuring model performance are discussed. The segmentation section details convolutional neural network-based (CNN-based) segmentation methods, segmentation approaches using U-shaped network (U-Net) architectures, and the application and improvements of Transformer models in this domain. The classification section covers CNN-based classification methods, classification methods incorporating attention mechanisms, Transformer-based classification methods, and ensemble learning approaches. Finally, the paper summarizes the development of segmentation and classification techniques for lung cancer CT images, identifies current challenges, and outlines future research directions in areas such as dataset annotation, multimodal dataset construction, multi-model fusion, and model interpretability.
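
For reference, the two segmentation metrics most commonly reported across the surveyed work, the Dice coefficient and IoU, have the following minimal definitions on binary masks:

```python
# Reference implementations of Dice and IoU for binary segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), bool);   gt[15:45, 15:45] = True
print(f"Dice = {dice(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```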