Search Results (857)

Search Parameters:
Keywords = deep tissue imaging

35 pages, 6554 KB  
Article
Syncretic Grad-CAM Integrated ViT-CNN Hybrids with Inherent Explainability for Early Thyroid Cancer Diagnosis from Ultrasound
by Ahmed Y. Alhafdhi, Gibrael Abosamra and Abdulrhman M. Alshareef
Diagnostics 2026, 16(7), 999; https://doi.org/10.3390/diagnostics16070999 - 26 Mar 2026
Abstract
Background/Objectives: Accurate detection of thyroid cancer using ultrasound remains a challenge, as malignant nodules can be microscopic and heterogeneous, easily confused with point clusters and borderline-featured tissues. Current studies in deep learning demonstrate good performance with convolutional neural networks (CNNs) and clustering; however, many approaches focus on local tissue and provide limited, non-quantitative interpretation, reducing clinical confidence. This study proposes an integrated CNN–transformer framework that fuses local features and global relational context during learning rather than through delayed integration. Methods: The proposed framework integrates enhanced convolutional feature encoders (DenseNet169 and VGG19) with an enhanced vision transformer (ViT-E), enabling simultaneous learning of local feature representations and global relational context. This design allows feature fusion during the learning stage instead of delayed integration, aiming to improve diagnostic performance and interpretability in thyroid ultrasound image analysis. Results: The best-performing model, ViT-E–DenseNet169, achieved 98.5% accuracy, 98.9% sensitivity, 99.15% specificity, and 97.35% AUC, surpassing a strong baseline hybrid model (CNN–XGBoost/ANN) and existing systems. A second contribution is improved interpretability, moving from mere illustration to validation. Gradient-weighted class activation mapping (Grad-CAM) maps demonstrated distinct and clinically understandable concentration patterns across various thyroid cancers: precise intralesional concentration for high-confidence malignancies (PTC = 0.968), edge/interface concentration for capsule-risk patterns (PTC = 0.957), and broader-field activation consistent with infiltration concerns (PTC = 0.984), while benign scans showed low and diffuse activation (PTC = 0.002). Spatial audits reinforced this behavior (IoU/PAP: 0.72/91%, 0.65/78%, 0.58/62%). Conclusions: The integrated ViT-E–DenseNet169 framework provides highly accurate thyroid cancer detection while offering clinically meaningful interpretability through Grad-CAM-based spatial validation, supporting improved confidence in AI-assisted ultrasound diagnosis.
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
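
Grad-CAM itself is a standard, well-documented computation, so a brief sketch may help connect the reported PTC and IoU audits to what the maps actually are. The sketch below (PyTorch) assumes a plain torchvision DenseNet169 and hooks its last dense block; the target layer, input size, and class choice are illustrative assumptions, not the authors' configuration.

```python
# Hedged Grad-CAM sketch over a torchvision DenseNet169 encoder.
# The hooked layer, input size, and class choice are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import densenet169

model = densenet169(weights=None).eval()
feats, grads = {}, {}

layer = model.features.denseblock4  # assumed CAM target layer
layer.register_forward_hook(lambda m, i, o: feats.update(maps=o.detach()))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(maps=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # stand-in preprocessed ultrasound frame
logits = model(x)
logits[0, logits.argmax()].backward()  # backprop the top-class score

w = grads["maps"].mean(dim=(2, 3), keepdim=True)          # pooled gradients as channel weights
cam = F.relu((w * feats["maps"]).sum(dim=1))              # weighted sum of activation maps
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
heatmap = F.interpolate(cam.unsqueeze(0), size=x.shape[-2:], mode="bilinear")
```

Overlaying `heatmap` on the input frame reproduces the kind of concentration maps audited above.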

21 pages, 38078 KB  
Article
Development and Evaluation of a Deep Learning Model for Ovarian Cancer Histotype Classification Using Whole-Slide Imaging
by Dagoberto Pulido and Nathalia Arias-Mendoza
J. Imaging 2026, 12(4), 144; https://doi.org/10.3390/jimaging12040144 - 25 Mar 2026
Viewed by 83
Abstract
The histopathological classification of ovarian carcinoma is fundamental for patient management. While microscopic evaluation by pathologists is the current diagnostic standard, it is known to be subject to interobserver variability, which can affect consistency in treatment decisions. This study addresses this clinical need by developing and validating a deep learning-based diagnostic support tool designed to enhance the objectivity and reproducibility of this classification. In this work, we address a key challenge in computational pathology, the tendency of attention mechanisms to overfit by concentrating on limited features, by systematically evaluating a direct regularization method within multiple instance learning (MIL) models. The models were trained and validated using 10-fold cross-validation on a public training set of 538 whole-slide images and further tested on an independent public dataset for the more challenging task of molecular subtype classification. We utilized features from a foundation model pre-trained on histopathology data to represent tissue morphology. Our findings demonstrate that directly regularizing the attention mechanism with a stochastic approach provides a statistically significant improvement in accuracy and generalization, highlighting its value as a robust technique for mitigating overfitting in this clinical task. In direct contrast to the reported variability in manual assessment, our final model achieved high consistency and accuracy, with a balanced accuracy of 0.854 and a Cohen’s Kappa of 0.791. The model also demonstrated strong generalization on the molecular classification task. Its attention mechanism provides visual heatmaps for pathologist review, fostering interpretability and trust. We have developed a highly accurate and generalizable artificial intelligence tool that directly addresses the challenge of interobserver variability in ovarian cancer classification. Its performance highlights the potential for artificial intelligence to serve as a decision support system, standardizing histopathological assessment.
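
The abstract describes stochastic regularization applied directly to the attention mechanism of a MIL model. A minimal sketch of one plausible reading, dropout on the attention logits of a gated-attention pooling head (PyTorch), follows; the embedding size, five-class head, and dropout rate are assumptions.

```python
# Minimal gated-attention MIL head with dropout on the attention logits;
# dimensions, class count, and dropout rate are assumptions.
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256, n_classes=5, attn_drop=0.25):
        super().__init__()
        self.u = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh())
        self.v = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.w = nn.Linear(hid_dim, 1)
        self.drop = nn.Dropout(attn_drop)   # stochastic regularizer acting on attention
        self.head = nn.Linear(in_dim, n_classes)

    def forward(self, bag):                 # bag: (n_patches, in_dim) for one slide
        scores = self.w(self.u(bag) * self.v(bag))   # (n_patches, 1) attention logits
        scores = self.drop(scores)                   # randomly zero logits during training
        attn = torch.softmax(scores, dim=0)
        slide = (attn * bag).sum(dim=0)              # attention-weighted slide embedding
        return self.head(slide), attn.squeeze(-1)    # class logits + patch heatmap weights

bag = torch.randn(500, 1024)                # e.g., 500 patch features from a frozen encoder
logits, attn = GatedAttentionMIL()(bag)
```

The returned per-patch weights are what would be rendered as the review heatmaps mentioned above.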

12 pages, 1895 KB  
Review
Artificial Intelligence CT Texture Radiomics for Outcome Prediction After EVAR: A Narrative Review
by Chiara Zanon, Giovanni Alfonso Chiariello, Tommaso D’Angelo and Emilio Quaia
Diagnostics 2026, 16(7), 964; https://doi.org/10.3390/diagnostics16070964 - 24 Mar 2026
Viewed by 146
Abstract
Background: Endovascular aneurysm repair (EVAR) requires lifelong imaging surveillance because endoleaks, aneurysm sac expansion, and severe adverse events occur in up to one-third of patients. Conventional follow-up based on sac diameter and visual assessment may fail to detect early microstructural changes that precede clinical deterioration. Methods: This narrative review summarizes the current evidence on texture-based radiomics and artificial intelligence (AI) applied to computed tomography (CT) and CT angiography (CTA) for post-EVAR outcome prediction and surveillance. Original studies evaluating radiomic features and AI-based models for endoleak detection, aneurysm sac behavior, and EVAR-related adverse events were included and qualitatively synthesized. Results: Ten studies were included. Radiomic features describing texture heterogeneity, gray-level nonuniformity, entropy, and spatial complexity were extracted from the aneurysm sac, intraluminal thrombus, and perivascular adipose tissue. Machine learning and deep learning models achieved good to excellent performance, with reported AUC values ranging from 0.78 to 0.95 for predicting endoleaks, sac expansion, and severe adverse events. Texture-based radiomics consistently outperformed morphology-only assessments and showed complementary value to deep learning, including applications on non-contrast CT. Conclusions: CT texture radiomics combined with AI represents an emerging research approach with potential relevance for post-EVAR surveillance, although current evidence remains limited. By capturing tissue heterogeneity beyond conventional morphology, radiomics may enable earlier detection of complications and support risk-adapted follow-up. However, methodological heterogeneity, limited external validation, and reproducibility issues remain major barriers to clinical translation.
(This article belongs to the Special Issue Computed Tomography Imaging in Medical Diagnosis, 2nd Edition)
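
Several of the pooled features (entropy, gray-level nonuniformity, texture heterogeneity) come from gray-level co-occurrence matrices. A small illustrative extraction with scikit-image is sketched below; the ROI, discretization to 64 gray levels, and offsets are assumptions, not a validated radiomics pipeline.

```python
# Illustrative GLCM texture features with scikit-image; ROI and settings are
# assumptions, not a validated radiomics pipeline.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 64, size=(96, 96), dtype=np.uint8)  # stand-in CT sac ROI, 64 gray levels

glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# GLCM entropy is not built into graycoprops; average -sum(p*log2(p)) over offsets.
p = glcm + 1e-12
features["entropy"] = float((-p * np.log2(p)).sum(axis=(0, 1)).mean())
print(features)
```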

18 pages, 11885 KB  
Article
Dopant-Engineered Downshifting Nanoparticles with Dual NIR-II Fluorescence and Magnetic Resonance Imaging for Diagnosis and Image-Guided Surgery of Breast Cancer
by Zia Ullah, Mu Du, Lihong Jiang, Yibin Yan, Yuqian Yan, Jingsi Gu, Jing Cheng, Bing Guo and Zun Wang
Biosensors 2026, 16(3), 180; https://doi.org/10.3390/bios16030180 - 23 Mar 2026
Viewed by 209
Abstract
As surgery is the first-line paradigm for many solid tumors, precision in preoperative diagnosis and intraoperative imaging is of significant importance. Dual MRI and NIR-II fluorescence imaging could fulfill precision imaging requirements in treating cancers because of their deep penetration and high real-time spatiotemporal resolution. Thus, the design of dual MRI/NIR-II fluorescence contrast agents is crucial for the diagnosis and surgery of cancers. Herein, we developed optically transparent NaGdF₄ matrix-based downshifting nanoparticles (DSNPs) co-doped with Nd³⁺, Yb³⁺, and Er³⁺ as a single nanoplatform for dual NIR-II fluorescence and T1-weighted MRI. Systematic dopant engineering reveals that optimal Nd³⁺ loading enhances cascade Nd → Yb → Er energy transfer and yields intense NIR-II emission at 1334 and 1521 nm upon 808 nm excitation with a relative quantum yield of 1.55, while the presence of Gd³⁺ in the optically transparent matrix imparts strong T1 contrast (4.98 s⁻¹ mM⁻¹). The Pluronic F-127 surface coating confers colloidal stability and biocompatibility. In vitro assays confirm negligible cytotoxicity and efficient cellular uptake. In vivo studies in subcutaneous 4T1 tumor-bearing mice demonstrate robust tumor accumulation and high tumor-to-background contrast in both MRI and NIR-II fluorescence, and enable precise NIR-II fluorescence imaging-guided surgery with real-time margin visualization. Therefore, dopant-engineered DSNPs represent a promising dual-modal imaging agent for deep-tissue diagnosis and real-time surgical guidance in precision oncology.

17 pages, 3640 KB  
Article
A 3D Global-Patch Transformer for Brain Age Prediction Using T1-Weighted MRI with Gray and White Matter Maps
by Seung-Jun Lee, Myungeun Lee, Yoo Ri Kim and Hyung-Jeong Yang
Appl. Sci. 2026, 16(6), 3004; https://doi.org/10.3390/app16063004 - 20 Mar 2026
Viewed by 103
Abstract
With the increasing prevalence of neurodegenerative diseases driven by population aging, imaging-based biomarkers are needed to quantify brain aging at an early stage. Brain age, which estimates structural brain aging relative to chronological age, has emerged as a useful indicator. Prior work has mainly used T1-weighted MRI with deep learning models such as convolutional neural networks (CNNs) or transformers; however, many approaches insufficiently capture three-dimensional structural continuity and localized anatomical patterns, and tissue-specific aging in gray matter (GM) and white matter (WM) is often treated as auxiliary. To address these limitations, we propose a 3D Global–Patch Transformer framework for brain age prediction that directly processes volumetric data while jointly learning global brain structure and local anatomical features. Our model runs global and patch pathways in parallel and explicitly incorporates GM and WM structural maps alongside T1-weighted MRI to encode tissue-specific aging signals. Experiments on multiple public datasets, including IXI and OASIS, show that the proposed method reduces mean absolute error (MAE) by approximately 10–15% compared with CNN-based and single-input transformer baselines, with notably improved performance in older populations, highlighting the value of tissue-level structural information for brain age estimation.
(This article belongs to the Special Issue MR-Based Neuroimaging, 2nd Edition)
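
As a rough illustration of the dual-pathway idea, the sketch below runs a global volume encoder and a patch encoder in parallel over three input channels (T1 plus GM and WM maps) and fuses them for age regression. All sizes, the attention block, and the fusion scheme are assumptions standing in for the paper's architecture.

```python
# Hedged sketch of a parallel global/patch design for brain-age regression;
# all sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class GlobalPatchBrainAge(nn.Module):
    def __init__(self, in_ch=3, dim=64):      # channels: T1 + GM map + WM map
        super().__init__()
        def enc():                            # tiny 3D conv encoder -> (B, dim)
            return nn.Sequential(
                nn.Conv3d(in_ch, dim, 3, stride=2, padding=1), nn.GELU(),
                nn.Conv3d(dim, dim, 3, stride=2, padding=1), nn.GELU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.global_enc = enc()               # whole (downsampled) volume
        self.patch_enc = enc()                # local cubes, weights not shared
        self.mix = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, vol, patches):          # vol: (B,3,D,H,W); patches: (B,P,3,d,h,w)
        g = self.global_enc(vol)                                   # (B, dim)
        B, P = patches.shape[:2]
        p = self.patch_enc(patches.flatten(0, 1)).view(B, P, -1)   # (B, P, dim)
        p, _ = self.mix(p, p, p)                                   # transformer-style patch mixing
        fused = torch.cat([g, p.mean(dim=1)], dim=-1)              # global + pooled local
        return self.head(fused).squeeze(-1)                        # predicted age per subject

age = GlobalPatchBrainAge()(torch.randn(2, 3, 64, 64, 64),
                            torch.randn(2, 8, 3, 16, 16, 16))
```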

13 pages, 1466 KB  
Systematic Review
The Diagnostic Value of Indocyanine Green in the Assessment of Depth of Burn Injuries: A Systematic Review
by Marie K. Hilgarth, Samuel Knoedler, Gabriel Hundeshagen, Adriana C. Panayi, Bong-Sung Kim, Jochen-Frederick Hernekamp and Valentin F. M. Haug
Eur. Burn J. 2026, 7(1), 19; https://doi.org/10.3390/ebj7010019 - 19 Mar 2026
Viewed by 92
Abstract
Background: Accurate assessment of burn depth remains a clinical challenge and requires specific training. To improve diagnostic accuracy, various technical methods have been developed. This review summarizes current evidence on indocyanine green (ICG) fluorescence imaging for burn depth assessment and compares its performance with clinical, histological, and alternative modalities such as laser Doppler imaging (LDI). Methods: A systematic literature search was conducted in PubMed/MEDLINE, Cochrane, and Google Scholar to identify studies evaluating burn depth using ICG fluorescence imaging. Studies from 1995 to 2024 were included if they compared ICG to at least one reference method (clinical assessment, biopsy, or other technical modalities). Data extraction was performed independently by two reviewers. Risk of bias was assessed using the Newcastle–Ottawa Scale. The study selection workflow is shown in the PRISMA 2020 flow diagram for systematic reviews. Results: Nine studies with a total of 151 patients, published between 1995 and 2024, met the inclusion criteria. Results were synthesized descriptively due to substantial methodological heterogeneity. Two studies reported high accuracy of ICG fluorescence imaging for identifying nonviable tissue and supporting surgical planning, although differentiation between superficial and deep partial-thickness burns (SPTBs/DPTBs) was inconsistent. In one study, ICGA-guided assessment reduced or avoided excision in 10 of 20 burn sites (50%). Yet heterogeneity in measurement protocols, cut-off values, and reference standards limited comparability across studies. Conclusions: Due to its limited accuracy in differentiating SPTBs and DPTBs, ICG imaging has restricted utility for burn depth assessment, though it may still offer intraoperative benefit during necrosectomy. Registration: PROSPERO, the international prospective register of systematic reviews funded by the National Institute for Health Research (CRD420251161190).

25 pages, 6467 KB  
Review
Ultrasound Patches Toward Intelligent Theranostics: From Flexible Materials to Closed-Loop Biomedical Systems
by Jinpeng Zhao, Yi Huang, Yuan Zhang, Yuhang Xie, Wei Guo, Yang Li and Shidong Wang
Bioengineering 2026, 13(3), 345; https://doi.org/10.3390/bioengineering13030345 - 17 Mar 2026
Viewed by 287
Abstract
Ultrasound patches represent a transformative advancement beyond conventional ultrasonography, evolving into intelligent theranostic systems for personalized healthcare. This evolution is propelled by synergistic innovations in flexible piezoelectric materials and integrated designs. The development of piezoelectric polymers, lead-free ceramics, and bio-composite materials has laid the foundation for long-term, conformal, and biosafe interfacing with the human body. Structurally, miniaturized transducer arrays (e.g., CMOS-integrated arrays achieving ~200 μm focal spots and 100 kPa focal pressure), multimodal integration, and bioinspired interfaces have enabled high-precision deep-tissue sensing and spatiotemporally controlled energy delivery—exemplified by strain-sensing feedback improving the signal-to-noise ratio by 5 dB for precise neuromodulation. These capabilities are converging to create closed-loop platforms, as demonstrated in continuous cardiovascular monitoring (up to 164 mm depth for 12 h), image-guided neuromodulation for neurological disorders, on-demand drug delivery (achieving 100% higher plasma concentration than ultrasound alone), and integrated tumor therapy with real-time feedback. Despite persistent challenges in material biocompatibility, energy efficiency, and clinical standardization, the future of ultrasound patches lies in their deep integration with multimodal sensing, machine learning, and adaptive control algorithms. This path will ultimately realize their potential for intelligent, closed-loop theranostics in chronic disease management, telemedicine, and personalized therapy.
(This article belongs to the Section Biomedical Engineering and Biomaterials)

25 pages, 9628 KB  
Article
Real-Time Endoscopic Video Enhancement via Degradation Representation Estimation and Propagation
by Handing Xu, Zhenguo Nie, Tairan Peng and Xin-Jun Liu
J. Imaging 2026, 12(3), 134; https://doi.org/10.3390/jimaging12030134 - 16 Mar 2026
Viewed by 224
Abstract
Endoscopic images are often degraded by uneven illumination, motion blur, and tissue occlusion, which obscure critical anatomical details and complicate surgical manipulation. This issue is particularly pronounced in single-port endoscopic surgery, where the imaging capability of the camera is further constrained by limited working space. While deep learning-based enhancement methods have demonstrated impressive performance, most existing approaches remain too computationally demanding for real-time surgical use. To address this challenge, we propose an efficient stepwise endoscopic image enhancement framework that introduces an implicit degradation representation as an intermediate feature to guide the enhancement module toward high-quality results. The framework further exploits the temporal continuity of endoscopic videos, based on the assumption that image degradation evolves smoothly over short time intervals. Accordingly, high-quality degradation representations are estimated only on key frames at fixed intervals, while the representations for the remaining frames are obtained through fast inter-frame propagation, thereby significantly improving computational efficiency while maintaining enhancement quality. Experimental results demonstrate that our method achieves an excellent balance between enhancement quality and computational efficiency. Further evaluation on a downstream segmentation task suggests that our method substantially enhances understanding of the surgical scene, validating that implicit degradation-representation learning and propagation offer a practical pathway to real-time clinical application.
(This article belongs to the Section Medical Imaging)
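
The keyframe scheme is the part most readily expressed in code: a heavier estimator produces the degradation embedding only every k-th frame, and a cheap recurrent update propagates it in between. The sketch below (PyTorch) is one possible reading; the module shapes, the GRU propagator, and the interval are assumptions.

```python
# One reading of the keyframe scheme: a heavier estimator runs only on key
# frames; a cheap recurrent cell propagates the degradation embedding in
# between. Module shapes, the GRU propagator, and the interval are assumptions.
import torch
import torch.nn as nn

heavy_estimator = nn.Sequential(              # full estimation, key frames only
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 16))
light_propagator = nn.GRUCell(16, 16)         # fast per-frame embedding update

def degradation_stream(frames, key_interval=8):
    """Yield one degradation embedding per frame of a (T, 3, H, W) clip."""
    z = None
    for t, frame in enumerate(frames):
        if t % key_interval == 0:
            z = heavy_estimator(frame[None])  # re-estimate on the key frame
        else:
            z = light_propagator(z, z)        # fast inter-frame propagation
        yield z                               # would condition the enhancement module

clip = torch.randn(24, 3, 128, 128)
embeddings = list(degradation_stream(clip))   # 24 embeddings, only 3 heavy passes
```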

11 pages, 1610 KB  
Article
Pyogenic Spondylitis with Epidural Abscess Caused by Streptococcus suis Serotype 2 ST7: Tissue mNGS Confirmation and Whole-Genome Characterization of a Human Isolate
by Peiyan He, Henghui Wang, Ping Li, Yong Yan, Lei Gao and Lu Chen
Pathogens 2026, 15(3), 314; https://doi.org/10.3390/pathogens15030314 - 13 Mar 2026
Viewed by 252
Abstract
Streptococcus suis is an emerging zoonotic pathogen that typically causes bacteremia or meningitis in humans, whereas vertebral osteomyelitis with epidural abscess is exceedingly rare and may be missed. We describe a 65-year-old farmer with fever and severe low back pain after long-term bare-handed handling of raw pig lungs. Pre-treatment blood cultures yielded S. suis identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). After transient improvement on empirical therapy, fever recurred with worsening lumbar pain. Contrast-enhanced magnetic resonance imaging (MRI) demonstrated multilevel thoracolumbar pyogenic spondylitis with an epidural abscess and a sub-ligamentous abscess beneath the posterior longitudinal ligament (PLL) extending from L2 to L5. Computed tomography-guided lumbar biopsy followed by tissue metagenomic next-generation sequencing (mNGS) detected S. suis, providing concordant evidence supporting pathogen involvement at the vertebral focus. The bloodstream isolate (SS-JX2025-01) was serotype 2, sequence type 7 (ST7). It remained susceptible to β-lactams and glycopeptides but was resistant to the macrolide–lincosamide and tetracycline classes, consistent with erm(B), tet(O), tet(40), and ant(6)-Ia detected by whole-genome sequencing (WGS). Virulence profiling revealed an epf+/sly+/mrp pattern with multiple adhesins and immune-evasion factors, whereas canonical 89K pathogenicity island markers were absent. Core-genome phylogeny placed SS-JX2025-01 within the Chinese ST7 lineage associated with previous outbreaks. This biopsy-supported case expands the clinical spectrum of invasive S. suis infection, highlights the value of tissue mNGS as an adjunct for identifying deep-seated infectious foci in zoonotic disease, and underscores the importance of occupational prevention in small-scale farming households.
(This article belongs to the Section Bacterial Pathogens)

24 pages, 5800 KB  
Article
Uncovering Hidden Prognostic Patterns in Colorectal Cancer Histology Using Unsupervised Learning: A Computational Pathology Study
by Wen-Tong Zhou, Yong Liu, Gang Yu, Kuan-Song Wang, Chao Xu, Jonathan Greenbaum, Chong Wu, Lin-Dong Jiang, Christopher J. Papasian, Hong-Mei Xiao and Hong-Wen Deng
Bioengineering 2026, 13(3), 334; https://doi.org/10.3390/bioengineering13030334 - 13 Mar 2026
Viewed by 323
Abstract
Colorectal cancer (CRC) remains a leading cause of cancer mortality globally, yet current histopathological diagnostics capture only limited features. This study aimed to discover subtle, prognostically significant histomorphological patterns in CRC tissues using unsupervised deep learning. We developed a framework integrating convolutional neural networks with deep clustering, trained on 23,341 image patches from 493 patients. We identified 30 distinct histomorphological clusters from CRC tissue images. In univariate and multivariate survival analyses, three clusters (Cluster13, Cluster19, and Cluster24) were consistently associated with patient prognosis. These clusters were integrated with clinical factors (T stage, N stage, and differentiation degree) to construct a prognostic risk model. Patients stratified into high-risk and low-risk groups based on model predictions showed significant survival differences in both the training set (N = 493) and an independent validation set (N = 2590). Furthermore, logistic regression and multivariate Cox analyses demonstrated that incorporating the three histomorphological clusters alongside clinical factors yielded a modest but statistically significant improvement in predictive performance compared to clinical factors alone, indicating their complementary value for prognosis. This work demonstrates that computational pathology can uncover novel, visually elusive morphological features with independent prognostic value, offering potential to refine CRC patient stratification and inform clinical decision-making.
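
The pipeline's two stages, unsupervised clustering of patch embeddings followed by survival modeling on per-patient cluster proportions, can be sketched compactly. Below is a hedged outline on synthetic data using scikit-learn and lifelines; the embedding source, the 30-cluster count, and the three highlighted cluster indices simply mirror the abstract and are not the authors' code.

```python
# Hedged outline of the two-stage analysis on synthetic data (scikit-learn +
# lifelines); nothing here is the authors' implementation.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
emb = rng.normal(size=(5000, 512))           # stand-in CNN patch embeddings
patient = rng.integers(0, 100, size=5000)    # patch -> patient assignment

labels = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(emb)

prop = np.zeros((100, 30))                   # per-patient cluster proportions
for pid, c in zip(patient, labels):
    prop[pid, c] += 1
prop /= prop.sum(axis=1, keepdims=True).clip(min=1)

df = pd.DataFrame(prop[:, [13, 19, 24]], columns=["c13", "c19", "c24"])
df["T"] = rng.exponential(60.0, size=100)    # synthetic follow-up (months)
df["E"] = rng.integers(0, 2, size=100)       # synthetic event indicator
CoxPHFitter().fit(df, duration_col="T", event_col="E").print_summary()
```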

20 pages, 3878 KB  
Article
A Hybrid Multimodal Cancer Diagnostic Framework Integrating Deep Learning of Histopathology and Whispering Gallery Mode Optical Sensors
by Shereen Afifi, Amir R. Ali, Nada Haytham Abdelbasset, Youssef Poulis, Yasmin Yousry, Mohamed Zinal, Hatem S. Abdullah, Miral Y. Selim and Mohamed Hamed
Diagnostics 2026, 16(6), 848; https://doi.org/10.3390/diagnostics16060848 - 12 Mar 2026
Viewed by 373
Abstract
Background/Objectives: Biopsy examination remains the gold standard for cancer diagnosis, relying on histopathological assessment of tissue samples to identify malignant changes. However, manual interpretation of histopathological slides is time-consuming, subjective, and susceptible to inter-observer variability. The digitization of histopathological images enables automated analysis and offers opportunities to support clinicians with more consistent and objective diagnostic tools. This study aims to enhance cancer diagnosis by proposing a hybrid framework that integrates deep-learning-based histopathological image analysis with Whispering Gallery Mode (WGM) optical sensing for complementary tissue characterization. Methods: The proposed framework combines automated tumor classification from histopathological images with biochemical signal analysis obtained from WGM optical sensors. Deep learning models, including EfficientNet-B0, InceptionV3, and Vision Transformer (ViT), were employed for binary and multi-class tumor classification using the BreakHis dataset. To address class imbalance, a Deep Convolutional Generative Adversarial Network (DCGAN) was utilized to generate synthetic histopathological images alongside conventional data augmentation techniques. In parallel, WGM optical sensors were incorporated to capture subtle tissue-specific signatures, with machine learning algorithms enabling automated feature extraction and classification of the acquired signals. Results: In multi-class classification, InceptionV3 combined with DCGAN-based augmentation achieved an accuracy of 94.45%, while binary classification reached 96.49%. Fine-tuned Vision Transformer models achieved a higher classification accuracy of 98% on the BreakHis dataset. The integration of WGM optical sensing provided additional biochemical information, offering complementary insights to image-based analysis and supporting more robust diagnostic decision-making. Conclusions: The proposed hybrid framework demonstrates the potential of combining deep-learning-based histopathological image analysis with WGM optical sensing to improve the accuracy and reliability of cancer classification. By integrating morphological and biochemical information, the framework offers a promising approach for enhanced, objective, and supportive cancer diagnostic systems.
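
Since the framework leans on DCGAN-generated patches to balance classes, a compact generator of the standard DCGAN shape may be a useful reference. The sketch below (PyTorch) is generic; the latent size, channel widths, and 64×64 output are assumptions rather than the paper's settings.

```python
# Generic DCGAN generator for 64x64 synthetic patches; latent size and
# channel widths are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        def up(cin, cout):                    # transposed conv doubling resolution
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                                 nn.BatchNorm2d(cout), nn.ReLU(True))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0),          # 1x1 -> 4x4
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            up(ch * 8, ch * 4),                                  # 8x8
            up(ch * 4, ch * 2),                                  # 16x16
            up(ch * 2, ch),                                      # 32x32
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())       # 64x64 RGB in [-1, 1]

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

fake = DCGANGenerator()(torch.randn(16, 100))  # 16 synthetic patches for augmentation
```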

11 pages, 746 KB  
Article
Optical Coherence Tomography Angiography in Patients with Mixed Connective Tissue Disease
by Magdalena Szeretucha, Katarzyna Paczwa, Katarzyna Romanowska-Próchnicka, Sylwia Ornowska, Radosław Różycki and Joanna Gołębiewska
Biomedicines 2026, 14(3), 612; https://doi.org/10.3390/biomedicines14030612 - 9 Mar 2026
Viewed by 257
Abstract
Background: Mixed connective tissue disease (MCTD) is a rare systemic autoimmune disease which presents with clinical features that overlap with at least two connective tissue disorders, including systemic lupus erythematosus (SLE), systemic sclerosis (SSc), polymyositis (PM), dermatomyositis (DM), and rheumatoid arthritis (RA). It is characterized by the presence of anti-ribonucleoprotein (anti-U1RNP) antibodies. The mechanism of the vasculopathy associated with MCTD remains largely unknown. Optical coherence tomography angiography (OCTA) is a non-invasive imaging method for the microvasculature of the retina and choroid, enabling assessment of retinal perfusion. Objectives: The aim of the study was to evaluate OCTA parameters in patients with MCTD compared to healthy individuals. Methods: In this study, we compared the following parameters between patients with MCTD and healthy subjects: foveal avascular zone (FAZ), FAZ perimeter (PERIM), flow density (FD), choriocapillaris flow area (CCFA), outer retina flow area (ORFA), and foveal and parafoveal mean superficial and deep vessel density. Results: Parafoveal mean superficial vessel density and parafoveal mean deep vessel density were significantly lower in the MCTD group than in controls. The FAZ, PERIM, and FD values in patients with MCTD were also lower than in the control group, with statistically significant differences for all parameters. Conclusions: The present study’s findings suggest the presence of ocular vascular abnormalities in patients suffering from MCTD. These abnormalities are characterized by decreased retinal vessel density and lower choriocapillaris flow. The results demonstrate the significant role of OCTA in the diagnosis and monitoring of microvascular changes in patients with MCTD.
(This article belongs to the Section Molecular and Translational Medicine)

26 pages, 770 KB  
Review
Artificial Intelligence in Reflectance Confocal Microscopy for Cutaneous Melanoma Computer-Assisted Detection: A Literature Review of Related Applications
by Luana Conte, Angela Filoni, Luca Schinzari, Ester Sofia Congedo, Lucia Pietroleonardo, Rocco Rizzo, Ugo De Giorgi, Donato Cascio, Giorgio De Nunzio and Maurizio Congedo
Appl. Biosci. 2026, 5(1), 20; https://doi.org/10.3390/applbiosci5010020 - 9 Mar 2026
Viewed by 247
Abstract
Cutaneous melanoma is one of the most aggressive skin cancers, and early diagnosis remains essential to reduce mortality. Reflectance Confocal Microscopy (RCM) provides non-invasive, quasi-histological images of the epidermis, dermoepidermal junction (DEJ), and dermis, enabling real-time assessment of melanocytic lesions. However, interpretation still relies on expert visual evaluation, which is time-consuming and subjective. In this context, Artificial Intelligence (AI) and Computer-Assisted Detection (CAD) systems are emerging as valuable tools to improve diagnostic accuracy and reproducibility. This review summarizes research on AI applications in RCM imaging for melanoma, focusing on three major areas: delineation of skin strata, segmentation of tissues and morphological patterns, and classification of benign versus malignant lesions. Early approaches included Bayesian classifiers, wavelet-based decision trees, and logistic regression, while recent studies have employed support vector machines, random forests, and increasingly deep learning architectures such as convolutional and recurrent neural networks. The results demonstrate encouraging accuracy in DEJ localization, the segmentation of diagnostically relevant patterns, and the discrimination of melanoma from benign nevi. We distinguish the maturity of dermoscopy-based AI (AUC (ROC) > 0.80 on large multicenter cohorts) from the still-exploratory evidence for RCM-based AI. Nonetheless, current studies are often limited by small datasets, heterogeneous protocols, and a lack of multicenter validation. Overall, progress in AI applied to RCM supports the development of CAD systems that could assist clinicians during acquisition and diagnosis, reducing unnecessary biopsies and improving early melanoma detection. Future work should address standardization, dataset expansion, and the integration of advanced AI methods to move closer to clinical implementation.
(This article belongs to the Special Issue Neural Networks and Deep Learning for Biosciences)

33 pages, 593 KB  
Review
AI-Driven Innovations for Quality Control and Standardization: Future Strategies in Adipose-Derived Stem Cell Manufacturing
by Riccardo Foti, Gabriele Storti, Marco Palmesano, Alessio Calicchia, Roberta Foti, Guido Ciprandi, Giulio Cervelli, Maria Giovanna Scioli, Augusto Orlandi and Valerio Cervelli
Int. J. Mol. Sci. 2026, 27(5), 2388; https://doi.org/10.3390/ijms27052388 - 4 Mar 2026
Viewed by 533
Abstract
Artificial intelligence (AI), including machine learning (ML) and deep learning (DL), is increasingly transforming the study, manufacturing, and clinical translation of adipose-derived stem/stromal cells (ADSCs). ADSC-based therapies face persistent challenges related to donor variability, heterogeneous cell populations, limited standardization of culture protocols, and the need for robust quality control (QC) and potency assessment under Good Manufacturing Practice (GMP) conditions. This review discusses how AI-driven approaches can support the ADSC pipeline from donor and tissue pre-screening, through isolation and expansion, to differentiation and batch release decisions. We highlight major methodological advances in computer vision and label-free imaging for monitoring morphology, confluency, proliferation, senescence, and contamination, as well as AI-assisted optimization strategies for culture parameters and differentiation protocols. In addition, we examine the growing role of multi-omics integration (transcriptomics, proteomics, metabolomics, and secretomics) combined with ML to predict functional potency, stratify donors, and identify biomarkers associated with therapeutic efficacy. Finally, we address current limitations, including data scarcity, inter-laboratory variability, model interpretability, and regulatory requirements, and outline future perspectives such as closed-loop bioprocess control, foundation models, and federated learning frameworks. Overall, AI offers a powerful toolkit to improve the reproducibility, safety, and scalability of ADSC manufacturing and to accelerate the development of standardized, data-driven regenerative medicine products.
(This article belongs to the Special Issue New Insights in Translational Bioinformatics: Second Edition)

25 pages, 1948 KB  
Article
VDTAR-Net: A Cooperative Dual-Path Convolutional Neural Network–Transformer Network for Robust Highlight Reflection Segmentation
by Qianlong Zhang and Yue Zeng
Computers 2026, 15(3), 168; https://doi.org/10.3390/computers15030168 - 4 Mar 2026
Viewed by 248
Abstract
In medical endoscopic imaging, specular reflection (SR) frequently leads to local overexposure, obscuring essential tissue information and complicating computer-aided diagnosis (CAD). Traditional convolutional neural networks (CNNs) face difficulties in modeling global illumination phenomena due to their locally biased receptive fields and the inherent “object assumption.” Conversely, pure transformer models often lose high-frequency boundary details and incur substantial computational costs. To tackle these challenges, this paper introduces VDTAR-Net, a specialized framework adapted to the unique optical characteristics of specular reflections. Building upon hybrid architectures, our contribution focuses on two core mechanisms: (1) a Cross-architecture Fusion Module (CFM) that enables deep, bidirectional information flow, allowing the Transformer’s global illumination modeling to continuously correct the CNN’s local texture biases; and (2) a Reflective-Aware Module (RAM), which explicitly integrates the physical prior of high-intensity saturation into the attention mechanism. This task-specific design significantly enhances sensitivity to boundary details in overexposed regions. We also created the first large-scale, expert-labeled cervical white-light segmentation dataset, Cervix-WL-900. High-quality ground-truth labels were generated through rigorous double-blind annotation and arbitration by senior experts. Experimental results show that VDTAR-Net achieves a Dice score of 92.56% and a mean Intersection over Union (mIoU) of 87.31% on Cervix-WL-900, demonstrating superior performance compared to methods such as U-Net, DeepLabv3+, SegFormer, and PSPNet. Ablation studies further confirm the substantial contributions of dual-path collaboration, CFM deep fusion, and RAM task-specific priors. VDTAR-Net provides a robust baseline for precise highlight segmentation, laying a foundation for subsequent image quality assessment, restoration, and feature decoupling in diagnostic models.
(This article belongs to the Special Issue AI in Bioinformatics)
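
The Reflective-Aware Module is described only at a high level, so the following is a speculative sketch of one way a high-intensity saturation prior can enter an attention computation: as an additive bias toward near-saturated keys. The threshold, bias weight, and token layout are all assumptions about the module's design.

```python
# Speculative sketch of a saturation prior entering attention as an additive
# bias toward near-saturated keys; threshold, bias weight, and token layout
# are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class ReflectiveAwareAttention(nn.Module):
    def __init__(self, dim=64, sat_thresh=0.9, bias=2.0):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.scale = dim ** -0.5
        self.sat_thresh, self.bias = sat_thresh, bias

    def forward(self, tokens, luma):
        # tokens: (B, N, dim) patch features; luma: (B, N) mean patch intensity in [0, 1]
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1) * self.scale   # (B, N, N) similarity
        sat = (luma > self.sat_thresh).float()        # 1 where likely specular
        attn = attn + self.bias * sat[:, None, :]     # push attention toward saturated keys
        return torch.softmax(attn, dim=-1) @ v

out = ReflectiveAwareAttention()(torch.randn(2, 196, 64), torch.rand(2, 196))
```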
