Search Results (1,033)

Search Parameters:
Keywords = diagnostic workflow

14 pages, 469 KB  
Case Report
Mycobacterium fortuitum: A Neglected Cause of Culture-Negative Prosthetic Valve Endocarditis and a Literature Review
by Selen Şahin, İrem Tümkaya Kılınç, Eda Yüksel, Çağla Mehmet, Bedia Dinç and Emine Alp Meşe
Infect. Dis. Rep. 2026, 18(2), 23; https://doi.org/10.3390/idr18020023 (registering DOI) - 13 Mar 2026
Abstract
Background/Objectives: Prosthetic valve endocarditis caused by non-tuberculous mycobacteria is a rare but serious condition and is often associated with delayed diagnosis due to initially negative routine blood cultures with late positivity after prolonged incubation. Mycobacterium fortuitum, a rapidly growing mycobacterium, is an uncommon cause of endocarditis but may result in significant morbidity if not promptly identified. Methods: We report a 67-year-old man with prior cardiac surgery who presented 18 months later with recurrent fever, weight loss, and renal dysfunction. Initial blood cultures, echocardiography, and standard imaging were non-diagnostic. Ongoing clinical suspicion prompted extended mycobacterial cultures with prolonged incubation and molecular identification performed at a reference laboratory, which revealed M. fortuitum. Results: Antimicrobial susceptibility testing demonstrated susceptibility to amikacin, ciprofloxacin, and clarithromycin, and treatment was initiated with an amikacin-based combination regimen. The patient showed marked clinical and laboratory improvement, including resolution of fever and stabilization of renal function. Conclusions: This case highlights the diagnostic and therapeutic challenges of M. fortuitum prosthetic valve endocarditis and underscores the limitations of routine diagnostic methods in culture-negative endocarditis. It also emphasizes the importance of prolonged incubation and targeted microbiological workflows in suspected cases. Full article

23 pages, 2115 KB  
Review
Artificial Intelligence in Cardiovascular Imaging: From Automated Acquisition to Precision Diagnostics and Clinical Decision Support
by Minodora Teodoru, Alexandra-Kristine Tonch-Cerbu, Dragoș Cozma, Cristina Văcărescu, Raluca-Daria Mitea, Florina Batâr, Horea-Laurentiu Onea, Florin-Leontin Lazăr and Alina Camelia Cătană
Med. Sci. 2026, 14(1), 132; https://doi.org/10.3390/medsci14010132 - 11 Mar 2026
Abstract
Cardiovascular imaging is a cornerstone of modern cardiology, yet its clinical impact is limited by operator dependence, inter-observer variability, time-consuming workflows, and unequal access to advanced expertise. Artificial intelligence (AI), particularly machine learning and deep learning, offers new opportunities to overcome these limitations. This review aims to summarize current and emerging AI applications in cardiovascular imaging and to evaluate their potential clinical value in precision diagnostics and decision support. This narrative review synthesizes clinically relevant literature on AI applications across major cardiovascular imaging modalities, including echocardiography, cardiovascular magnetic resonance, cardiac computed tomography, and nuclear cardiology. Evidence was analyzed with a focus on AI-enabled acquisition support, image segmentation, quantitative and functional assessment, workflow automation, and risk stratification, alongside key methodological and implementation considerations. Across imaging modalities, AI-driven approaches have demonstrated improved reproducibility, efficiency, and scalability of cardiovascular imaging workflows. Automated algorithms reduce operator dependence, facilitate standardized extraction of imaging biomarkers, and support advanced functional assessment and prognostic stratification. Recent developments in video-based, temporal, and multimodal models further expand AI capabilities from technical automation toward integrated disease phenotyping and personalized clinical decision support. However, translation into routine practice remains limited by heterogeneous datasets, insufficient external validation, algorithmic bias, limited interpretability, and challenges related to regulatory approval and workflow integration. Artificial intelligence has the potential to reshape cardiovascular imaging into a more efficient, reproducible, and patient-centered precision medicine tool. Real-world clinical impact will depend on outcome-driven evaluation, robust external validation, multimodal data integration, and human-in-the-loop implementation strategies that ensure safe, equitable, and clinically meaningful adoption. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Cardiovascular Medicine)

18 pages, 2234 KB  
Article
A Gated Attention-Based Multiple Instance Learning and Test-Time Augmentation Approach for Diagnosing Active Sacroiliitis in Sacroiliac Joint MRI Scans
by Zeynep Keskin, Onur İnan, Ömer Özberk, Reyhan Bilici, Sema Servi, Selma Özlem Çelikdelen and Mehmet Yıldırım
J. Clin. Med. 2026, 15(6), 2101; https://doi.org/10.3390/jcm15062101 - 10 Mar 2026
Viewed by 48
Abstract
Background and Objective: Axial spondyloarthritis (axSpA) is a group of chronic inflammatory diseases that primarily affect the sacroiliac joints. Early diagnosis is crucial for preventing irreversible structural damage. Magnetic Resonance Imaging (MRI) is the gold standard for detecting early inflammatory changes such as sacroiliitis. However, conventional MRI interpretation is inherently subjective and susceptible to both intra- and inter-observer variability. Therefore, artificial intelligence (AI)-driven diagnostic solutions are increasingly being explored. Among them, the Gated Attention Multiple Instance Learning (MIL) framework holds strong potential in modeling heterogeneous inflammatory distributions, thanks to its slice-level attention mechanism. This study aims to evaluate the diagnostic performance of a deep learning model based on Gated Attention MIL for automated sacroiliitis detection. Furthermore, its results are compared with a baseline deep learning architecture (standard ResNet-18), and its consistency with radiologist annotations is analyzed. Materials and Methods: The dataset included 554 subjects, comprising 276 patients diagnosed with axSpA and 278 healthy controls. All MRI data were derived from axial T2-weighted fat-suppressed (T2_TSE_TRA_FS) sequences. Patient-wise data splitting was employed to construct training, validation, and independent test sets. The proposed model architecture integrates ResNet-18-based feature extraction, a gated attention mechanism for instance-level weighting, and bag-level classification. Additionally, Test-Time Augmentation (TTA) was implemented to enhance robustness during inference. Results: On the independent test set, the model achieved an accuracy of 85.88%, sensitivity of 92.86%, specificity of 79.07%, and an F1-score of 86.67%. Attention heatmaps generated by the MIL module showed strong spatial overlap with bone marrow edema regions annotated by expert radiologists. Implementation of TTA led to an approximate 10% improvement in overall classification accuracy. Conclusions: The Gated Attention MIL framework demonstrated high diagnostic performance for sacroiliitis detection, indicating its value as a reliable decision support tool for early axSpA diagnosis. Validation on larger, multi-center datasets is warranted to ensure generalizability and to support clinical integration in routine radiology workflows. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
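
The gated-attention pooling that this abstract describes (slice-level attention weights aggregated into a bag-level prediction) can be sketched in a few lines of PyTorch. This is a minimal illustration in the spirit of Ilse et al.'s gated attention MIL, not the authors' implementation; the 512-D embeddings and ResNet-18 backbone are taken from the abstract, while the hidden width and class/module names are assumptions.

import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    # Gated attention MIL pooling: each slice embedding gets a weight from a
    # tanh branch modulated by a sigmoid gate; the bag (scan-level) embedding
    # is the attention-weighted sum of slice embeddings.
    def __init__(self, in_dim=512, hidden_dim=128):
        super().__init__()
        self.V = nn.Linear(in_dim, hidden_dim)    # tanh branch
        self.U = nn.Linear(in_dim, hidden_dim)    # sigmoid gate branch
        self.w = nn.Linear(hidden_dim, 1)         # per-slice attention score
        self.classifier = nn.Linear(in_dim, 1)    # bag-level logit

    def forward(self, h):  # h: (num_slices, in_dim) slice embeddings for one scan
        scores = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))  # (num_slices, 1)
        attn = torch.softmax(scores, dim=0)        # weights sum to 1 over slices
        bag = (attn * h).sum(dim=0)                # (in_dim,)
        return self.classifier(bag), attn.squeeze(-1)

# Toy usage: 24 axial slices, each embedded to 512-D by a backbone such as ResNet-18.
slices = torch.randn(24, 512)
logit, attn = GatedAttentionPool()(slices)
print(logit.shape, attn.shape)  # torch.Size([1]) torch.Size([24])

The per-slice weights returned by the module are what the abstract visualizes as attention heatmaps; test-time augmentation would simply average the bag logit over several augmented copies of the same scan.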

33 pages, 2576 KB  
Article
ExamQ-Gen: Instructor-in-the-Loop Generation of Self-Contained Exam Questions from Course Materials and Decision-Support Grading
by Catalin Anghel, Emilia Pecheanu, Andreea Alexandra Anghel, Marian Viorel Craciun and Adina Cocu
Computers 2026, 15(3), 177; https://doi.org/10.3390/computers15030177 - 9 Mar 2026
Viewed by 80
Abstract
Reliable evaluation of large language models (LLMs) for educational use requires benchmarks that reflect exam constraints, instructor grading practices, and the operational consequences of thresholded decisions. This paper introduces ExamQ-Gen, an instructor-in-the-loop benchmark that couples two tasks: (i) an LLM answering university-style exam questions and (ii) decision-support grading aligned with an instructor reference. Automatic grading is used for triage and feedback; in practice, ExamQ-Gen supports instructor-led exam authoring and provides grading recommendations, while the instructor issues the final grade and pass/fail decision. ExamQ-Gen is constructed from the course content by using an LLM to generate exam-style questions directly from the lecture materials, producing a course-derived question set suitable for controlled experimentation. The benchmark then instantiates contrasting exam conditions, including instructor-authored (HUMAN) versus pipeline-generated (PIPELINE) artifacts, to evaluate robustness under distribution shifts that can occur when exam questions and answers are produced through different generation workflows. Using two LLM “students” (Llama3-8B-Instruct and Mistral-7B-Instruct) and an LLM-based grader, we compare automatic grading against an instructor reference on a 1–10 score scale and at the decision level induced by the operational pass policy (pass if score ≥ 9). Accordingly, our conclusions are conditioned on the two evaluated student models. Score-level agreement is strong under HUMAN conditions but degrades substantially under PIPELINE conditions, indicating condition-dependent stability. At the pass threshold, decision errors are highly asymmetric, with false fails dominating false passes, meaning that conservative grading may appear safe while producing credit denial. A severity-focused analysis isolates a high-stakes failure mode—denial of instructor-perfect answers—and shows that, in the most affected PIPELINE condition, the perfect-pass miss rate reaches 0.926 (50/54), consistent with systematic conservatism rather than borderline noise. Overall, the results highlight that aggregate score agreement and accuracy are insufficient for instructor-controlled exam deployment and motivate reporting practices that combine disaggregated score agreement, threshold-based error asymmetry with uncertainty, and severity-aware diagnostics under exam-relevant condition shifts. Full article
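
The threshold-level error analysis in this abstract (false fails vs. false passes under the "pass if score ≥ 9" policy, plus the perfect-pass miss rate) reduces to simple counting; the sketch below is an illustration with made-up scores and a hypothetical helper name, not the ExamQ-Gen code.

def decision_errors(reference, automatic, threshold=9, perfect=10):
    # Decision-level comparison at the operational pass policy (pass if score >= threshold).
    false_fail = sum(r >= threshold > a for r, a in zip(reference, automatic))
    false_pass = sum(r < threshold <= a for r, a in zip(reference, automatic))
    perfect_pairs = [(r, a) for r, a in zip(reference, automatic) if r == perfect]
    missed = sum(a < threshold for _, a in perfect_pairs)
    miss_rate = missed / len(perfect_pairs) if perfect_pairs else float("nan")
    return false_fail, false_pass, miss_rate

# Toy data: instructor reference scores vs. automatic grades on a 1-10 scale.
ref  = [10, 10, 9, 8, 10, 6, 9]
auto = [ 8,  9, 9, 9,  7, 5, 9]
print(decision_errors(ref, auto))  # false fails = 2, false passes = 1, perfect-pass miss rate = 2/3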

31 pages, 5209 KB  
Review
AI-Driven Fault Detection and O&M for Wind Turbine Drivetrains: A Review of SCADA, CMS and Digital Twin Integration
by Ning Jia, Jiangzhe Feng, Zongyou Zuo, Zhiyi Liu, Tengyuan Wang, Chang Cai and Qingan Li
Energies 2026, 19(5), 1370; https://doi.org/10.3390/en19051370 - 7 Mar 2026
Viewed by 180
Abstract
The rapid expansion of wind energy has increased the operational complexity of wind turbines, where component degradation, environmental variability, and maintenance decisions are tightly coupled. Artificial intelligence (AI) has been widely applied to support fault detection and operation and maintenance (O&M), yet many existing studies remain fragmented and insufficiently address practical challenges such as heterogeneous data, sparse fault labels, and cross-site generalization. This review provides an engineering-oriented synthesis of AI-based methods for wind turbine fault detection and O&M, focusing on drivetrain diagnostics as a representative application. The literature is organized along an end-to-end O&M workflow, including SCADA-based condition monitoring, component-level fault diagnosis, health assessment and remaining useful life estimation, multi-modal blade inspection, and DT (Digital Twin) integration. Traditional ML (machine learning), ensemble methods, deep learning, physics-informed learning, and transfer learning are reviewed with respect to their data requirements, operational assumptions, and deployment constraints. Beyond algorithmic performance, this review discusses data governance, alarm design, model updating, and interpretability, and summarizes public datasets and emerging data resources. The aim is to bridge methodological advances and practical O&M requirements, supporting reliable and deployable AI applications in wind energy systems. Full article
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

29 pages, 2241 KB  
Review
Molecular Testing in Early Diagnosis and Clinical Assessment of Alzheimer’s Disease: A Narrative Review
by Zuzanna Rogacz, Wiktoria Pacuła, Barbara Strzałka-Mrozik and Artur Turek
Appl. Sci. 2026, 16(5), 2554; https://doi.org/10.3390/app16052554 - 6 Mar 2026
Viewed by 221
Abstract
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and one of the leading causes of dementia worldwide. With the increasing prevalence driven by population aging, there is a growing demand for early, accurate, and biologically grounded diagnostic approaches. Advances in molecular diagnostics have created new opportunities for early disease detection, staging, and monitoring of therapeutic responses, reshaping contemporary diagnostic workflows. Validated cerebrospinal fluid biomarkers—amyloid-β, total tau, and phosphorylated tau—form the core of current biologically based diagnostic criteria, while blood-based biomarkers such as plasma p-tau and neurofilament light chain are gaining prominence due to their minimally invasive nature and scalability. Advanced imaging techniques, including amyloid and tau positron emission tomography, further enhance diagnostic accuracy and support differentiation of AD from other neurodegenerative disorders. Despite these advances, the clinical implementation of molecular diagnostics remains limited by methodological heterogeneity, biological variability, and the lack of standardized analytical and clinical frameworks. Addressing these translational challenges is essential for integrating molecular biomarkers into routine clinical practice and for enabling reliable, large-scale screening and early diagnosis of AD. Full article

23 pages, 5448 KB  
Article
Evidence-Guided Diagnostic Reasoning for Pediatric Chest Radiology Based on Multimodal Large Language Models
by Yuze Zhao, Qing Wang, Yingwen Wang, Ruiwei Zhao, Rui Feng and Xiaobo Zhang
J. Imaging 2026, 12(3), 111; https://doi.org/10.3390/jimaging12030111 - 6 Mar 2026
Viewed by 178
Abstract
Pediatric respiratory diseases are a leading cause of hospital admissions and childhood mortality worldwide, highlighting the critical need for accurate and timely diagnosis to support effective treatment and long-term care. Chest radiography remains the most widely used imaging modality for pediatric pulmonary assessment. Consequently, reliable AI-assisted diagnostic methods are essential for alleviating the workload of clinical radiologists. However, most existing deep learning-based approaches are data-driven and formulate diagnosis as a black-box image classification task, resulting in limited interpretability and reduced clinical trustworthiness. To address these challenges, we propose a trustworthy two-stage diagnostic paradigm for pediatric chest X-ray diagnosis that closely aligns with the radiological workflow in clinical practice, in which the diagnosis procedure is constrained by evidence. In the first stage, a vision–language model fine-tuned on pediatric data identifies radiological findings from chest radiographs, producing structured and interpretable diagnostic evidence. In the second stage, a multimodal large language model integrates the radiograph, extracted findings, patient demographic information, and external medical domain knowledge with RAG mechanism to generate the final diagnosis. Experiments conducted on the VinDr-PCXR dataset demonstrate that our method achieves 90.1% diagnostic accuracy, 70.9% F1-score, and 82.5% AUC, representing up to a 13.1% increase in diagnosis accuracy over the state-of-the-art baselines. These results validate the effectiveness of combining multimodal reasoning with explicit medical evidence and domain knowledge, and indicate the strong potential of the proposed approach for trustworthy pediatric radiology diagnosis. Full article
(This article belongs to the Section AI in Imaging)

15 pages, 2031 KB  
Review
Artificial Intelligence in Venous Thromboembolism Prevention: A Narrative Review of Machine Learning, Deep Learning, and Natural Language Processing
by Daniela Nicoleta Crisan, Talida Georgiana Cut, Lucian-Flavius Herlo, Nina Ivanovic, Alexandra Herlo, Luana Alexandrescu, Andreea Sălcudean and Raluca Dumache
J. Cardiovasc. Dev. Dis. 2026, 13(3), 119; https://doi.org/10.3390/jcdd13030119 - 6 Mar 2026
Viewed by 212
Abstract
Venous thromboembolism (VTE), which includes deep vein thrombosis and pulmonary embolism, is a significant and preventable cause of morbidity and mortality worldwide. Despite the existence of clinical prediction models, biomarker-based risk assessments, and imaging techniques, gaps remain in accurately identifying and managing high-risk patients. In recent years, artificial intelligence has emerged as a transformative tool in healthcare, offering promising applications for enhancing VTE prevention strategies. This narrative review synthesizes current evidence on the use of artificial intelligence (AI) technologies including machine learning (ML), deep learning (DL), and natural language processing (NLP). We explore how supervised ML algorithms, such as random forests, support vector machines, and gradient boosting, improve predictive performance compared to traditional models by capturing complex, nonlinear relationships within electronic health record data. We also examine the role of DL models, particularly convolutional neural networks, in interpreting imaging data, achieving diagnostic accuracies comparable to expert radiologists. Additionally, the review highlights NLP applications in extracting risk-relevant information from unstructured clinical notes and the emerging integration of wearable device data and time-series analysis for dynamic risk assessment. We argue that the successful integration of AI into routine VTE prevention workflows requires rigorous prospective validation, cross-institutional collaboration, and thoughtful implementation into clinical decision support systems. Full article

12 pages, 1314 KB  
Article
Diagnostic and Prognostic Value of Serum Glial Fibrillary Acidic Protein in Acute Ischemic Stroke
by Luisa Agnello, Anna Maria Ciaccio, Fabio Del Ben, Mario Daidone, Gaetano Pacinella, Anna Masucci, Martina Tamburello, Caterina Maria Gambino, Antonino Tuttolomondo and Marcello Ciaccio
J. Clin. Med. 2026, 15(5), 1971; https://doi.org/10.3390/jcm15051971 - 4 Mar 2026
Viewed by 170
Abstract
Background: Acute ischemic stroke (AIS) remains a major cause of morbidity and mortality, with an unmet need for reliable blood-based biomarkers. Glial fibrillary acidic protein (GFAP), an astrocytic structural protein, is established in hemorrhagic stroke and traumatic brain injury, but its role in AIS remains incompletely defined. Methods: In this retrospective case-control study, we enrolled AIS patients and healthy controls. Serum GFAP was measured within 24 h using the Lumipulse G1200 automated assay. Stroke severity and outcome were assessed with the National Institutes of Health Stroke Scale (NIHSS) and functional outcome with the modified Rankin Scale (mRS). Associations with clinical measures were explored using Spearman correlation, and diagnostic accuracy was determined by ROC analysis. Results: GFAP levels were significantly higher in AIS patients than controls (median 132.9 vs. 30.0 pg/mL, p < 0.001). The ROC analysis yielded an AUC of 0.88 (95% CI 0.81–0.96). A cutoff of 71 pg/mL achieved 74% sensitivity and 92% specificity, while 150 pg/mL and 32 pg/mL optimized positive and negative predictive values (95% and 96%). GFAP was correlated with stroke severity (NIHSS, ρ = 0.37–0.40, p < 0.001) and disability (mRS, ρ = 0.48–0.49, p < 0.001). No significant differences appeared across TOAST subtypes. Conclusions: Serum GFAP is significantly elevated in AIS and demonstrates strong diagnostic and prognostic value. Integration of GFAP into clinical workflows may enhance early stroke detection and outcome prediction, supporting its role as a promising biomarker in AIS. Full article
(This article belongs to the Section Clinical Neurology)
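
The cutoff-based diagnostic metrics reported above (AUC, and sensitivity/specificity at 71 pg/mL) can be illustrated with synthetic GFAP values; only the group medians and the cutoff come from the abstract, while the distributions and sample sizes below are assumptions, so this is not the authors' analysis.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic serum GFAP (pg/mL) loosely centred on the reported medians
# (controls ~30, AIS ~133); lognormal spread is assumed for illustration.
controls = rng.lognormal(mean=np.log(30.0), sigma=0.6, size=100)
patients = rng.lognormal(mean=np.log(133.0), sigma=0.8, size=100)

labels = np.r_[np.zeros(controls.size), np.ones(patients.size)]
values = np.r_[controls, patients]
auc = roc_auc_score(labels, values)

cutoff = 71.0  # cutoff evaluated in the study
sensitivity = float(np.mean(patients >= cutoff))   # true-positive rate at the cutoff
specificity = float(np.mean(controls < cutoff))    # true-negative rate at the cutoff
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")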

15 pages, 2505 KB  
Article
Performance Validation of ORTHOSEG, a Novel Artificial Intelligence Tool for the Segmentation of Orthopantomographs and Intra-Oral X-Rays
by Giuseppe Cota, Gaetano Scaramozzino, Marco Chiesa, Lelio Gennaro, Maurizio Pascadopoli, Andrea Scribante and Marco Colombo
Clin. Pract. 2026, 16(3), 54; https://doi.org/10.3390/clinpract16030054 - 4 Mar 2026
Viewed by 345
Abstract
Background: Dental radiographs are essential for diagnosis and treatment planning in modern dentistry. However, their manual interpretation is time-consuming and subject to variability, highlighting the need for automated tools to improve efficiency and consistency. This study aims to validate ORTHOSEG, a deep learning-based system designed to automate the segmentation of anatomical, pathological, and non-pathological elements in radiographs, including orthopantomograms, bitewings, and periapical images. Methods: ORTHOSEG’s performance was evaluated using a rigorously curated dataset of 150 dental radiographs, including 50 orthopantomograms, 50 bitewings, and 50 periapical images, with manual annotations by expert clinicians serving as the ground truth. The system’s segmentation performance was assessed using standard evaluation metrics, including mean Dice Similarity Coefficient (mDSC) and mean Intersection over Union (mIoU), and inference time was also recorded. Results: The system achieved high accuracy, with mDSC and mIoU values of 0.635 ± 0.233 and 0.576 ± 0.214, respectively. In particular for orthopantomograms, it achieved an mDSC of 0.756 ± 0.174 and an mIoU of 0.684 ± 0.172, surpassing existing benchmarks. Its segmentation capabilities extend to approximately 70 distinct elements, underscoring its comprehensive utility. The system demonstrated efficient computational performance, with processing times of 19.745 ± 3.625 s for orthopantomograms, 8.467 ± 0.903 s for bitewings, and 5.653 ± 0.897 s for periapical radiographs on standard clinical hardware. Conclusions: ORTHOSEG demonstrates efficiency suitable for integration into routine workflows. This study confirms ORTHOSEG’s reliability and potential to improve diagnostic workflows, offering clinicians a valuable tool for faster and more detailed radiograph analysis. Future research will focus on extending validation across diverse clinical scenarios to ensure broader applicability. However, this study has limitations, including the use of a dataset derived from a European population and the absence of usability and clinical workflow evaluation, which should be addressed in future studies. Full article
(This article belongs to the Special Issue Clinical Outcome Research in the Head and Neck: 2nd Edition)
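
The mDSC and mIoU values quoted above are averages of the standard overlap metrics computed per segmented element; a minimal sketch of how Dice and IoU are obtained for a single binary mask follows (toy masks, not ORTHOSEG outputs).

import numpy as np

def dice_and_iou(pred, truth):
    # Dice similarity coefficient and intersection-over-union for binary masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    dice = 2.0 * intersection / denom if denom else 1.0
    iou = intersection / union if union else 1.0
    return float(dice), float(iou)

# Toy 2-D masks standing in for one segmented element on a radiograph.
pred = np.zeros((64, 64), dtype=np.uint8);  pred[10:40, 10:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:45, 12:42] = 1
print(dice_and_iou(pred, truth))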

12 pages, 623 KB  
Article
Noninvasive Assessment of Hepatic Steatosis in Living Liver Donors
by Iman Al-Saleh, Hamad Alashgar, Ali Albenmousa, Ruba Alsaeed and Madiha Jamal
Diagnostics 2026, 16(5), 772; https://doi.org/10.3390/diagnostics16050772 - 4 Mar 2026
Viewed by 217
Abstract
Background & Aims: The accurate, noninvasive assessment of hepatic steatosis is essential in living liver donor evaluation, where disease prevalence is low, and donor safety is paramount. This study evaluated commonly used noninvasive diagnostic tools for detecting hepatic steatosis in a real-world donor screening setting. Methods: We analyzed 108 living liver donor candidates (18–53 years) with complete MRI, CT, transient elastography (FibroScan®), and biochemical data obtained during routine donor evaluation. Hepatic steatosis was defined as an MRI-proton density fat fraction (PDFF) ≥5%, which served as the noninvasive reference standard. Diagnostic performance metrics, receiver operating characteristic (ROC) analyses, and correlations with serum fibrosis indices (FIB-4 and APRI) were assessed. Results: MRI-PDFF identified hepatic steatosis in 21 donors (19.4%). Controlled attenuation parameter (CAP), measured by transient elastography, demonstrated high sensitivity (90.5%) and negative predictive value (97.1%), supporting its role as a rule-out screening tool. CT showed excellent specificity (97.7%) but lower sensitivity (61.9%), consistent with a confirmatory role when MRI is unavailable. Serum fibrosis indices were generally low and did not correlate strongly with imaging-based steatosis. Conclusions: In the low-prevalence setting of living liver donor evaluation, CAP-based transient elastography provides effective noninvasive screening for hepatic steatosis, while MRI-PDFF serves as a confirmatory reference when indicated. These findings support a stepwise, clinically practical diagnostic approach that prioritizes donor safety and workflow efficiency. Full article
(This article belongs to the Section Clinical Diagnosis and Prognosis)
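
The rule-out argument in this abstract rests on how negative predictive value behaves at low prevalence; a short worked sketch of that arithmetic follows. The sensitivity (90.5%) and prevalence (19.4%) are taken from the abstract, but the CAP specificity used here is an assumed placeholder, so the result is illustrative only.

def npv(sensitivity, specificity, prevalence):
    # Negative predictive value from test characteristics and disease prevalence
    # (Bayes' rule applied to the negative-test branch).
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# CAP sensitivity and steatosis prevalence from the abstract; specificity assumed.
print(round(npv(0.905, 0.80, 0.194), 3))  # a high NPV at low prevalence supports the rule-out role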

30 pages, 29830 KB  
Article
From Hematoxylin and Eosin to Masson’s Trichrome: A Comprehensive Framework for Virtual Stain Transformation in Chronic Liver Disease Diagnosis
by Hossam Magdy Balaha, Khadiga M. Ali, Ali Mahmoud, Ahmed Aboudessouki, Mohamed T. Azam, Guruprasad A. Giridharan, Dibson Gondim and Ayman El-Baz
Diagnostics 2026, 16(5), 764; https://doi.org/10.3390/diagnostics16050764 - 4 Mar 2026
Viewed by 268
Abstract
Background/Objectives: Virtual histological staining offers a rapid, cost-effective alternative to physical reprocessing but faces challenges related to spatial misalignment and staining heterogeneity between Hematoxylin and Eosin (H&E) and Masson’s Trichrome (MT) domains. This study develops a robust framework for H&E-to-MT virtual staining to enable accurate fibrosis assessment without additional tissue consumption. Methods: We propose a transformer-based generative adversarial network (TbGAN) supported by a multi-stage alignment pipeline (SIFT (scale-invariant feature transform) coarse alignment, ORB/homography patch registration, and B-spline free-form deformation) and a weighted fusion mechanism combining four configuration outputs (O/10/3, O/3/10, R/10/3, and R/3/10). The framework was validated on 27 whole-slide images (>100,000 aligned patches) through 24 independent experiments. Results: The fused approach achieved state-of-the-art performance: MI = 0.9815 ± 0.0934, SSIM = 0.7474 ± 0.0597, NCC = 0.9320 ± 0.0220, and CS = 0.9946 ± 0.0014. Statistical analysis confirmed enhanced stability through narrower interquartile ranges, fewer outliers, and tighter 95% confidence intervals compared to individual configurations. Qualitative assessment demonstrated preserved collagen morphology critical for fibrosis staging. Conclusions: Our framework provides a reliable, IRB-compliant solution for virtual MT staining that maintains high structural fidelity suitable for diagnostic support. It enables resource-efficient fibrosis quantification and supports integration into clinical digital pathology workflows without patient-specific recalibration. Full article
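
The SSIM and NCC figures reported above are standard full-reference similarity metrics between virtual and real MT patches; the sketch below shows how such values are typically computed (random toy patches, not study data; scikit-image is assumed for SSIM, and the NCC helper is a generic definition rather than the authors' code).

import numpy as np
from skimage.metrics import structural_similarity

def ncc(a, b):
    # Normalized cross-correlation between two image patches.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(1)
real_mt = rng.random((256, 256)).astype(np.float32)                     # stand-in for a real MT patch
virtual_mt = np.clip(real_mt + 0.05 * rng.standard_normal((256, 256)),  # stand-in for a generated patch
                     0.0, 1.0).astype(np.float32)

print(structural_similarity(real_mt, virtual_mt, data_range=1.0), ncc(real_mt, virtual_mt))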

25 pages, 1057 KB  
Review
Transforming Intracerebral Hemorrhage Care with Artificial Intelligence: Opportunities, Challenges, and Future Directions
by Qian Gao, Yujia Jin, Yuxuan Sun, Meng Jin, Lili Tang, Yuxiao Chen, Yutong She and Meng Li
Diagnostics 2026, 16(5), 752; https://doi.org/10.3390/diagnostics16050752 - 3 Mar 2026
Viewed by 453
Abstract
Spontaneous intracerebral hemorrhage (ICH) is associated with substantial mortality and morbidity. Current management paradigms rely heavily on the rapid interpretation of neuroimaging and clinical data, yet are frequently constrained by limitations in processing speed, diagnostic accuracy, and prognostic precision. Artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), offers transformative potential to circumvent these challenges across the entire continuum of ICH care. This comprehensive review synthesizes the rapidly evolving landscape of AI applications in ICH management. Through a systematic evaluation of recent literature, we examine studies focused on the development, validation, or critical appraisal of AI-driven technologies for ICH care. Our analysis encompasses automated neuroimaging, computer-assisted surgical navigation, brain–computer interfaces (BCIs), prognostic modeling, and fundamental research into disease mechanisms. AI has demonstrated performance comparable to that of clinical experts in automating hematoma segmentation, predicting complications such as hematoma expansion, and refining surgical planning via augmented reality. Furthermore, BCIs present innovative therapeutic avenues for motor rehabilitation. However, the translation of these technological advances into routine clinical practice is impeded by substantial challenges, including data heterogeneity, model opacity (“black-box” issues), workflow integration barriers, regulatory ambiguities, and ethical concerns surrounding accountability and algorithmic bias. The integration of AI into ICH care signifies a paradigm shift from standardized treatment protocols toward dynamic, precision medicine. Realizing this vision necessitates interdisciplinary collaboration to engineer robust, generalizable, and interpretable AI systems. Key priorities include the establishment of large-scale multimodal data repositories, the advancement of explainable AI (XAI) frameworks, the execution of rigorous prospective clinical trials to validate efficacy, and the implementation of adaptive regulatory and ethical guidelines. By systematically addressing these barriers, AI can evolve from a mere analytical tool into an indispensable clinical partner, ultimately optimizing patient outcomes. Full article
(This article belongs to the Special Issue Cerebrovascular Lesions: Diagnosis and Management, 2nd Edition)

17 pages, 277 KB  
Review
Artificial Intelligence Methods in Cephalometric Image Analysis—A Systematic Narrative Review
by Katarzyna Zaborowicz, Maciej Zaborowicz, Katarzyna Cieślińska and Barbara Biedziak
J. Clin. Med. 2026, 15(5), 1920; https://doi.org/10.3390/jcm15051920 - 3 Mar 2026
Viewed by 259
Abstract
Background: The dynamic development of information technologies, particularly in the fields of computer image analysis and artificial intelligence (AI) algorithms, plays an increasingly important role in orthodontic diagnostics. Cephalometric images constitute a fundamental element in orthodontic treatment planning. They contain encoded information related to the assessment of craniofacial growth and development, which is the focus of algorithms employing machine learning and process automation. Objectives: The aim of this paper is to present the current state of knowledge regarding the application of artificial intelligence methods in cephalometric image analysis, with particular emphasis on studies published between 2020 and 2025 in the Scopus and Web of Science databases. Results: Twenty key studies were included. The most commonly used models were convolutional neural networks (CNN), You Only Look Once (YOLO), Bayesian convolutional neural networks (BCNN), artificial neural networks (ANN), stacked hourglass networks, and Deep Neural Patchworks (DNP). In landmark detection tasks, the average location errors ranged from 1 to 2 mm compared to expert annotations, remaining within clinically acceptable limits. YOLO- and CNN-based systems achieved accuracy comparable to that of experienced orthodontists, while BCNN models additionally provided uncertainty estimates that improved clinical interpretability. In classification tasks, artificial neural network (ANN) models assessing cervical vertebral maturity (CVM) achieved an accuracy of up to 95%. In screening studies prior to orthognathic surgery, a multilayer perceptron combined with a regional convolutional neural network achieved 96.3% agreement with expert decisions. Conclusions: AI-based tools provide clinically acceptable accuracy in cephalometric analysis, with landmark detection errors typically ranging from 1 to 2 mm compared to expert assessment. These systems improve repeatability and significantly reduce analysis time, especially when used in semi-automated workflows. AI-based assessment of cervical vertebral maturity and surgical eligibility shows high agreement with expert decisions, confirming their role as reliable tools to support clinical decision-making. Nevertheless, broader validation in different patient populations is necessary before routine clinical implementation. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
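
The 1-2 mm landmark errors cited above refer to the mean radial (Euclidean) error between predicted and expert-annotated landmark positions; a minimal sketch with made-up coordinates illustrates the computation, judged in practice against the ~2 mm clinical acceptability threshold mentioned in the abstract.

import numpy as np

def mean_radial_error(pred_mm, truth_mm):
    # Mean Euclidean distance (mm) between predicted and expert landmark positions.
    return float(np.linalg.norm(pred_mm - truth_mm, axis=1).mean())

truth = np.array([[45.2, 88.1], [102.7, 60.4], [77.5, 131.0]])           # expert annotations (mm)
pred = truth + np.random.default_rng(2).normal(0.0, 1.2, truth.shape)    # simulated detector output
print(round(mean_radial_error(pred, truth), 2), "mm")
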
26 pages, 3326 KB  
Article
Designing an ICT-Based Digital Transformation Roadmap for Administrative Process Optimization in a Municipal Public Utility
by Oscar Moncayo Carreño, Cristian Zambrano-Vega, Byron Oviedo and Betty Briones Gavilanez
Systems 2026, 14(3), 270; https://doi.org/10.3390/systems14030270 - 3 Mar 2026
Viewed by 339
Abstract
Digital transformation in public institutions is increasingly understood as a socio-technical and organizational process rather than a purely technological upgrade. This study presents the design of an ICT-based digital transformation roadmap aimed at improving administrative efficiency and citizen service delivery in a municipal public utility in Ecuador. A mixed-methods diagnostic approach was adopted, combining qualitative evidence from direct observation and a semi-structured interview with the head of the IT department, and quantitative data from a structured online survey administered to citizens. Baseline Key Performance Indicators (KPIs) were established using institutional records, service logs, and workflow analysis conducted over a three-month diagnostic window. Post-implementation KPI values are explicitly treated as ex ante projections, derived from process redesign analysis, benchmarking with comparable public utilities, and scenario-based assumptions, rather than empirically observed outcomes. The empirical results demonstrate high citizen readiness and acceptance of proposed digital services, including remote service portals, electronic invoicing, and automated support channels. The projected operational improvements—such as reductions in response and administrative processing times and increased digital transaction rates—are therefore presented as expected performance scenarios. A risk and alternative scenario analysis further examines how organizational constraints, resource availability, governance capacity, and change-management factors may moderate these outcomes. The study contributes a transparent and replicable framework for diagnosing digital readiness and planning ICT-driven transformation initiatives in resource-constrained public utilities, while emphasizing the need for future longitudinal validation using post-implementation data. Full article
